Paper deep dive
Through the Looking-Glass: AI-Mediated Video Communication Reduces Interpersonal Trust and Confidence in Judgments
Nelson Navajas Fernández, Jeffrey T. Hancock, Maurice Jakesch
Abstract
AI-based tools that mediate, enhance or generate parts of video communication may interfere with how people evaluate trustworthiness and credibility. In two preregistered online experiments (N = 2,000), we examined whether AI-mediated video retouching, background replacement and avatars affect interpersonal trust, people's ability to detect lies and confidence in their judgments. Participants watched short videos of speakers making truthful or deceptive statements across three conditions with varying levels of AI mediation. We observed that perceived trust and confidence in judgments declined in AI-mediated videos, particularly in settings in which some participants used avatars while others did not. However, participants' actual judgment accuracy remained unchanged, and they were no more inclined to suspect those using AI tools of lying. Our findings provide evidence against concerns that AI mediation undermines people's ability to distinguish truth from lies, and against cue-based accounts of lie detection more generally. They highlight the importance of trustworthy AI mediation tools in contexts where not only truth, but also trust and confidence matter.
Links
- Source: https://arxiv.org/abs/2603.18868v1
- Canonical: https://arxiv.org/abs/2603.18868v1
Full Text
Nelson Navajas Fernández (ORCID 0009-0009-4298-9658), Bauhaus University, Weimar, Germany; Jeffrey T. Hancock (ORCID 0000-0001-5367-2677), Stanford University, Stanford, United States; Maurice Jakesch (ORCID 0000-0002-2642-3322), Bauhaus University, Weimar, Germany (2026). License: CC BY-NC-ND.
Keywords: AI-mediated communication, video filters, deception detection, trust, credibility, avatars, experiments

Published in: Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26), April 13–17, 2026, Barcelona, Spain. DOI: 10.1145/3772318.3790845. ISBN: 979-8-4007-2278-3/2026/04. CCS concepts: Human-centered computing (Empirical studies in collaborative and social computing; Interaction design theory, concepts and paradigms); Computing methodologies (Artificial intelligence).

1. Introduction

Figure 1. Study conditions and primary outcome variables. Participants watched six videos in which video subjects recounted a story about someone they knew that was either true or false. In the control condition, we embedded the original video in a video call to increase realism. In the weak AI-mediated treatment, we further processed the video using retouching and a virtual background. In the strong AI-mediated treatment, we replaced the subject with an animated avatar. For each video, participants indicated whether they trusted the person in the video, whether they thought the person in the video was lying, and how confident they were in their judgment.
In 2021, a lawyer joined a virtual court hearing in Texas, to everyone’s surprise, appearing as a wide-eyed kitten (News, 2021). “I’m here live. I’m not a cat,” he clarified, excusing himself with a kitty-worried expression, blinking at the judge. When such mishaps make the news, these incidents demonstrate how deeply AI communication systems can disrupt the expectations and assumptions people hold about mediated communication, particularly in high-stakes contexts where what is said has far-reaching consequences. However, most AI-based transformations we use in our communication today are less obvious than a cat avatar. With the shift to remote communication accelerated by the COVID-19 pandemic (Vargo et al., 2021a; Shockley et al., 2021), video communication platforms are increasingly used not only in casual but also in professional and high-stakes settings (Mann et al., 2020; Döring et al., 2022a; O’Conaill et al., 1993; Jacks, 2021). Platforms such as Zoom, Google Meet and Microsoft Teams integrate a wide range of algorithmic video enhancements and transformations. The available features range from background blurring and replacement to skin improvements, gaze correction and personalized avatars that resemble the speaker. AI-based video features are widely used and broadly regarded as acceptable (Döring et al., 2022b; Javornik et al., 2022; Zoom, 2023). While often presented as convenience features or aesthetic improvements (Zoom, 2023), they may alter aspects of communication that are central to impression formation and judgment (Hancock et al., 2020; Hohenstein and Jung, 2020; Jakesch et al., 2019).
As AI-mediated video tools become widely deployed, we need to better understand their potential to shape people’s impressions and judgments (Hancock et al., 2020; Hancock and Bailenson, 2021) as well as trust (Jakesch et al., 2019; Ma et al., 2017), honesty (Hohenstein and Jung, 2020; Leib et al., 2024; Suen and Hung, 2024) and credibility (Kim et al., 2022) in online video communication. Indeed, previous research in computer-mediated communication (CMC) shows that the medium of interaction can significantly shape how we present ourselves and how others perceive us (Hancock et al., 2010). In video communication, many cues that people rely on in face-to-face settings, such as posture, eye contact and microexpressions (Connelly et al., 2011), are missing or obscured. Even in traditional computer-mediated communication, judging the honesty, credibility and trustworthiness of others is a difficult task (Bond and DePaulo, 2006; DePaulo et al., 2003; Levine, 2014). Because such evaluations shape how people interpret and respond to others in everyday communication (Levine, 2014; Vrij, 2000), preserving people’s ability to evaluate others in AI-mediated communication remains essential. The introduction of AI-based video tools may affect the ways people form impressions of each other and assess credibility, authenticity and trustworthiness (Hohenstein and Jung, 2020; Jakesch et al., 2019; Kim et al., 2022). Hancock et al. conceptualized the relevant changes under the framework of AI-mediated communication (AI-MC) (Hancock et al., 2020): a paradigm shift in computer-mediated communication where a computational agent modifies, augments or generates message content on behalf of a communicator. 
Prior work in text-based contexts has shown that a mediating AI system that modifies or creates communication can erode interpersonal trust (Jakesch et al., 2019) and alter how receivers interpret intentions, credibility and agency (Glikson and Asscher, 2023; Mieczkowski et al., 2021b; Purcell et al., 2024). In video communication—the context of the current study—interactions are more dynamic and perceptually rich than text, further complicating assessments of how the AI tools integrated into widely used platforms (Zoom, 2023) may affect judgments of trust, honesty and credibility. This study investigates how different levels of AI-mediated video communication—ranging from original recordings to weak AI mediation through retouching and virtual backgrounds to strong AI mediation through animated avatars—affect the perceived trustworthiness of the speaker, judgments of truth and confidence in these judgments. In two large online experiments (N = 2,000), participants viewed prerecorded videos of others making truthful or deceptive statements. As illustrated in Figure 1, we processed the video stimuli through the integrated AI video features of Microsoft Teams to reflect different degrees of AI mediation: in the (1) control condition, the videos were unaltered, corresponding to regular computer-mediated communication; in the (2) weak AI mediation condition, we enabled skin smoothing, lighting adjustment and virtual backgrounds, corresponding to widely used video transformations; in the (3) strong AI mediation condition, we transformed the speaker into a fully-animated character (avatar) to test the effect of strong AI-based video transformation. In addition to a uniform communication setting in which participants rated six videos of the same mediation type in Study 1, participants in Study 2 encountered different types of AI mediation in a mixed environment, more closely resembling real-world interactions.
After watching each video, we asked participants how much they trusted the person and whether the person was telling the truth. They also rated their confidence in their judgments and answered follow-up questions about the cues they relied on in their judgments. Across both studies, we found that AI-mediated video did not affect deception detection accuracy or participants’ overall likelihood of suspecting the other person of lying. However, it consistently reduced perceived trustworthiness and lowered participants’ confidence in their judgments, particularly in the mixed environment (Study 2). The results suggest that AI-mediated video processing meaningfully affects how people evaluate others in online video communication, even when it does not alter their ability to distinguish truths from lies. Our findings have implications for platform design and for policy debates, as they show how AI mediation shapes people’s evaluations of others in online interactions. Even common tools such as avatars and retouching influence how people form trust, credibility and confidence in online interactions, underscoring the need for greater attention to representational consistency, transparency and the context-sensitive use of mediation features.

2. Related work

Our work is motivated by the growing integration of AI-mediated communication tools, such as video retouching and avatars, into everyday video communication (Nowak and Fox, 2018; Yi et al., 2026). Prior work in computer-mediated communication (Bos et al., 2002; Herring, 2002) has examined how reduced cues shape trust and impression formation, and deception research has documented the limits of people’s ability to detect lies (Bond and DePaulo, 2006) as well as truth-default rates (Levine, 2014).
We draw on this literature and combine it with recent work on AI-mediated communication (Hancock et al., 2020; Hohenstein and Jung, 2020; Jakesch et al., 2019), HCI studies of avatar-mediated communication, and theories of interpersonal judgment, to investigate how AI-mediated video processing affects deception detection, confidence and interpersonal trust.

2.1. Deception Detection

Accurately judging whether someone is being honest is essential in mediated interactions (Burgoon et al., 2003), where people must assess the reliability of information provided by others, such as in hiring conversations, interviews, collaborative online work and educational settings. In many contexts, judgments about honesty shape how people interpret, trust and respond to what is communicated (Bond and DePaulo, 2006; Levine, 2014). At the same time, decades of research show that deception detection is difficult (DePaulo, 1985; Vrij, 2008): people are only slightly better than chance (54% on average) (Bond and DePaulo, 2006; DePaulo et al., 2003) at discerning lies from truth. People also have a strong general tendency to believe what others are saying, known as the “veracity effect” (Levine et al., 1999; Levine, 2014). Previous research has proposed two broad perspectives to explain why deception is complex: On the one hand, cue-based approaches posit that liars reveal themselves through nonverbal cues, so-called “leakage”, such as gaze aversion or microexpressions, and that one can discern lies from truth by observing those nonverbal cues (Ekman and Friesen, 1974; Ekman, 1997; Ekman and Friesen, 1969). However, meta-analyses show that deception cues are inconsistent across studies and are therefore weak and unreliable indicators of deception (DePaulo et al., 2003). Nonetheless, people still rely on them to form their deception judgments (DePaulo et al., 2003).
In contrast, context-based perspectives, such as Levine’s truth-default theory (TDT), argue that people rely primarily on the plausibility and coherence of what is said and default to believing others unless suspicion is actively triggered, which explains the “veracity effect” (Levine, 2014). Cue-based theories and Levine’s truth-default theory yield different expectations regarding how the disruption or removal of nonverbal cues in AI-mediated communication might influence deception judgments. If deception cues are central to lie detection, AI systems that modify them could alter truth judgment rate or accuracy. In contrast, if judgments are driven mainly by plausibility and coherence, the AI-based disruption or removal of other elements of the communication may have little to no effect. Our work extends prior research on deception detection and AI-mediated communication by introducing AI-mediated videos into established deception stimuli (Lloyd et al., 2019). Despite extensive research on deception detection, no work has examined how everyday AI video tools, such as retouching or avatars, affect our judgments of deception. It also remains unclear how these tools influence our confidence in those judgments and our perceptions of trustworthiness. Particularly, if AI mediation removes or distorts the nonverbal cues that cue-based theories consider central for deception detection, AI-mediated video could make deceptive statements harder to identify. By systematically comparing judgments of videos with varying degrees of AI-mediated content, we provide empirical evidence on how AI mediation in video shapes accuracy and interpersonal dynamics in online video communication.

2.2. Trust in Mediated Communication

Trust and belief are not the same psychological processes (Holton, 1994): while belief is a cognitive judgment about whether a claim is true, trust is a relational, affective stance toward a speaker (Holton, 1994; Hardin, 2002), which is a fundamental precondition for effective human cooperation and interaction (Jones and George, 1998; Mayer et al., 1995). High levels of trust enable conflict resolution, problem solving, and fluency in interaction (Edmondson, 1999; Simons and Peterson, 2000; Zand, 1997), while low trust undermines learning and collaboration (Edmondson, 1999; Kiffin-Petersen and Cordery, 2003). However, in computer-mediated communication, establishing trust is more difficult than in face-to-face settings (Hill et al., 2009; Ridings et al., 2002; Rocco, 1998). Research shows that trust tends to start at a lower baseline in computer-mediated settings and is harder to establish when nonverbal cues are missing (Bos et al., 2002; Wilson et al., 2006). Computer-mediated communication enables people to carefully control how they present themselves, and the related reduction in and manipulation of social cues complicates how trust and credibility are assessed (Ellison and Hancock, 2013; Ert et al., 2016; Eslami et al., 2016; Gillespie et al., 2014; Hancock and Guillory, 2015; Hancock et al., 2007; Herring, 2002). The COVID-19 pandemic accelerated the adoption of video communication tools for both casual (Herman et al., 2025; Javornik et al., 2022; Vargo et al., 2021b) and high-stakes settings, including hiring (McCarthy et al., 2021; Mujtaba and Mahapatra, 2025), telemedicine (Bokolo Anthony Jnr., 2020; Lukas et al., 2020; Mann et al., 2020; Sharma et al., 2023), online exams (Westerlund, 2019) and legal proceedings (Remus and Levy, 2017; Vargo et al., 2021b).
In high-stakes environments, not only does the accuracy of deception judgments matter, but also how credibility and trust are experienced and processed, so understanding how AI-mediated video alters these socio-psychological processes is essential (Hancock et al., 2020). The integration of artificial intelligence into computer-mediated communication as an additional layer of mediating technologies further increases the sender’s control over how they present themselves while potentially complicating judgments on the receiver’s side (Hancock et al., 2020). Previous work on AI-mediated communication (Hancock et al., 2020) has shown that, in written contexts, algorithmic modifications complicate how receivers evaluate authenticity and can erode trust and credibility (Hancock et al., 2020; Hohenstein and Jung, 2020, 2018; Jakesch et al., 2019, 2023; Walther et al., 2005). This effect is particularly strong in settings where there is a mix of human and AI-generated content, where people start to second-guess others’ authenticity, a behavior termed the Replicant Effect (Jakesch et al., 2019). We currently do not understand the extent to which these effects apply to the more dynamic and complex medium of video communication. Research on AI-mediated video so far has focused on the comparatively extreme cases of generated deceptive video such as deepfakes (Ba et al., 2024; Habbal et al., 2024; Hancock and Bailenson, 2021; Hwang et al., 2021). Here, AI-generated videos are becoming highly realistic and challenging to distinguish from real footage, so people struggle to tell them apart, which leads to increased uncertainty and reduced trust (Hwang et al., 2021; Popa et al., 2025; Westerlund, 2019; Twomey et al., 2023). 
While deepfakes highlight the risks of realistic synthetic video (Hancock and Bailenson, 2021; Popa et al., 2025), much less is known about the impacts of commercial everyday forms of AI mediation in video communication, such as retouching, background replacements or synthetic avatars (Boyle et al., 2000), offered through widely used video communication platforms like Zoom, Google Meet and Microsoft Teams.

2.3. Theoretical Mechanisms: Expectancy Violations and Uncertainty Reduction

We draw on two theoretical perspectives to contextualize a possible decrease in trust in AI-mediated communication. Expectancy Violations Theory (EVT) (Burgoon, 2015) argues that people have internalized assumptions about how others should look and behave in social interaction. When visual or behavioral cues deviate from these internalized expectations in mediated communication—such as when facial features are smoothed, backgrounds are replaced or a speaker is replaced by an animated avatar—people may perceive the interaction as less natural or less aligned with normative social scripts, which can reduce perceived trustworthiness (Grimes et al., 2021; Hohenstein and Jung, 2020; Hong et al., 2021; Lew and Walther, 2023; Rheu et al., 2024). The decrease in trust may be intensified in settings where some people use AI tools and others do not, as differences between mediated and unmediated representations become more salient and thus more likely to violate expectations (Burgoon, 2015; Panda et al., 2022). Uncertainty Reduction Theory (URT) offers a complementary perspective on how AI-mediated video may influence confidence in social judgments. Uncertainty Reduction Theory posits that uncertainty in interpersonal encounters is uncomfortable, and that people are motivated to gather information about others to reduce uncertainty and predict others’ behavior, attitudes, and intentions (Berger and Bradac, 1982; Berger and Calabrese, 1975).
The motivation to reduce uncertainty is heightened in ambiguous interactions, in which people lack access to the full range of interpersonal cues they typically rely on to minimize uncertainty about another person. In mediated communication, viewers rely on visible signals—such as facial expression, gaze direction, and the timing of responses—to reduce ambiguity about a speaker’s attitudes or intentions (Baxter and Braithwaite, 2008; Berger and Bradac, 1982; Berger and Calabrese, 1975; Parks and Adelman, 1983). When AI-mediated video alters, smooths or obscures visual cues that could reduce uncertainty in the interaction—e.g., by removing micro-expressions, modifying gaze or reducing facial detail—people might have less information with which to form impressions, increasing uncertainty and lowering their confidence in their own judgments.

2.4. Avatar-Mediated Communication in HCI

Avatars are digital representations of users that may be abstract, cartoonish, or human-like. Avatars can be used in online video communication when users prefer privacy, cannot use a camera (e.g., bandwidth or multitasking), or want more control over their visual presentation (Nowak and Fox, 2018). The growing integration of avatars into everyday platforms has motivated HCI researchers to examine how such representations affect presence, expressiveness, and interpersonal evaluation in online meetings (Panda et al., 2022). A central finding is that high-fidelity avatars are generally more trusted, that is, those avatars that better reflect the real person and their real movements are seen as more trustworthy (Panda et al., 2022). Ma et al. (Ma et al., 2025) find that a critical factor for meeting outcomes is avatar motion fidelity rather than mere visual realism. Webcam-driven head and facial movement supports comfort, emotional clarity, and smooth interaction, while static or synthetic animations make avatars harder to read and increase cognitive load (Ma et al., 2025).
In mixed environments particularly, where some people appear via video and others via avatar, the synthetic nature of the avatars becomes more salient (Junuzovic et al., 2012; Panda et al., 2022) and people may begin to distrust the avatars (Panda et al., 2022). Studies have also shown that in professional settings, avatars can interfere with expectations about workplace appropriateness (Inkpen and Sedlins, 2011) and that low-fidelity avatars often fall short in conveying facial reactions, gaze and turn-taking cues, leading to lower ratings of professionalism (Junuzovic et al., 2012). Research on HCI and avatar-mediated communication highlights that differences in visual fidelity, motion fidelity, and access to nonverbal cues shape how people evaluate others in remote meetings, influencing perceptions of professionalism (Inkpen and Sedlins, 2011), comfort (Ma et al., 2025), and trust (Panda et al., 2022). However, how such transformations affect judgments of truth or credibility remains an open question. Moreover, the current HCI literature has focused primarily on highly stylized avatar representations (Panda et al., 2022; Ma et al., 2025) and has not examined how less sophisticated forms of AI mediation, such as virtual backgrounds, lighting adjustments, or skin retouching, affect people’s judgments.

3. Methods

To investigate how AI-mediated video, such as retouching, virtual backgrounds or synthetic avatars, affects people’s judgments of truthfulness and trust in online video communication, we conducted an experiment simulating an online videoconferencing environment. This section provides details on our experimental design, stimuli, procedure, measurements and recruitment.

3.1. Hypotheses and Study Design

Our study design is guided by the larger research question: “Does AI-mediated video communication disrupt or interfere with deception judgments and interpersonal trust in online communication?” Based on prior research on deception detection and AI-mediated and avatar-mediated communication, we formulated five hypotheses about how mediating AI systems might affect people’s ability to detect lies, confidence in their judgments and trust in others. For each hypothesis, we outline the relevant theoretical mechanisms that motivate the predictions and their direction. Trust is a central component of interpersonal evaluation in mediated communication (Hill et al., 2009; Ridings et al., 2002; Rocco, 1998). Here, prior CMC research shows that the reduction or distortion of interpersonal cues can decrease perceived trustworthiness (Bos et al., 2002; Wilson et al., 2006). Expectancy Violations Theory (EVT) (Burgoon, 2015) suggests that visual changes such as avatars or retouching filters may reduce trust when they deviate from what people anticipate as “normal” in a video-call setting. At the same time, research on avatar-mediated communication shows that some mediated representations—particularly those perceived as appropriate or expressive—can be evaluated positively (Inkpen and Sedlins, 2011). As prior work indicates that trust could increase or decrease depending on the nature and quality of AI mediation, we formulate the following non-directional hypothesis: H1 Interpersonal Trust: AI-mediated video affects the perceived trustworthiness of the speaker. Truth judgment rate refers to how often viewers judge statements as true. It is a central outcome in deception detection research (Bond and DePaulo, 2006). Cue-based perspectives propose that the cognitive effort of lying produces micro-expressions that the receiver can observe to detect deception (Ekman, 1997).
If AI mediation reduces access to nonverbal cues, for example, by replacing the speaker with a synthetic avatar, viewers may judge statements as more truthful because they fail to detect cues of deception. In contrast, Levine’s truth-default theory argues that people generally default to judging statements as true unless suspicion is triggered (Levine, 2014). AI mediation could increase uncertainty or suspicion and result in breaking out of that truth-default state, which would reduce truth judgment rate; that is, people would believe the AI-mediated speaker less often, so we hypothesize that: H2 Truth judgment rate: AI-mediated video affects the rate at which participants judge what is said as true. In deception detection research, humans achieve only slightly better-than-chance accuracy (54%) at correctly classifying veracity judgments (Bond and DePaulo, 2006). Similar to truth judgment rate (H2), cue-based theories might predict that removing or disrupting visual cues used to detect deception could impair people’s ability to make accurate truth-lie judgments, thereby reducing accuracy. In contrast, Levine’s truth-default theory predicts that accuracy could improve when people rely less on nonverbal cues and more on message content (Levine, 2014). Reducing visible cues under AI mediation could strengthen the intuition to shift to content-based information rather than visual cues, thereby increasing accuracy. Furthermore, meta-analyses show that accuracy often remains unchanged regardless of viewing conditions (Bond and DePaulo, 2006; DePaulo et al., 2003), which would predict a flat performance in accuracy across all levels of AI mediation. We hypothesize: H3 Judgment accuracy: AI-mediated video affects the rate at which participants judge truths as truths (truth accuracy, true positives) and lies as lies (lie accuracy, true negatives). Confidence in judgments is a subjective metric that captures how sure people feel about their judgments (DePaulo et al., 1997a). 
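The outcome variables named in H2 and H3 can be made concrete with a small sketch. The helper below is purely illustrative (the function name and data are invented, not taken from the paper's analysis code): it computes the truth judgment rate, truth accuracy (true positives) and lie accuracy (true negatives) from binary veracity judgments.

```python
# Illustrative sketch of the H2/H3 outcome measures; names and data are
# hypothetical, not from the paper's analysis code.

def judgment_metrics(truths, judgments):
    """truths[i], judgments[i] in {"truth", "lie"} for each rated video."""
    pairs = list(zip(truths, judgments))
    n = len(pairs)
    # H2: truth judgment rate -- how often viewers say "truth" overall.
    truth_judgment_rate = sum(j == "truth" for _, j in pairs) / n
    # H3: truth accuracy (true positives) and lie accuracy (true negatives).
    truth_items = [j for t, j in pairs if t == "truth"]
    lie_items = [j for t, j in pairs if t == "lie"]
    truth_accuracy = sum(j == "truth" for j in truth_items) / len(truth_items)
    lie_accuracy = sum(j == "lie" for j in lie_items) / len(lie_items)
    overall = sum(t == j for t, j in pairs) / n
    return truth_judgment_rate, truth_accuracy, lie_accuracy, overall

# Six ratings, as in the study design: three truthful, three deceptive videos.
truths = ["truth", "truth", "truth", "lie", "lie", "lie"]
judgments = ["truth", "truth", "lie", "truth", "lie", "lie"]
print([round(x, 2) for x in judgment_metrics(truths, judgments)])
# → [0.5, 0.67, 0.67, 0.67]
```

Separating truth accuracy from lie accuracy matters because, under the veracity effect, a higher truth judgment rate mechanically raises truth accuracy while lowering lie accuracy.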
Uncertainty Reduction Theory (URT) (Berger and Bradac, 1982; Berger and Calabrese, 1975) suggests that when familiar interpersonal cues (e.g., facial expressions) are reduced or altered, uncertainty increases, which may lower people’s confidence in their own judgments in AI-mediated interactions. At the same time, prior avatar and mediated-communication studies suggest that when representations are appropriate or easy to interpret, people may feel more certain in their evaluations (Panda et al., 2022; Ma et al., 2025). As prior lines of work predict both decreases and increases in confidence, we propose the following non-directional hypothesis: H4 Judgment confidence: AI-mediated video affects participants’ confidence in their deception judgments. In real-world interactions, people may often encounter a mix of original video communications interspersed with retouch effects, virtual backgrounds and avatars. Such mixed environments may increase the salience of the AI mediation and may affect how participants react to the use of AI (Jakesch et al., 2019). Prior HCI and avatar-mediation work shows that differences in environment influence how people evaluate one another (Panda et al., 2022). From a theoretical perspective, Expectancy Violations Theory (EVT) (Burgoon, 2015) suggests that a mixed environment where people communicate with different levels of AI mediation alongside original video representations may reinforce both expectation violations and a sense of uncertainty. We hypothesize: H5 Interaction with type of environment: The impact of AI-mediated video on accuracy, trust and confidence is stronger in settings where people see a mix of different types of AI mediation compared to settings where everyone uses the same AI tools. 
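To see numerically what the H5 interaction predicts, here is a minimal difference-in-differences sketch on made-up trust ratings. The paper's preregistered models are not described in this section, so this is purely illustrative: all numbers and variable names are invented.

```python
# Hypothetical illustration of the H5 interaction: does the trust penalty
# for strong AI mediation grow in mixed environments? All values invented.

def mean(xs):
    return sum(xs) / len(xs)

# Mean trust ratings (1-5 scale) per design cell; values are made up.
trust = {
    ("uniform", "control"): [3.6, 3.8, 3.5],
    ("uniform", "avatar"): [3.3, 3.4, 3.2],
    ("mixed", "control"): [3.7, 3.6, 3.8],
    ("mixed", "avatar"): [2.9, 3.0, 2.8],
}

# Trust penalty of avatars within each environment type.
penalty_uniform = mean(trust[("uniform", "control")]) - mean(trust[("uniform", "avatar")])
penalty_mixed = mean(trust[("mixed", "control")]) - mean(trust[("mixed", "avatar")])

# The interaction contrast: H5 predicts penalty_mixed > penalty_uniform.
interaction = penalty_mixed - penalty_uniform
print(round(penalty_uniform, 2), round(penalty_mixed, 2), round(interaction, 2))
```

In a regression framing, this contrast corresponds to the coefficient on a treatment-by-environment interaction term; a positive value in real data would support H5.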
To test the above hypotheses empirically, we designed a study consisting of two experiments: a between-subjects experiment (Study 1) to test H1 to H4 in an environment where participants encounter a single type of AI mediation only; and a within-subjects experiment (Study 2), where participants encounter different types of AI mediation to test H5 in addition to H1 to H4. We preregistered the hypotheses together with the study design and analysis plan before data collection (“AI Video Filters and Deception Judgments – Study 1 & 2 Preregistration”, AsPredicted #239571, submitted 2025-07-23).

3.2. Stimuli and Experimental Treatments

We structured the experiment as two complementary studies conducted concurrently. In the experiments, participants evaluated six videos in a videoconferencing platform setting. We processed the videos with different levels of AI-based transformation, with video subjects telling true or fabricated stories about another person. We asked participants to judge whether video subjects were telling the truth or lying. We considered several video deception datasets for studying deception across different lie stakes. The Miami University Deception Detection Database (MU3D) (Lloyd et al., 2019) provides truthful and deceptive videos. The Bag-of-Lies dataset (Gupta et al., 2019) includes multimodal signals such as video, audio, and biometrics in low to medium-stakes laboratory settings. DOLOS (Guo et al., 2023) presents medium-stakes deception from incentivized game-show interactions with richly annotated audiovisual data. Other high-stakes datasets include courtroom trial recordings (Pérez-Rosas et al., 2015) and political deception videos (Walker et al., 2024), where consequences add complexity. For our present work, we used the Miami University Deception Detection Database (Lloyd et al., 2019), which contains webcam-recorded videos of 80 subjects, equally divided by race (black/white) and gender (male/female).
Each subject is featured in four videos, in which they either make a truthful or a deceptive statement about their social relationships, under a positive or negative valence. The dataset captures unscripted, conversational speech with direct camera eye contact and natural behavior, closely mimicking the dynamics of online video communication platforms and aligning well with the purpose of the current study. As the valence dimension is irrelevant to the current study and would have introduced unnecessary variation, we focused on positive-valence videos only, yielding a 160-video base set comprising 80 lies and 80 truths. We embedded these videos into a video communication platform, Microsoft Teams, and further processed them to reflect different levels of AI mediation (see Figure 1) in addition to the control condition:

T1 Control condition: In the control condition, participants saw the original, unaltered recording embedded in a Microsoft Teams interface.

T2 Weak AI mediation condition: In the weak AI mediation treatment, the videos were preprocessed with the skin smoothing, lighting adjustments and virtual backgrounds features offered by Microsoft Teams.

T3 Strong AI mediation condition: In the strong AI mediation treatment, we further processed the videos with Microsoft Teams’ avatar feature that replaced the person in the video with a fully synthetic representation of the speaker. For the treatment, 80 digital avatars were manually designed in the Microsoft Avatars App by the first author and a research assistant to resemble the speaker in the original video.

By using the integrated video-processing features of a video communication tool commonly used in professional settings, we can study comparatively common and realistic stimuli.
In contrast to previous work studying more extreme forms of AI-mediated video communication, such as deepfakes (Hancock and Bailenson, 2021; Twomey et al., 2023), our treatments provide insights into a communication setting encountered daily by millions of people. The calibration of our treatments was further informed by in-person testing and an initial pilot study with a subset of 16 videos from 8 video subjects and N = 100 participants.

3.3. Procedure and Measurements

Before beginning the study, all participants provided informed consent and read brief instructions explaining the main task. They also completed an attention-check question to confirm their understanding of the task before proceeding to the main task. The main task consisted of evaluating six short, prerecorded video clips (approximately 40 seconds long) (Lloyd et al., 2019), preprocessed according to the treatment conditions described above. Videos were balanced for veracity, ensuring that each participant viewed exactly three truthful and three deceptive statements. Additionally, we balanced video subjects across gender and race. Although we had multiple videos per speaker, we ensured that each participant saw each speaker at most once. Participants could watch each video only once: standard playback controls were available, but the replay option was disabled. We limited the number of videos to six per participant to balance sufficient exposure to each condition while minimizing participant fatigue. After each video, participants answered three questions:

O1 Veracity Judgment: “Do you think this person is lying or telling the truth?” (Binary choice: Lie / Truth)

O2 Judgment Confidence: “How confident are you in your judgment?” (5-point Likert scale: Not at all confident to Extremely confident)

O3 Trustworthiness: “How trustworthy does the person in the video seem?” (5-point Likert scale: Not at all trustworthy to Extremely trustworthy)

We adjusted the scale directions across participants to mitigate order bias.
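The balancing constraints described above (exactly three truths and three lies per participant, no repeated speaker) can be sketched as a simple sampling routine. This is an illustrative assumption about the assignment procedure, not the authors' published code, and it omits the additional gender and race balancing for brevity:

```python
# Sketch of the per-participant stimulus assignment: six distinct speakers,
# three truthful and three deceptive statements, shuffled. Assumes (as the
# dataset section describes) 80 speakers, each with one truthful and one
# deceptive positive-valence video.
import random

random.seed(7)

speakers = list(range(80))

def sample_video_set():
    chosen = random.sample(speakers, 6)      # six distinct speakers, no repeats
    veracity = ["truth"] * 3 + ["lie"] * 3   # balanced veracity
    random.shuffle(veracity)
    return list(zip(chosen, veracity))

videos = sample_video_set()
print(videos)
```

Balancing veracity per participant ensures the 50/50 base rate against which truth bias can later be measured.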
In addition to the three main outcome variables of judgment, judgment confidence and trustworthiness, participants also answered an open-ended exploratory question for the last video only, where they explained their judgment (“Why do you think this person is lying or telling the truth?”) and completed a multiple-choice question indicating what specific cues influenced their judgment (“Which of the following most influenced your judgment of whether the person was telling the truth?”). We allowed participants to select up to three cues from a set of eight cues we assembled based on a review of categories of cues reported in prior deception research: visual nonverbal cues, vocal/paraverbal cues, content-based cues and global demeanor (Levine; DePaulo et al., 2003). Finally, participants estimated their own judgment success (“Out of the six videos you watched, how many do you believe you correctly assessed?”). As a manipulation check, participants also indicated how many of the six videos they believed featured an avatar. They answered an open-ended exploratory question (“Did you notice anything unusual or artificial about the videos?”). The study concluded with demographic and exploratory questions about participants’ use of online video communication platforms and their experience with AI-based video features. After submitting demographic information, participants received a detailed debriefing statement explaining the study’s purpose.

3.4. Analysis Approach

We analyzed trust (H1), truth judgment rate (H2), accuracy (H3) and judgment confidence (H4) using separate linear mixed-effects models for each study, with AI mediation level as a fixed effect and random intercepts for participants to account for repeated measurements.
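The per-study analysis described above can be sketched with a linear mixed-effects model fit in statsmodels. The paper does not publish its analysis code, so the column names (`participant`, `mediation`, `trust`) and the synthetic data are illustrative assumptions; the model structure (mediation type as a fixed effect, a random intercept per participant) follows the text:

```python
# Minimal sketch: trust ~ mediation type, with a per-participant random
# intercept, on simulated data shaped like the study (six ratings each).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for p in range(200):                         # 200 simulated participants
    intercept = rng.normal(0, 0.05)          # participant-level random intercept
    for _ in range(6):                       # six videos each
        med = rng.choice(["control", "retouch", "avatar"])
        # Simulated trust with a small avatar penalty, as the paper reports
        trust = 0.51 + intercept - 0.08 * (med == "avatar") + rng.normal(0, 0.1)
        rows.append({"participant": p, "mediation": med, "trust": trust})
df = pd.DataFrame(rows)

model = smf.mixedlm("trust ~ C(mediation, Treatment('control'))",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.params)
```

The fixed-effect coefficients play the role of the β estimates reported in the results, with the control condition as the reference level.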
To test the effect of the environment on trust, truth judgment rate, accuracy, and confidence (H5), we fitted a combined mixed-effects model across both studies with a fixed-effect interaction between AI mediation level (weak or strong) and environment type (homogeneous or mixed). As a further robustness check, we also estimated an extended version of these models that included demographic and experience-related covariates: age, gender, education, race, English proficiency, prior experience with video tools, prior experience with AI tools, frequency of AI interaction, and general trust in AI. We report descriptive statistics of means and standard deviations alongside model estimates (β coefficients, 95% CIs, p-values). Hypotheses are evaluated based on the model results.

3.5. Recruitment

In Study 1 (between-subjects design), participants were randomly assigned to one of the treatment conditions and evaluated six videos of the same type. The between-subjects design isolates the impact of each manipulation, allowing for robust conceptual comparisons. In Study 2 (within-subjects design), participants viewed two videos per condition in random order, reflecting real-world variability in mediated communication and allowing us to capture how participants reacted to AI features in settings where some, but not all, people use them. To allow for valid comparisons across studies, participants for Study 1 and Study 2 were recruited concurrently from the same sample and randomly assigned to one of the two study designs. A total of 2,000 participants were recruited through Prolific (Palan and Schitter, 2018). Eligibility criteria required that participants be 18 years or older and be residents of the United States. We determined the sample size based on a bootstrapped power analysis conducted before data collection. Note that our sample differs from the preregistration, as we initially planned to recruit only 1,000 participants.
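The H5 interaction model crosses mediation type with environment type, which adds interaction columns to the design matrix. A minimal illustration of that parameterization (variable and level names are assumptions for illustration):

```python
# Expanding "mediation * environment" shows the interaction terms that the
# combined H5 model estimates: one per non-reference mediation level crossed
# with the non-reference (mixed) environment.
import pandas as pd
from patsy import dmatrix

df = pd.DataFrame({
    "mediation": ["control", "retouch", "avatar"] * 2,
    "environment": ["homogeneous"] * 3 + ["mixed"] * 3,
})
X = dmatrix(
    "C(mediation, Treatment('control')) * C(environment, Treatment('homogeneous'))",
    df,
)
interaction_cols = [c for c in X.design_info.column_names if ":" in c]
print(interaction_cols)
```

The two interaction coefficients correspond to the retouch-in-mixed and avatar-in-mixed terms whose estimates are reported in the results for H5.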
The initial power analysis, based on pilot study data, estimated the required sample size to achieve approximately 80% power to detect small changes (d = .2) in trust and confidence. After collecting an initial 1,000 answers, we realized that relevant effect sizes in accuracy (3-5%, corresponding to a Cohen’s d = .06 to .1) were substantially smaller than estimated, and the initially planned sample would have left us with variations in accuracy that were difficult to interpret. Increasing the sample size to N = 2,000 enabled us to detect larger accuracy differences (d = .1) with approximately 70% power. The larger sample should also have improved the robustness and interpretability of our overall study. As no changes in accuracy were detected with the increased sample, we see no risk of false positives. We compensated participants $2 for their participation, which, on average, took about 10 minutes, corresponding to a $12 hourly rate. To encourage attentive responding and to raise the stakes of the scenario, we offered participants a $2 bonus if they classified at least 5 videos correctly, doubling their base payment. 289 participants received the bonus payment. Participants ranged in age from 21 to 77 (M = 41, SD = 13.7, Median = 37). Male participants represented 61.3% and female participants represented 38.7% of the sample. 65.8% of the sample self-identified as White or Caucasian; 22.5% as Black or African American; 4.5% as Asian; 4.5% as Latino or Hispanic; and 2.7% as Indigenous, Middle Eastern, North African, or mixed race. Most participants were highly educated, with 42.3% holding a bachelor’s degree and 23.4% a master’s degree. The majority were native English speakers (90.1%), with the remainder reporting advanced or intermediate English proficiency.

4. Results

In this section, we present the empirical results from the experiments and analyze how AI-mediated video processing influenced interpersonal trust, judgment accuracy and confidence.
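The power reasoning above can be checked with a simulation. The authors bootstrapped pilot data; the sketch below instead draws from a normal model, which is a simplifying assumption, and it ignores the repeated-measures structure that pools six judgments per participant (and which raises the paper's reported power to roughly 70% for d = .1):

```python
# Simulation-based power check: at alpha = .05 with ~1,000 observations per
# group, a standardized difference of d = .1 is detected only a bit over
# half the time with an independent-samples test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulated_power(n_per_group, d, sims=400, alpha=0.05):
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(d, 1.0, n_per_group)
        if stats.ttest_ind(control, treatment).pvalue < alpha:
            hits += 1
    return hits / sims

power = simulated_power(1000, 0.1)
print(power)
```

This illustrates why the initially planned N = 1,000 left accuracy effects of a few percentage points hard to interpret: small d values demand very large samples.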
The results reveal that AI mediation affects perceptions of trust and the confidence with which people make judgments, but has limited effects on actual judgment accuracy or on the tendency to believe others.

Figure 2. AI mediation affects interpersonal trust, particularly in mixed environments. Average interpersonal trust by video mediation type and study design with 95% confidence intervals; N = 2,000 ratings per data point. Participants in the left panel (Study 1) rated six videos of the same mediation type, whereas participants in the right panel (Study 2) watched two videos of each mediation type in random order.

Interpersonal trust (H1): Figure 2 shows participants’ trust in the person in the video across different types of AI mediation. The left panel shows trust ratings in homogeneous environments (Study 1), where participants encountered only one type of mediation, with the different types of mediation shown on the x-axis. In the control condition on the far left, where participants saw only original videos without AI mediation, the video subjects received an average trust rating of 0.51 (SD = 0.265), corresponding to “moderately trustworthy”. Subjects using retouch effects received similar trust ratings (M = 0.504, SD = 0.263), while video subjects using avatar filters received slightly lower trust ratings (M = 0.485, SD = 0.257). We fitted a linear mixed model to predict reported trust by mediation type in Study 1 with a per-participant random intercept to account for the repeated measures design.
The effect of avatar-based mediation on trust is statistically significant and negative (β = -0.03, 95% CI [-0.05, -0.002], t(5797) = -2.15, p = .032). In contrast, the effect of the weak AI mediation condition is statistically non-significant (β = -0.006, 95% CI [-0.03, 0.02], t(5797) = -0.52, p = .604). We provide further details on the model in Table 1 in the Appendix. In Study 2, where participants encountered different types of AI mediation in a mixed environment (right panel in Figure 2), the effect of AI mediation on trust was more substantial. Video subjects in the original video, shown in the left column, received an average trust rating of 0.499 (SD = 0.264), similar to the trust ratings in the Study 1 control group. Video subjects who used retouches with virtual backgrounds received slightly lower trust ratings (M = 0.477, SD = 0.259), whereas video subjects who used avatars as the stronger AI mediation received substantially lower trust ratings (M = 0.419, SD = 0.264). We fitted a linear mixed model with a per-participant random intercept to predict reported trust by mediation type in Study 2. Compared to the control condition, the effect of AI mediation on perceived trust of the speaker in the mixed environment is statistically significant and negative for the retouch condition (M = 0.477, β = -0.02, 95% CI [-0.04, -0.006], t(6235) = -2.81, p = .005) and for avatars (M = 0.419, β = -0.08, 95% CI [-0.09, -0.07], t(6235) = -10.51, p < .001). We provide further details on the model in Table 2 in the Appendix. Next, we analyzed the extent to which the effects of different types of AI mediation differed across environments by comparing results from Study 1 and Study 2 (H5). We fitted a linear mixed model predicting reported trust, with mediation type and environment type as predictors, across Study 1 and Study 2, including a per-participant random intercept to account for the repeated-measures design.
The interaction term for the avatar condition was statistically significant and negative, indicating that the trust penalty for avatars was substantially larger in mixed environments than in homogeneous ones (β = -0.05, 95% CI [-0.08, -0.03], t(12034) = -4.05, p < .001). We found no reliable interaction for retouch videos (β = -0.02, 95% CI [-0.04, 0.01], t(12034) = -1.13, p = .257). The main effect of the environment was non-significant (β = -0.01, 95% CI [-0.03, 0.008], t(12034) = -1.14, p = .253), showing that baseline trust in the control condition did not differ between environments. We provide further details on the model in Table 3 in the Appendix. Overall, the results support the hypothesis that AI-mediated video influences perceived trustworthiness (H1). Considering the effect of the environment (H5), trust in avatar video subjects in mixed environments was significantly lower than in the homogeneous setting.

Figure 3. Truth judgment rates were unaffected by AI mediation. Percentage of times participants thought the person in the video told the truth, with 95% Wilson confidence intervals; N = 2,000 judgments per data point. Across both studies, participants exhibit a bias towards judging statements as true (57.6% to 61.5%) independent of AI mediation, while the stimuli dataset contained 50% truths and 50% lies.

Truth judgment rate (H2): Although we observed a reduction in trust through AI mediation, we did not observe changes in how often participants thought the person in the video was lying.
Figure 3 shows participants’ truth judgment rate, that is, how often participants indicated that they thought the person in the video was telling the truth. In the control condition, in which participants were shown the original video, they believed video subjects were telling the truth about 60% of the time (59.9% in Study 1 and 59.7% in Study 2). Note that this rate is significantly higher than the ground truth frequency shown in grey, aligning with other studies showing that people are truth-biased (Levine, 2014; Levine et al., 1999). However, we observed no significant difference in truth judgment rates when video subjects used retouches (58.1% and 57.6%) or avatar filters (61.5% and 57.7%) across both studies. Given our overall sample size, we would have expected to detect a change of about 3-5% most of the time. We fitted linear mixed models to predict the truth judgment rate by mediation type in Study 1 and in Study 2 with a per-participant random intercept. In neither study was the effect of avatar-based mediation on truth judgment rate statistically significant. The effect of weak AI mediation (retouch) on truth judgment rate is statistically non-significant in Study 1 (β = -0.02, 95% CI [-0.05, 0.01], t(5797) = -1.09, p = .278) and Study 2 (β = -0.02, 95% CI [-0.05, 0.009], t(6235) = -1.35, p = .176), as is the effect in the avatar condition for Study 1 (β = 0.02, 95% CI [-0.01, 0.05], t(5797) = 1.07, p = .284) and Study 2 (β = -0.02, 95% CI [-0.05, 0.01], t(6235) = -1.29, p = .190). Furthermore, we fitted a linear mixed model to predict the reported truth judgment based on mediation type and environment type across Study 1 and Study 2 with a per-participant random intercept. None of the interaction terms were statistically significant, indicating no effect of the type of environment (H5). We provide further details on the models in Tables 1, 2 and 3 in the Appendix.
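The figures report Wilson 95% confidence intervals for these judgment rates. A minimal computation for the roughly 60% truth-judgment rate in the control condition; the count is illustrative, scaled to the 2,000 judgments per data point stated in the caption:

```python
# Wilson score interval for a binomial proportion, as used in Figures 3, 4
# and 6. Example: 1,198 "truth" judgments out of 2,000 (59.9%).
from statsmodels.stats.proportion import proportion_confint

successes, n = 1198, 2000
low, high = proportion_confint(successes, n, alpha=0.05, method="wilson")
print(f"{successes / n:.3f} [{low:.3f}, {high:.3f}]")
```

The Wilson interval is preferred over the normal approximation for proportions because it behaves well even near 0 or 1 and never exceeds the [0, 1] range.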
We observed no differences in truth judgment rates across mediation types and environments. The results do not support H2 but are consistent with Levine’s truth-default theory, which posits that people have a consistent truth bias and default to believing others unless clear suspicion is triggered (Levine, 2014).

Figure 4. Judgment accuracy by AI mediation type and statement veracity. Percentage of times participants correctly identified a truth as a truth or a lie as a lie, with 95% Wilson confidence intervals; N = 1,000-2,000 judgments per data point. Participants identified about 60-65% of truths correctly, but only about 42-45% of lies. Except for truths in the avatar treatment in Study 1, AI mediation did not affect judgment accuracy.

Judgment accuracy (H3): Figure 4 shows participants’ judgment accuracy across conditions, that is, how often they rated truths as truths and lies as lies. The black data points at the center of the graph show the overall accuracy on the combined set of stimuli, half of which contained truths and half of which contained lies. The grey reference line shows the baseline accuracy that participants would have achieved by random responses (50%). In line with findings in related work, participants were slightly better than random at telling truths from lies, with 52.3% accuracy in the Study 1 control group and 54.1% in the control group of Study 2. This rate did not change significantly across videos with retouching (51.1% and 52.2%) or avatars (54.1% and 53.4%) in Study 1 and Study 2.
We fitted two linear mixed models to predict judgment accuracy by mediation type in Study 1 and Study 2, with a per-participant random intercept. Participants achieved higher accuracy on videos in which subjects told the truth (shown in blue), with 62.2% in the control condition in Study 1 and 64.1% in the control condition in Study 2. While this level of accuracy is significantly higher (β = 0.20, 95% CI [0.17, 0.23], p < .001) than the accuracies participants achieved on videos with lies (42.5% and 44.5%, respectively), this difference largely reflects the general truth bias in participants’ judgments observed above. In Study 2, we observed a non-significant reduction in accuracy in the retouch condition (59.9%, β = -0.02, 95% CI [-0.05, 0.01], t(6235) = -1.22, p = .224). We tested the moderating effect of the environment type (H5) by fitting a linear mixed-effects model predicting accuracy from mediation type, study environment and their interaction. The model showed no statistically significant effects of either retouch (β = -0.007, 95% CI [-0.08, 0.06], p = .844) or avatar mediation (β = 0.04, 95% CI [-0.03, 0.11], p = .238) relative to the control condition. The interaction terms assessing whether mediation effects differed between homogeneous and mixed environments were also non-significant for both retouch (β = -0.006, 95% CI [-0.05, 0.04], p = .791) and avatar conditions (β = -0.02, 95% CI [-0.07, 0.02], p = .270). We provide the full model details in Table 3 in the Appendix. Participants achieved 51–54.5% overall accuracy, corresponding to just 1–4.5% above chance. Consequently, only relatively large AI mediation effects that meaningfully improved or reduced this limited margin of accuracy would be detectable. Although our studies were adequately powered to detect changes of 3–5%, they were not sufficiently sensitive to reliably capture smaller effects of 1–2%.
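The relationship between truth bias and the accuracy asymmetry can be verified directly: with a 50/50 truth-lie base rate, overall accuracy is simply the mean of the per-veracity accuracies, so a truth bias inflates truth accuracy and deflates lie accuracy without by itself raising overall accuracy. Checking this against the Study 1 control-condition numbers from the text:

```python
# Veracity-effect arithmetic under a 50/50 base rate, using the reported
# Study 1 control-condition accuracies.
truth_accuracy = 0.622   # truths correctly judged as truths
lie_accuracy = 0.425     # lies correctly judged as lies

overall = (truth_accuracy + lie_accuracy) / 2
print(overall)           # close to the 52.3% overall accuracy reported
```

A pure truth bias with no discrimination ability (e.g., judging "truth" 60% of the time at random) would yield 60% truth accuracy, 40% lie accuracy, and exactly 50% overall.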
Our results show no meaningful differences in accuracy across mediation types (H3) or environments (H5), aligning with prior meta-analytic work on deception detection showing that human deception detection accuracy remains stable across viewing conditions and is only slightly above chance (Bond and DePaulo, 2006; DePaulo et al., 2003).

Figure 5. AI mediation affects judgment confidence, but only in mixed environments. Average reported confidence in judgment by video mediation type and study design with 95% confidence intervals; N = 2,000 ratings per data point. Participants encountering different types of AI-mediated content and original content in Study 2 (right panel) were less confident in their judgments, particularly for strong AI-mediated content.

Judgment confidence (H4): We fitted linear mixed models to predict reported judgment confidence by mediation type in Study 1 and Study 2 with a per-participant random intercept to account for the repeated measures design. While AI mediation did not affect participants’ judgment accuracy, it did affect their confidence in their judgments, as shown in Figure 5. In the control conditions, participants reported a mean confidence of 0.56 (SD = 0.258) in Study 1 and 0.57 (SD = 0.253) in Study 2, which falls between moderately confident (0.5) and very confident (0.75) on the Likert scale.
In Study 1, where participants encountered only one type of mediation, their confidence did not change significantly when they encountered videos with retouching (M = 0.57, SD = 0.255, β = 0.006, 95% CI [-0.02, 0.03], t(5797) = 0.44, p = .661) or avatar filters (M = 0.55, SD = 0.248, β = -0.01, 95% CI [-0.04, 0.02], t(5797) = -0.86, p = .391). In Study 2, however, where participants encountered a mix of different mediation types, they were significantly less confident in their judgments when evaluating videos with retouches (M = 0.556, SD = 0.249) and avatar filters (M = 0.529, SD = 0.258). Linear mixed-effects models confirm a significant negative effect compared to the control condition for retouch (M = 0.556, β = -0.02, 95% CI [-0.03, -0.006], t(6235) = -2.94, p = .003) and avatar filters (M = 0.529, β = -0.05, 95% CI [-0.06, -0.03], t(6235) = -7.10, p < .001). Details on the models are provided in Table 1 and Table 2 in the Appendix. To test whether the effect of AI mediation on confidence differs between homogeneous and mixed environments (H5), we fitted a linear mixed-effects model with an interaction between level of AI mediation and environment, including a random intercept for participants (see Table 3). While the interaction for retouches was not statistically significant (β = -0.03, 95% CI [-0.05, 0.004], t(12034) = -1.67, p = .095), the model shows a statistically significant interaction in the avatar condition (β = -0.03, 95% CI [-0.06, -0.004], t(12034) = -2.23, p = .026), suggesting that confidence drops more sharply for avatar-mediated videos in mixed environments than in homogeneous ones, aligning with prior HCI work on avatar-mediated communication (Panda et al., 2022). Judgment confidence decreased for AI-mediated video subjects, particularly for avatars and in mixed environments, consistent with H4 and H5.
While confidence remained stable in homogeneous settings (Study 1), it declined in mixed environments in which participants had to compare differently mediated videos side by side, aligning with prior work on avatar mediation (Panda et al., 2022) and Uncertainty Reduction Theory.

Figure 6. Participants relied more on content-based cues and less on expressions and body language when evaluating AI-mediated content. Percentage of participants indicating reliance on different cues based on mediation type, with 95% Wilson confidence intervals; N = 2,000. Participants relied substantially on gaze, expressions and body language (top three rows) when evaluating the original and retouched videos, cues that were lost under strong AI mediation. Instead, participants relied more on voice, fluency, and content consistency (bottom four rows) when evaluating avatar videos.

Figure 6 summarizes the answers participants gave when asked which cues or elements most influenced their judgments in the last video. We coded each cue option as a binary indicator (selected = 1, not selected = 0) and report the percentage of participants selecting each cue along with Wilson 95% confidence intervals. In a multiple-choice question, participants could select up to three cues, such as gaze and eye contact, body language, or speech fluency, shown on the y-axis on the left of Figure 6.
The x-axis shows how often participants selected a cue, depending on the AI mediation type of the relevant video. Control and retouch videos are shown in dark and light grey, respectively, and avatar videos in red. We drew the set of available cues from four major categories of deception research to cover the most frequently reported cues in prior work: visual nonverbal cues, vocal or paraverbal cues, content-based cues and global demeanor (Levine; DePaulo et al., 2003). In the control condition, participants relied substantially on gaze and eye contact (42.1%), facial expressions (30%), and body language (29.5%) to make their judgments. The retouch condition (weak AI mediation) closely resembles the control condition in the use of nonverbal cues such as gaze and eye contact (44.7%), facial expressions (26.5%), and body language (29.5%). By contrast, when the speaker used an avatar, participants reported relying less on these nonverbal cues (17.2%, 13.9%, and 11.2%, respectively). Instead, participants shifted their attention from visual cues to audio and content-based cues, relying on voice tone and pitch (41.6%), speech fluency (51.6%), story specificity (37.3%), and consistency (29.1%) more than participants in the control group (20.1%, 33.3%, 29.8%, and 15.9%, respectively). Overall, these results show that while AI mediation did not affect participants’ judgment accuracy, it did change how they arrived at their judgments and how confident they felt about them, providing mechanisms that help explain why participants felt less confident in their judgments and trusted the speaker less in strongly AI-mediated videos.

4.1. Manipulation Checks

As a manipulation check, after the main task we asked participants to indicate how many of the six videos they watched featured a fully animated avatar. In Study 2 (mixed environment), participants rated exactly two avatar videos and, on average, reported a similar number (M = 2.22, SD = 0.58).
In Study 1 (homogeneous environment), participants in the avatar condition (strong AI mediation) correctly recognized that all or most of the videos featured avatars (M = 5.78, SD = 0.86). In the open-ended question (“Did you notice anything unusual or artificial about the videos?”), participants also frequently commented on the avatar videos in the mixed environment (Study 2), describing them as “not a real person,” “weird,” “off,” “unnatural,” or “hard to read.” Several participants noted that the avatar “looked artificial” or that the speaker seemed to hide “behind an avatar”: “I immediately don’t trust people who use avatars. They are generally ‘social snipers’ who hide behind anonymity so they don’t have to be responsible for their actions” (Participant N474). Comments on the virtual background or visual filters from the retouching condition were less frequent. They typically referred to mild visual artifacts, such as “blurred background,” “border around the person,” or “the lighting seemed edited”. Overall, our two manipulation checks suggest that strong AI mediation was salient to participants, particularly in the mixed environment of Study 2. The qualitative responses indicate that participants noticed strong AI-mediated videos and occasionally noted minor artifacts in weak mediation conditions.

4.2. Robustness Checks

To increase the robustness of our results, we fitted an extended linear mixed-effects model across both studies with covariates including age, gender, race, education, English proficiency, subjective trust in AI systems, and prior experience with online video communication platforms, video filters and AI mediation, in addition to the independent variables of mediation type and study environment (see Appendix Table 4).
All significant effects reported in the main analyses remain significant in the extended model after including the covariates: the negative effect of avatars on trust remains significant (β = -0.027, p < .05), particularly in the mixed environment (β = -0.080, p < .001). A somewhat smaller but statistically significant trust reduction was also estimated for retouched videos in mixed environments (β = -0.021, p < .01). Similarly, a significant decrease in confidence was estimated for retouched videos in mixed environments (β = -0.019, p < .01), with an even larger reduction for avatars in mixed environments (β = -0.046, p < .001). As in the main analysis, truth judgment rates and accuracy remained unaffected by mediation type.

5. Discussion

In our experiments, AI-mediated video communication substantially affected how people evaluated each other. Particularly in the strong AI mediation treatment, where video subjects used synthetic avatars, participants’ trust in the speaker and their confidence in their truth-lie judgments were reduced. While AI mediation changed the cues on which participants relied for their judgments, it did not affect how often participants suspected others of lying, nor did it improve or impair judgment accuracy. The observed decreases in trust and confidence were moderated by the type of environment, with larger decreases in mixed environments, where participants encountered a mix of original and AI-mediated videos. In the following, we discuss three possible interpretations: why AI-mediated video might undermine trust without triggering suspicion; how accuracy remains stable while cue reliance changes; and how confidence in judgment drops even though judgment accuracy does not change. We finish by outlining implications for design and policy.

5.1. AI-Mediated Video Reduces Interpersonal Trust Without Raising Suspicion

AI-mediated video consistently reduced interpersonal trust (H1), particularly when video subjects were replaced by avatars in a mixed environment alongside more natural, unaltered video subjects (H5). The findings align with prior work on avatar-mediated communication in which low-fidelity representations are trusted less than natural faces (Panda et al., 2022; Ma et al., 2025). The observed effect of reduced trust under strong AI mediation was amplified in mixed environments (Study 2), aligning with Expectancy Violations Theory (EVT), which posits that deviations from internalized social expectations decrease trust in the speaker (Burgoon, 2015). The mixed environment may have made AI mediation more salient and highlighted participants’ expectations regarding how a speaker should present themselves on video, leading to more negative trust evaluations. Here, our findings align with and extend the Replicant Effect (Jakesch et al., 2019), in which, in a mix of human-generated and AI-generated content, the trustworthiness of subjects suspected of using generated content decreases as people begin to question each other’s humanity. Our findings show that the Replicant Effect holds in the more dynamic medium of video and that, even after people have become more accustomed to various forms of AI systems in recent years, AI-mediated communication still decreases interpersonal trust. Our work also highlights that trust is reduced even under weaker forms of AI mediation, such as retouching and virtual backgrounds. Surprisingly, although participants found AI-mediated video subjects less trustworthy, they believed them just as often as they believed those in the original videos. The stability in truth and lie judgment rates is consistent with Levine’s truth-default theory (Levine, 2014), which posits that, by default, people believe others are telling the truth unless something triggers suspicion.
While the visual unfamiliarity and synthetic nature of the avatars may have disrupted trust, they were not enough to trigger suspicion and override the default of believing others. The apparent contradiction of reduced trust in avatars yet continued belief in them is best understood by treating trust and belief as psychologically distinct processes with different implications (Holton, 1994). While trust reflects an affective and relational judgment about another person's characteristics, belief is a cognitive judgment about whether a statement is factually true (Holton, 1994). Our findings extend prior work on HCI and avatar-mediated communication (Panda et al., 2022; Ma et al., 2025) by showing that AI-mediated speakers can be believed without being trusted. The distinction matters because AI mediation can erode the interpersonal foundations of communication, such as trust, social connection and willingness to cooperate (Pan and Steed, 2017; Schilke and Reimann, 2025), without undermining social epistemology and without creating a mediation environment in which people begin to question mediated statements.

5.2. AI-Mediated Video Affects Cue Reliance But Not Deception Detection Accuracy

The present work extends prior research on the effects of avatar-mediated communication (Panda et al., 2022) by examining its effect on lie detection. Here, our results largely align with well-documented findings from deception research (DePaulo et al., 2003; Bond and DePaulo, 2006). In both studies, participants identified truths and lies with roughly 52–54% accuracy, replicating the average truth-lie accuracy of 54% reported by Bond and DePaulo (2006). Participants were also more accurate at identifying truths than lies, an asymmetry known in deception research as the veracity effect (Levine et al., 1999).
However, accuracy rates did not differ meaningfully between the control condition, in which video subjects were unaltered, and the retouch or avatar conditions, in which facial expressions, eye contact and other visual behaviors were altered or removed. The stable accuracy across conditions, even under strong AI mediation, contradicts the predictions of cue-based deception detection theories (Ekman, 1997; Ekman and Friesen, 1969). These theories posit that deception is detected through leakage, that is, by observing nonverbal cues involuntarily produced by the cognitive effort of lying. As avatar-mediated communication substantially reduces the nonverbal cues that might give away a liar, these theories would predict lower accuracy in environments with reduced leakage. Our studies, however, show that accuracy neither decreased nor improved in the avatar condition. While we do not observe an effect of avatars on deception detection, the stable accuracy across conditions aligns with a common finding in lie detection research: people are not good at detecting lies (DePaulo et al., 2003), regardless of the type of mediation. Instead, the stable accuracy supports a heuristic view of lie detection (Bond and DePaulo, 2006; Levine, ), which holds that judgments are driven by content-based cues like plausibility, coherence and fact-checking rather than by leakage through involuntary nonverbal cues. Such content-based cues are arguably less affected by the use of retouching, virtual backgrounds and video avatars. While accuracy was not affected by AI mediation, participants shifted to relying on speech fluency, story consistency and specificity rather than on facial expressions or body language. This finding again parallels the central tenet of Levine's truth-default theory (Levine, 2014), which holds that content rather than demeanor is the basis for detecting deception.
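The veracity effect described above is simply higher accuracy on true statements than on lies, driven by the bias toward judging statements as true. As a toy illustration (with made-up judgment records, not our data), it can be computed like this:

```python
import pandas as pd

# Hypothetical judgment records: the actual veracity of each statement
# and the participant's truth/lie call (illustrative values only).
judgments = pd.DataFrame({
    "veracity": ["truth", "truth", "truth", "lie", "lie", "lie"],
    "judgment": ["truth", "truth", "lie", "truth", "lie", "truth"],
})
judgments["correct"] = judgments["veracity"] == judgments["judgment"]

overall = judgments["correct"].mean()
by_veracity = judgments.groupby("veracity")["correct"].mean()

print(f"overall accuracy: {overall:.2f}")   # near-chance overall
print(by_veracity)                          # truths judged more accurately than lies
```

Because most calls in this toy sample are "truth", accuracy on truths exceeds accuracy on lies even though overall accuracy stays near chance, mirroring the pattern in our data.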
Based on Levine's truth-default theory, one might have expected the shift to content-based cues to increase accuracy, as people who pay more attention to content are more likely to notice inconsistencies. However, accuracy remained stable across conditions, which may be due to the nature of the interactions we studied: participants judged statements about people unknown to them, with no means to verify the claims or compare them with existing knowledge, limiting their ability to challenge the default assumption of truth. Although participants shifted to content-based and verbal cues, without information to contextualize or fact-check the statements, they may have rarely had the chance to notice inconsistencies that could have broken their truth-default bias.

5.3. AI-Mediated Video Complicates Judgments and Reduces Judgment Confidence

Participants' subjective confidence in their own judgments decreased in the mixed environment (Study 2), particularly in the avatar condition, again despite their truth-lie rates and accuracy staying the same. The decrease in judgment confidence for avatars aligns with prior findings on avatar-mediated communication (Panda et al., 2022) and is supported by both cue-based theories and Uncertainty Reduction Theory. While decades of deception research have shown that nonverbal cues like gaze or facial expressions are poor indicators of lying (DePaulo et al., 2003), people continue to rely on them, or at least believe they do. When these familiar cues are removed or disrupted, people feel less confident about what to rely on instead. In this way, these cues serve a social-psychological function: they reduce uncertainty and help individuals feel at ease in social interactions. Uncertainty Reduction Theory (Berger and Calabrese, 1975) posits that people are motivated to seek information that reduces ambiguity in social communication.
AI-mediated communication interrupts that process by stripping away or synthetically simulating these social cues. This disruption holds even after people have grown more accustomed to AI tools, and even for commonly used transformations such as lighting corrections and virtual backgrounds. Even if the removal of familiar cues does not impair accuracy, it leaves participants feeling less sure of themselves because they can no longer draw on the signals they would usually use to ease interpretation. The most pronounced decline in judgment confidence occurred in Study 2, in which participants saw all three types of mediation within the same environment. The mix of mediation types likely heightened the salience of visual disruption and made avatars feel more unpredictable or "out of place". Here, Expectancy Violations Theory (Burgoon, 2015) posits that people have internalized expectations for how social interactions should look and feel; when those expectations are violated, such as by encountering a synthetic face after two natural ones, friction arises in interpreting the communication. While such friction does not make the person seem more deceptive, it may make the interaction harder to process. Previous research suggests that a disruption in processing fluency leads to lower confidence in judgments (Burgoon, 2015; Zhou and Jia, 2023). This interpretation aligns with participants' open-ended responses in our study, which describe avatars as "hard to read" or "off-putting", suggesting that a breakdown in interpretive fluency makes people less confident about what to make of the communication.

5.4. Implications

Our findings show that AI-mediated video did not affect truth judgment rates or detection accuracy. As such, they challenge concerns that "with their ability to alter users' appearances dramatically, beauty filters can facilitate deception" (Marr, ).
Similarly, AI-mediated video has been discussed as a potential threat to people's ability to judge honesty accurately (Park et al., 2024; Kanji, ). Here, although we find that AI-mediated communication can affect trust and confidence, it does not substantially affect people's ability to detect lies in video communication. We note, however, that even under AI mediation, people's ability to detect deception remains close to chance. Consistent with prior work on trust in avatars (Panda et al., 2022), we show that speakers using AI-mediated video were trusted less and that participants felt less confident in their judgments (DePaulo et al., 1997b; Harvey, 1997; O'Connor, 1989). The mismatch between judgment performance on the one hand and relational trust and confidence on the other is an important implication (Bellemare and Sebald, 2019; Dautriche et al., 2021) for contexts in which the feeling of certainty can be as important as the decision itself. In high-stakes settings such as remote hiring, clinical evaluations or legal proceedings, lower confidence may lead to hesitation, increased caution or reduced assertiveness (Double and Birney, 2024; Liu, 2024; Patalano and LeClair, 2011; Schooler et al., 2024). Our results also broaden the debate about the risks of AI video beyond deepfakes (Hancock and Bailenson, 2021; Popa et al., 2025): even widely used filters such as retouching and virtual backgrounds affect evaluations of trust and credibility in ways that matter (Afroogh et al., 2024; Hancock et al., 2020). The central risk of AI-mediated video may thus lie in the erosion of trust and of confidence in judgments, particularly when mixed environments make mediation salient. When designing video communication platforms, the question is how new AI-based features that improve aesthetics and convenience might also undermine trust and confidence.
Here, further research is needed to understand what elements are required for representational consistency within calls, what forms of disclosure might mitigate reductions in trust, and whether higher-fidelity or more expressive avatars can preserve the aspects of communication that people need to feel confident in their judgments (Panda et al., 2022; Ma et al., 2025). Our findings also highlight the need for more context-sensitive forms of AI mediation: communication tools could offer users different AI mediation levels depending on the situation, for example realistic appearances in professional calls and more stylized options in informal chats, to match the communication to the expectations of the context and to calibrate its affordances to context-specific needs.

5.5. Limitations and Future Directions

Participants evaluated short, prerecorded videos from (Lloyd et al., 2019) in which video subjects made simple personal statements about people unknown to the participants. Despite being incentivized with a bonus payment, the simulated scenario did not constitute a high-stakes situation for participants, and speakers in the videos likely felt minimal pressure when lying. A low-stakes context may have muted the behavioral leakage or suspicion triggers on which cue-based theories depend. Future work should examine whether AI-mediated video has a different impact on high-stakes lies, where participants are more motivated to detect deception and speakers are under pressure. Such settings could include hiring, legal evaluations or sensitive interpersonal disclosures, where stakes and incentives are real and carry consequences. Furthermore, AI-mediated communication may be perceived differently in ongoing teams, family calls or workplace meetings.
As the video subjects were strangers to the participants, the findings may not generalize to communication among more familiar groups, where prior work shows that familiarity can attenuate negative impressions of mediated cues (Mieczkowski et al., 2021a) and that contextual knowledge can increase lie-detection accuracy. Future work could examine how AI mediation affects trust and confidence in contexts where familiarity and existing relationships shape impressions and social expectations. In addition, the experimental setup lacked the fluid, interactive nature of real-time video communication. In live conversations, participants can ask follow-up questions, interpret timing and adapt to social feedback. More research is needed to investigate how AI-mediated appearances affect interpersonal dynamics in live or semi-structured conversations, particularly in collaborative or conflict-prone contexts such as negotiations or interviews. Finally, although participants at the time of our study (August 2025) had substantial exposure to AI tools and to weaker forms of AI-mediated video, such as retouching or virtual backgrounds, strong AI-mediated video is still unevenly adopted. As AI-mediated video and synthetic appearances become more normalized, user expectations and reactions may shift. Future work should track how perceptions of AI-mediated video evolve, including through longitudinal studies and cross-cultural comparisons, to assess whether increased exposure to AI reinforces or attenuates the observed effects.

Acknowledgements. We thank our research assistant, José Agostinho, for assisting the first author with processing the video stimuli and generating the avatars. We acknowledge the use of ChatGPT for reviewing the authors' original writing and for proposing phrasing improvements to increase clarity. All manuscript text was written and finalized by the authors.

References

S. Afroogh, A. Akbari, E. Malone, M. Kargar, and H.
Alambeigi (2024). Trust in AI: progress, challenges, and future directions. Humanities and Social Sciences Communications 11(1), 1568.
Z. Ba, Q. Liu, Z. Liu, S. Wu, F. Lin, L. Lu, and K. Ren (2024). Exposing the Deception: Uncovering More Forgery Clues for Deepfake Detection. Proceedings of the AAAI Conference on Artificial Intelligence 38(2), 719–728.
L. A. Baxter and D. O. Braithwaite (2008). Engaging Theories in Interpersonal Communication: Multiple Perspectives. SAGE.
C. Bellemare and A. Sebald (2019). Self-Confidence and Reactions to Subjective Performance Evaluations. Technical report, IZA - Institute of Labor Economics.
C. R. Berger and J. J. Bradac (1982). Language and Social Knowledge: Uncertainty in Interpersonal Relations. The Social Psychology of Language, E. Arnold.
C. R. Berger and R. J. Calabrese (1975). Some Explorations in Initial Interaction and Beyond: Toward a Developmental Theory of Interpersonal Communication. Human Communication Research 1(2), 99–112.
Bokolo Anthony Jnr. (2020). Use of Telemedicine and Virtual Care for Remote Treatment in Response to COVID-19 Pandemic. Journal of Medical Systems 44(7), 132.
C. F. Bond and B. M. DePaulo (2006). Accuracy of Deception Judgments. Personality and Social Psychology Review 10(3), 214–234.
N. Bos, J. Olson, D. Gergle, G. Olson, and Z. Wright (2002). Effects of four computer-mediated communications channels on trust development. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '02), 135–140.
M. Boyle, C. Edwards, and S. Greenberg (2000). The effects of filtered video on awareness and privacy. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (CSCW '00), 1–10.
J. K. Burgoon, G. A. Stoner, J. A. Bonito, and N. E. Dunbar (2003). Trust and deception in mediated communication. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences.
J. Burgoon (2015). Expectancy Violations Theory.
B. L. Connelly, S. T. Certo, R. D. Ireland, and C. R. Reutzel (2011). Signaling Theory: A Review and Assessment. Journal of Management 37(1), 39–67.
I. Dautriche, H. Rabagliati, and K. Smith (2021). Subjective confidence influences word learning in a cross-situational statistical learning task. Journal of Memory and Language 121, 104277.
B. M. DePaulo, K. Charlton, H. Cooper, J. J. Lindsay, and L. Muhlenbruck (1997a). The Accuracy-Confidence Correlation in the Detection of Deception. Personality and Social Psychology Review 1(4), 346–357.
B. M. DePaulo, K. Charlton, H. Cooper, J. J. Lindsay, and L. Muhlenbruck (1997b). The Accuracy-Confidence Correlation in the Detection of Deception. Personality and Social Psychology Review 1(4), 346–357.
B. M. DePaulo, J. J. Lindsay, B. E. Malone, L. Muhlenbruck, K. Charlton, and H. Cooper (2003). Cues to deception. Psychological Bulletin 129(1), 74–118.
B. DePaulo (1985). Deceiving and detecting deceit. In The Self and Social Life, 323–370.
N. Döring, K. D. Moor, M. Fiedler, K. Schoenenberg, and A. Raake (2022a). Videoconference fatigue: A conceptual analysis. International Journal of Environmental Research and Public Health 19(4), 2061.
N. Döring, K. D. Moor, M. Fiedler, K. Schoenenberg, and A. Raake (2022b). Videoconference Fatigue: A Conceptual Analysis. International Journal of Environmental Research and Public Health 19(4), 2061.
K. S. Double and D. P. Birney (2024). Confidence judgments interfere with perceptual decision making. Scientific Reports 14(1), 14133.
A. Edmondson (1999). Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly 44(2), 350–383.
P. Ekman and W. V. Friesen (1969). Nonverbal Leakage and Clues to Deception. Psychiatry 32(1), 88–106.
P. Ekman and W. V. Friesen (1974). Detecting deception from the body or face. Journal of Personality and Social Psychology 29(3), 288–298.
P. Ekman (1997). Lying and Deception. In Memory for Everyday and Emotional Events.
N. B. Ellison and J. T. Hancock (2013). Profile as Promise: Honest and Deceptive Signals in Online Dating. IEEE Security & Privacy 11(5), 84–88.
E. Ert, A. Fleischer, and N. Magen (2016). Trust and reputation in the sharing economy: The role of personal photos in Airbnb. Tourism Management 55, 62–73.
M. Eslami, K. Karahalios, C. Sandvig, K. Vaccaro, A. Rickman, K. Hamilton, and A. Kirlik (2016). First I "like" it, then I hide it: Folk Theories of Social Feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16), 2371–2382.
T. Gillespie, P. J. Boczkowski, and K. A. Foot (2014). Media Technologies: Essays on Communication, Materiality, and Society. MIT Press.
E. Glikson and O. Asscher (2023). AI-mediated apology in a multilingual work context: Implications for perceived authenticity and willingness to forgive. Computers in Human Behavior 140, 107592.
G. M. Grimes, R. M. Schuetzler, and J. S. Giboney (2021). Mental models and expectation violations in conversational AI interactions. Decision Support Systems 144, 113515.
X. Guo, N. M. Selvaraj, Z. Yu, A. W. Kong, B. Shen, and A. Kot (2023). Audio-Visual Deception Detection: DOLOS Dataset and Parameter-Efficient Crossmodal Learning. arXiv:2303.12745.
V. Gupta, M. Agarwal, M. Arora, T. Chakraborty, R. Singh, and M. Vatsa (2019). Bag-of-Lies: A Multimodal Dataset for Deception Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.
A. Habbal, M. K. Ali, and M. A. Abuzaraida (2024). Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, applications, challenges and future research directions. Expert Systems with Applications 240, 122442.
J. T. Hancock, M. Naaman, and K. Levy (2020). AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations. Journal of Computer-Mediated Communication 25(1), 89–100.
J. T. Hancock and J. N. Bailenson (2021). The Social Impact of Deepfakes. Cyberpsychology, Behavior, and Social Networking 24(3), 149–152.
J. T. Hancock and J. Guillory (2015). Deception with Technology. In The Handbook of the Psychology of Communication Technology, 270–289.
J. T. Hancock, C. Toma, and N. Ellison (2007). The truth about lying in online dating profiles. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07), 449–452.
J. T. Hancock, M. T. Woodworth, and S. Goorha (2010). See No Evil: The Effect of Communication Medium and Motivation on Deception Detection. Group Decision and Negotiation 19(4), 327–343.
R. Hardin (2002). Trust and Trustworthiness. Russell Sage Foundation.
N. Harvey (1997). Confidence in judgment. Trends in Cognitive Sciences 1(2), 78–82.
A. A. Herman, S. E. Brammer, and N. M. Punyanunt-Carter (2025). Face Off: Exploring College Students' Perceptions Regarding Face Filters on TikTok. Media Watch 16(1), 93–107.
S. C. Herring (2002). Computer-mediated communication on the internet. Annual Review of Information Science and Technology 36(1), 109–168.
N. S. Hill, K. M. Bartol, P. E. Tesluk, and G. A. Langa (2009). Organizational context and face-to-face interaction: Influences on the development of trust and collaborative behaviors in computer-mediated groups. Organizational Behavior and Human Decision Processes 108(2), 187–201.
J. Hohenstein and M. Jung (2018). AI-Supported Messaging: An Investigation of Human-Human Text Conversation with AI Support. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18), 1–6.
J. Hohenstein and M. Jung (2020). AI as a moral crumple zone: The effects of AI-mediated communication on attribution and trust. Computers in Human Behavior 106, 106190.
R. Holton (1994). Deciding to trust, coming to believe. Australasian Journal of Philosophy 72(1), 63–76.
J. W. Hong, Q. Peng, and D. Williams (2021). Are you ready for artificial Mozart and Skrillex? An experiment testing expectancy violation theory and AI music. New Media & Society 23(7), 1920–1935.
Y. Hwang, J. Y. Ryu, and S. Jeong (2021). Effects of Disinformation Using Deepfake: The Protective Effect of Media Literacy Education. Cyberpsychology, Behavior, and Social Networking 24(3), 188–193.
K. M. Inkpen and M. Sedlins (2011). Me and my avatar: exploring users' comfort with avatars for workplace communication. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW '11), 383–386.
T. Jacks (2021). Research on Remote Work in the Era of COVID-19. Journal of Global Information Technology Management 24(2), 93–97.
M. Jakesch, A. Bhat, D. Buschek, L. Zalmanson, and M. Naaman (2023). Co-Writing with Opinionated Language Models Affects Users' Views. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–15.
M. Jakesch, M. French, X. Ma, J. T. Hancock, and M. Naaman (2019). AI-Mediated Communication: How the Perception that Profile Text was Written by AI Affects Trustworthiness. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13.
A. Javornik, B. Marder, J. B. Barhorst, G. McLean, Y. Rogers, P. Marshall, and L. Warlop (2022). 'What lies behind the filter?' Uncovering the motivations for using augmented reality (AR) face filters on social media and their effect on well-being. Computers in Human Behavior 128, 107126.
G. R. Jones and J. M. George (1998). The Experience and Evolution of Trust: Implications for Cooperation and Teamwork. Academy of Management Review 23(3), 531–546.
S. Junuzovic, K. Inkpen, J. Tang, M. Sedlins, and K. Fisher (2012). To see or not to see: a study comparing four-way avatar, video, and audio conferencing for work. In Proceedings of the 2012 ACM International Conference on Supporting Group Work (GROUP '12), 31–34.
N. Kanji. Why AI filters can take a toll on our self-esteem. TELUS. https://www.telus.com/en/wise/resources/content/article/why-ai-filters-can-take-a-toll-on-our-self-esteem
S. Kiffin-Petersen and J. Cordery (2003). Trust, individualism and job characteristics as predictors of employee preference for teamwork. The International Journal of Human Resource Management 14(1), 93–116.
J. Kim, K. Merrill Jr., K. Xu, and S. Kelly (2022). Perceived credibility of an AI instructor in online education: The role of social presence and voice features. Computers in Human Behavior 136, 107383.
M. Leib, N. Köbis, R. M. Rilke, M. Hagens, and B. Irlenbusch (2024). Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty. The Economic Journal 134(658), 766–784.
T. R. Levine, H. S. Park, and S. A. McCornack (1999). Accuracy in detecting truths and lies: Documenting the "veracity effect". Communication Monographs 66(2), 125–144.
T. R. Levine (2014). Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection. Journal of Language and Social Psychology 33(4), 378–392.
T. R. Levine. Scientific Evidence and Cue Theories in Deception Research: Reconciling Findings From Meta-Analyses and Primary Experiments.
Z. Lew and J. B. Walther (2023). Social Scripts and Expectancy Violations: Evaluating Communication with Human or AI Chatbot Interactants. Media Psychology 26(1), 1–16.
Z. Liu (2024). The asymmetric impact of decision-making confidence on regret and relief. Frontiers in Psychology 15.
E. P. Lloyd, J. C. Deska, K. Hugenberg, A. R. McConnell, B. T. Humphrey, and J. W. Kunstman (2019). Miami University deception detection database. Behavior Research Methods 51(1), 429–439.
H. Lukas, C. Xu, Y. Yu, and W. Gao (2020). Emerging Telemedicine Tools for Remote COVID-19 Diagnosis, Monitoring, and Management. ACS Nano 14(12), 16180–16193.
F. Ma, J. Zhang, L. Tankelevitch, P. Panda, T. Asadi, C. Hewitt, L. Petikam, J. Clemoes, M. Gillies, X. Pan, S. Rintel, and M. Wilczkowiak (2025). Nods of Agreement: Webcam-Driven Avatars Improve Meeting Outcomes and Avatar Satisfaction Over Audio-Driven or Static Avatars in All-Avatar Work Videoconferencing. Proceedings of the ACM on Human-Computer Interaction 9(2), CSCW142:1–CSCW142:28.
X. Ma, J. T. Hancock, K. Lim Mingjie, and M. Naaman (2017). Self-Disclosure and Perceived Trustworthiness of Airbnb Host Profiles. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 2397–2409.
D. M. Mann, J. Chen, R. Chunara, P. A. Testa, and O. Nov (2020). COVID-19 transforms health care through telemedicine: evidence from the field. Journal of the American Medical Informatics Association 27(7), 1132–1135.
B. Marr. Picture Perfect: The Hidden Consequences Of AI Beauty Filters. Forbes. https://www.forbes.com/sites/bernardmarr/2023/06/09/picture-perfect-the-hidden-consequences-of-ai-beauty-filters/
R. C. Mayer, J. H. Davis, and F. D. Schoorman (1995). An Integrative Model of Organizational Trust. Academy of Management Review 20(3), 709–734.
J. M. McCarthy, D. M. Truxillo, T. N. Bauer, B. Erdogan, Y. Shao, M. Wang, J. Liff, and C. Gardner (2021). Distressed and distracted by COVID-19 during high-stakes virtual interviews: The role of job interview anxiety on performance and reactions. Journal of Applied Psychology 106(8), 1103–1117.
H. Mieczkowski, J. T. Hancock, M. Naaman, M. Jung, and J. Hohenstein (2021a). AI-Mediated Communication: Language Use and Interpersonal Effects in a Referential Communication Task. Proceedings of the ACM on Human-Computer Interaction 5(CSCW1), 1–14.
H. Mieczkowski, J. T. Hancock, M. Naaman, M. Jung, and J. Hohenstein (2021b). AI-Mediated Communication: Language Use and Interpersonal Effects in a Referential Communication Task. Proceedings of the ACM on Human-Computer Interaction 5(CSCW1), 1–14.
D. F. Mujtaba and N. R. Mahapatra (2025). Fairness in AI-Driven Recruitment: Challenges, Metrics, Methods, and Future Directions. arXiv:2405.19699.
BBC News (2021). Lawyer gets stuck with cat filter during virtual court case. https://www.bbc.com/news/av/world-us-canada-56005428
K. L. Nowak and J. Fox (2018). Avatars and computer-mediated communication: a review of the definitions, uses, and effects of digital representations. Review of Communication Research 6, 30–53.
B. O'Conaill, S. Whittaker, and S. Wilbur (1993). Conversations Over Video Conferences: An Evaluation of the Spoken Aspects of Video-Mediated Communication. Human-Computer Interaction 8(4), 389–428.
M. O'Connor (1989). Models of human behaviour and confidence in judgement: A review. International Journal of Forecasting 5(2), 159–169.
S. Palan and C. Schitter (2018). Prolific.ac: A subject pool for online experiments. Journal of Behavioral and Experimental Finance 17, 22–27.
Y. Pan and A. Steed (2017). The impact of self-avatars on trust and collaboration in shared virtual environments. PLOS ONE 12(12), e0189078.
P. Panda, M. J. Nicholas, M. Gonzalez-Franco, K. Inkpen, E. Ofek, R. Cutler, K. Hinckley, and J. Lanier (2022). AllTogether: Effect of Avatars in Mixed-Modality Conferencing Environments. In Proceedings of the 1st Annual Meeting of the Symposium on Human-Computer Interaction for Work (CHIWORK '22), 1–10.
P. S. Park, S. Goldstein, A. O'Gara, M. Chen, and D. Hendrycks (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns 5(5), 100988.
M. R. Parks and M. B. Adelman (1983). Communication Networks and the Development of Romantic Relationships: An Expansion of Uncertainty Reduction Theory. Human Communication Research 10(1), 55–79.
A. L. Patalano and Z. LeClair (2011). The influence of group decision making on indecisiveness-related decisional confidence. Judgment and Decision Making 6(2), 163–175.
V. Pérez-Rosas, M. Abouelenien, R. Mihalcea, and M. Burzo (2015). Deception Detection using Real-life Trial Data. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, 59–66.
C. Popa, R. Pallath, L. Cunningham, H. Tahiri, A. Kesavarajah, and T. Wu (2025). Deepfake Technology Unveiled: The Commoditization of AI and Its Impact on Digital Trust. arXiv.
Z. A. Purcell, M. Dong, A. Nussberger, N. Köbis, and M.
Jakesch (2024) People have different expectations for their own versus others’ use of AI-mediated communication tools. British Journal of Psychology, p. bjop.12727. External Links: Document, ISSN 0007-1269, 2044-8295 Cited by: §1. D. Remus and F. Levy (2017) Can Robots Be Lawyers: Computers, Lawyers, and the Practice of Law. Georgetown Journal of Legal Ethics 30, p. 501. Cited by: §2.2. M. (. Rheu, Y. (. Dai, J. Meng, and W. Peng (2024) When a Chatbot Disappoints You: Expectancy Violation in Human-Chatbot Interaction in a Social Support Context. Communication Research 51 (7), p. 782–814. External Links: Document, ISSN 0093-6502 Cited by: §2.3. C. M. Ridings, D. Gefen, and B. Arinze (2002) Some antecedents and effects of trust in virtual communities. The Journal of Strategic Information Systems 11 (3), p. 271–295. External Links: Document, ISSN 0963-8687 Cited by: §2.2, §3.1. E. Rocco (1998) Trust breaks down in electronic contexts but can be repaired by some initial face-to-face contact. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’98, Los Angeles, California, United States, p. 496–502. External Links: Document, ISBN 978-0-201-30987-4 Cited by: §2.2, §3.1. O. Schilke and M. Reimann (2025) The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes 188, p. 104405. External Links: Document, ISSN 0749-5978 Cited by: §5.1. L. Schooler, M. Okhan, S. Hollander, M. Gill, Y. Zoh, M.J. Crockett, and H. Yu (2024) Confidence in Moral Decision-Making. Collabra: Psychology 10 (1), p. 121387. External Links: Document, ISSN 2474-7394 Cited by: §5.4. S. Sharma, R. Rawal, and D. Shah (2023) Addressing the challenges of AI-based telemedicine: Best practices and lessons learned. Journal of education and health promotion (1), p. 338. Cited by: §2.2. K. M. Shockley, T. D. Allen, H. Dodd, and A. M. 
Waiwood (2021) Remote worker communication during COVID-19: The role of quantity, quality, and supervisor expectation-setting.. Journal of applied psychology 106 (10), p. 1466. Cited by: §1. T. L. Simons and R. S. Peterson (2000) Task conflict and relationship conflict in top management teams: The pivotal role of intragroup trust. Journal of Applied Psychology 85 (1), p. 102–111. External Links: Document, ISSN 1939-1854 Cited by: §2.2. H. Suen and K. Hung (2024) Revealing the influence of AI and its interfaces on job candidates’ honest and deceptive impression management in asynchronous video interviews. Technological Forecasting and Social Change 198, p. 123011. External Links: Document, ISSN 0040-1625 Cited by: §1. J. Twomey, D. Ching, M. P. Aylett, M. Quayle, C. Linehan, and G. Murphy (2023) Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine. PLOS ONE 18 (10), p. e0291668. External Links: Document, ISSN 1932-6203 Cited by: §2.2, §3.2. D. Vargo, L. Zhu, B. Benwell, and Z. Yan (2021a) Digital technology use during COVID -19 pandemic: A rapid review. Human Behavior and Emerging Technologies 3 (1), p. 13–24. External Links: Document, ISSN 2578-1863, 2578-1863 Cited by: §1. D. Vargo, L. Zhu, B. Benwell, and Z. Yan (2021b) Digital technology use during COVID-19 pandemic: A rapid review. Human Behavior and Emerging Technologies 3 (1), p. 13–24. External Links: Document, ISSN 2578-1863 Cited by: §2.2. A. Vrij (2000) Detecting lies and deceit: the psychology of lying and implications for professional practice. Wiley Series in Psychology of Crime, Policing and Law, Wiley, Chichester. External Links: ISBN 978-0-471-85316-9 Cited by: §1. A. Vrij (2008) Detecting Lies and Deceit: Pitfalls and Opportunities. John Wiley & Sons. External Links: ISBN 978-0-470-51625-6 Cited by: §2.1. C. P. Walker, D. S. Schiff, and K. J. 
Schiff (2024) Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database. Proceedings of the AAAI Conference on Artificial Intelligence 38 (21), p. 23053–23058. External Links: Document, ISSN 2374-3468 Cited by: §3.2. J. B. Walther, T. Loh, and L. Granka (2005) Let Me Count the Ways: The Interchange of Verbal and Nonverbal Cues in Computer-Mediated and Face-to-Face Affinity. Journal of Language and Social Psychology 24 (1), p. 36–65. External Links: Document, ISSN 0261-927X Cited by: §2.2. M. Westerlund (2019) The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review 9 (11), p. 39–52. External Links: Document, ISSN 1927-0321 Cited by: §2.2, §2.2. J. M. Wilson, S. G. Straus, and B. McEvily (2006) All in due time: The development of trust in computer-mediated and face-to-face teams. Organizational Behavior and Human Decision Processes 99 (1), p. 16–33. External Links: Document, ISSN 0749-5978 Cited by: §2.2, §3.1. B. Yi, D. Shi, and G. Li (2026) Real me or digital me? Consumers’ consumption responses to online avatars. International Journal of Hospitality Management 132, p. 104394. External Links: Document, ISSN 0278-4319 Cited by: §2. D. E. Zand (1997) The Leadership Triad: Knowledge, Trust, and Power. Oxford University Press. External Links: ISBN 978-0-19-509240-0 Cited by: §2.2. Y. Zhou and N. Jia (2023) The Impact of Item Difficulty on Judgment of Confidence—A Cross-Level Moderated Mediation Model. Journal of Intelligence 11 (6), p. 113. External Links: Document, ISSN 2079-3200 Cited by: §5.3. Zoom (10/20/2023 11:36:03 AM) Zoom virtual backgrounds, filters, and virtual avatars. Note: https://w.zoom.com/en/products/virtual-meetings/features/backgrounds-filters/ Cited by: §1, §1. Appendix A Appendix A.1. 
A.1. Statistical models

In the following, we provide the complete regression tables referenced in the results section, covering all model specifications and coefficients in our analysis. Our statistical reporting draws on four models. We fitted a linear mixed model (formula: outcome ~ AI mediation, estimated using REML and the nloptwrap optimizer) to predict each outcome (reported trust, truth judgment rate, judgment accuracy and confidence) from the type of AI mediation (original, retouched, or avatar-based), with a per-subject random intercept, in the homogeneous environment of Study 1. The details are reported in Table 1. We fitted an equivalent model on the mixed environment of Study 2 only, with statistics reported in Table 2. We then fitted a linear mixed model across both studies (estimated using REML and the nloptwrap optimizer) to predict each outcome from the interaction of AI mediation and environment type (formula: outcome ~ environment + environment:AI mediation), again with a per-subject random intercept. The details are reported in Table 3. Finally, we fitted an extended version of the previous model that also includes covariates for participant age, gender, education, race, and experience with AI. The model details are reported in Table 4.

Table 1. Study 1: Linear mixed model predicting each outcome from mediation type, with a per-participant random intercept. Standard errors in parentheses.

|                        | Trust            | Truth            | Accuracy         | Confidence       |
|------------------------|------------------|------------------|------------------|------------------|
| (Intercept)            | 0.510*** (0.008) | 0.599*** (0.011) | 0.523*** (0.011) | 0.564*** (0.010) |
| MediationRetouch       | −0.006 (0.012)   | −0.017 (0.016)   | −0.013 (0.016)   | 0.006 (0.014)    |
| MediationAvatar        | −0.025* (0.012)  | 0.017 (0.016)    | 0.017 (0.016)    | −0.012 (0.014)   |
| SD (Intercept Subject) | 0.116            | 0.000            | 0.000            | 0.160            |
| SD (Observations)      | 0.235            | 0.490            | 0.499            | 0.197            |
| Num. Obs.              | 5802             | 5802             | 5802             | 5802             |
| R² Marg.               | 0.002            | 0.001            | 0.001            | 0.001            |
| R² Cond.               | 0.197            |                  |                  | 0.398            |
| AIC                    | 571.3            | 8219.8           | 8435.6           | −808.1           |
| BIC                    | 604.6            | 8253.1           | 8469.0           | −774.8           |
| ICC                    | 0.2              |                  |                  | 0.4              |
| RMSE                   | 0.22             | 0.49             | 0.50             | 0.18             |

Note. * p < 0.05, ** p < 0.01, *** p < 0.001.

Table 2. Study 2: Linear mixed model predicting each outcome from mediation type, with a per-participant random intercept. Standard errors in parentheses.

|                        | Trust            | Truth            | Accuracy         | Confidence       |
|------------------------|------------------|------------------|------------------|------------------|
| (Intercept)            | 0.499*** (0.006) | 0.597*** (0.011) | 0.541*** (0.011) | 0.575*** (0.006) |
| VideoTypeRetouch       | −0.021** (0.008) | −0.021 (0.015)   | −0.019 (0.015)   | −0.019** (0.006) |
| VideoTypeAvatar        | −0.080*** (0.008)| −0.020 (0.015)   | −0.007 (0.015)   | −0.046*** (0.006)|
| SD (Intercept Subject) | 0.092            | 0.000            | 0.045            | 0.146            |
| SD (Observations)      | 0.246            | 0.493            | 0.497            | 0.207            |
| Num. Obs.              | 6240             | 6240             | 6240             | 6240             |
| R² Marg.               | 0.016            | 0.000            | 0.000            | 0.005            |
| R² Cond.               | 0.137            |                  | 0.008            | 0.335            |
| AIC                    | 851.4            | 8912.1           | 9061.0           | −482.1           |
| BIC                    | 885.1            | 8945.8           | 9094.7           | −448.4           |
| ICC                    | 0.1              |                  | 0.0              | 0.3              |
| RMSE                   | 0.24             | 0.49             | 0.50             | 0.19             |

Note. * p < 0.05, ** p < 0.01, *** p < 0.001.

Table 3. Studies 1 and 2: Linear mixed model predicting each outcome from mediation and environment type, with a per-participant random intercept. Standard errors in parentheses.

|                                   | Trust            | Truth            | Accuracy         | Confidence       |
|-----------------------------------|------------------|------------------|------------------|------------------|
| (Intercept)                       | 0.510*** (0.008) | 0.599*** (0.011) | 0.523*** (0.011) | 0.564*** (0.009) |
| ConditionRetouch                  | −0.006 (0.011)   | −0.017 (0.016)   | −0.013 (0.016)   | 0.006 (0.014)    |
| ConditionAvatar                   | −0.025* (0.011)  | 0.017 (0.016)    | 0.017 (0.016)    | −0.012 (0.014)   |
| ConditionMixed                    | −0.011 (0.010)   | −0.002 (0.015)   | 0.017 (0.016)    | 0.011 (0.011)    |
| ConditionMixed × VideoTypeRetouch | −0.021** (0.007) | −0.021 (0.015)   | −0.019 (0.015)   | −0.019** (0.006) |
| ConditionMixed × VideoTypeAvatar  | −0.080*** (0.007)| −0.020 (0.015)   | −0.007 (0.015)   | −0.046*** (0.006)|
| SD (Intercept Subject)            | 0.104            | 0.000            | 0.000            | 0.153            |
| SD (Observations)                 | 0.241            | 0.492            | 0.499            | 0.202            |
| Num. Obs.                         | 12,042           | 12,042           | 12,042           | 12,042           |
| R² Marg.                          | 0.014            | 0.001            | 0.000            | 0.004            |
| R² Cond.                          | 0.169            |                  |                  | 0.366            |
| AIC                               | 1438.6           | 17,128.1         | 17,493.6         | −1278.2          |
| BIC                               | 1497.8           | 17,187.3         | 17,552.8         | −1219.1          |
| ICC                               | 0.2              |                  |                  | 0.4              |
| RMSE                              | 0.23             | 0.49             | 0.50             | 0.19             |

Note. * p < 0.05, ** p < 0.01, *** p < 0.001.

Table 4. Robustness check, Studies 1 and 2: Linear mixed model predicting each outcome from mediation and environment type, with a per-participant random intercept and controls for demographic and experience covariates. Standard errors in parentheses where available.

|                                     | Trust            | Truth            | Accuracy         | Confidence       |
|-------------------------------------|------------------|------------------|------------------|------------------|
| (Intercept)                         | 0.347***         | 0.546***         | 0.565***         | 0.445***         |
| ConditionRetouch                    | −0.006           | −0.019           | −0.011           | 0.003            |
| ConditionAvatar                     | −0.027*          | 0.014            | 0.018            | −0.020           |
| ConditionMixed                      | −0.012           | −0.002           | 0.019            | 0.009            |
| AgeNum                              | 0.000            | 0.001            | −0.000           | 0.000            |
| GenderMale                          | −0.005           | −0.002           | −0.003           | 0.024**          |
| GenderNon-binary                    | −0.014           | 0.021            | −0.004           | −0.051           |
| EducationNum                        | 0.005*           | 0.008*           | 0.002            | 0.004            |
| RaceBlack or African American       | 0.045** (0.015)  | 0.001 (0.021)    | 0.013 (0.022)    | 0.074*** (0.018) |
| RaceIndigenous or Native            | 0.074* (0.031)   | 0.123** (0.044)  | −0.007 (0.045)   | 0.094** (0.036)  |
| RaceMiddle Eastern or North African | 0.051 (0.063)    | 0.100 (0.092)    | 0.125 (0.093)    | 0.049 (0.075)    |
| RaceMultiracial or Mixed Race       | 0.039* (0.019)   | −0.010 (0.028)   | 0.017 (0.029)    | −0.011 (0.023)   |
| RacePacific Islander                | 0.097 (0.081)    | 0.158 (0.117)    | 0.057 (0.119)    | 0.119 (0.096)    |
| RaceWhite or Caucasian              | 0.058*** (0.013) | 0.015 (0.019)    | 0.025 (0.020)    | 0.044** (0.016)  |
| EnglishLevelNum                     | 0.002 (0.033)    | −0.071 (0.047)   | −0.064 (0.048)   | −0.068 (0.039)   |
| ExperienceVideoNum                  | 0.036** (0.012)  | −0.003 (0.018)   | 0.012 (0.018)    | 0.043** (0.015)  |
| ExperienceAINum                     | 0.014 (0.015)    | 0.014 (0.021)    | −0.000 (0.021)   | 0.075*** (0.017) |
| AIinteractionNum                    | 0.032* (0.014)   | −0.002 (0.020)   | 0.030 (0.020)    | 0.081*** (0.016) |
| AITrustNum                          | 0.090*** (0.014) | 0.104*** (0.021) | −0.039 (0.021)   | 0.058*** (0.017) |
| ConditionMixed × VideoTypeRetouch   | −0.021** (0.007) | −0.021 (0.015)   | −0.019 (0.015)   | −0.019** (0.006) |
| ConditionMixed × VideoTypeAvatar    | −0.080*** (0.007)| −0.020 (0.015)   | −0.007 (0.015)   | −0.046*** (0.006)|
| SD (Intercept Subject)              | 0.098            | 0.000            | 0.000            | 0.142            |
| SD (Observations)                   | 0.241            | 0.491            | 0.499            | 0.202            |
| Num. Obs.                           | 12,042           | 12,042           | 12,042           | 12,042           |
| R² Marg.                            | 0.034            | 0.006            | 0.001            | 0.055            |
| R² Cond.                            | 0.171            |                  |                  | 0.368            |
| AIC                                 | 1445.1           | 17,202.4         | 17,616.2         | −1372.1          |
| BIC                                 | 1630.0           | 17,387.3         | 17,801.1         | −1187.2          |
| ICC                                 | 0.1              |                  |                  | 0.3              |
| RMSE                                | 0.23             | 0.49             | 0.50             | 0.19             |

Note. * p < 0.05, ** p < 0.01, *** p < 0.001.
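For readers who prefer the specification written out: the per-subject random-intercept model behind the tables above corresponds, in Study 1's labels, to the following (a restatement of the formulas the appendix describes, not an additional model):

$$
y_{ij} = \beta_0 + \beta_1\,\mathrm{Retouch}_{ij} + \beta_2\,\mathrm{Avatar}_{ij} + u_j + \varepsilon_{ij},
\qquad u_j \sim \mathcal{N}(0,\sigma_u^2),\quad \varepsilon_{ij} \sim \mathcal{N}(0,\sigma_\varepsilon^2),
$$

where $y_{ij}$ is the outcome for subject $j$ on trial $i$, $u_j$ is the per-subject random intercept, and the reported "SD (Intercept Subject)" and "SD (Observations)" are the estimates $\hat\sigma_u$ and $\hat\sigma_\varepsilon$.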
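The variance components in the tables determine the reported ICC values directly: for a random-intercept model, ICC = σ_u² / (σ_u² + σ_ε²). A minimal sketch in plain Python (the function name is ours, not from the paper) reproduces the rounded ICCs from Table 1's SD rows:

```python
def icc_from_sds(sd_intercept: float, sd_residual: float) -> float:
    """Intraclass correlation for a random-intercept mixed model:
    the share of total variance due to between-subject differences."""
    var_between = sd_intercept ** 2
    var_within = sd_residual ** 2
    return var_between / (var_between + var_within)

# Table 1 (Study 1): SD (Intercept Subject) and SD (Observations) per outcome
print(round(icc_from_sds(0.116, 0.235), 1))  # Trust → 0.2, as reported
print(round(icc_from_sds(0.160, 0.197), 1))  # Confidence → 0.4, as reported
```

The same arithmetic recovers the other tables' ICCs (e.g. Study 2 Trust: 0.092 and 0.246 give ≈ 0.1), which is a quick consistency check on the reconstructed values.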