
Paper deep dive

"Better Ask for Forgiveness than Permission": Practices and Policies of AI Disclosure in Freelance Work

Angel Hsing-Chi Hwang, Senya Wong, Baixiao Chen, Jessica He, Hyo Jin Do

Year: 2026 · Venue: arXiv preprint · Area: cs.HC · Type: Preprint · Embeddings: 89

Abstract

The growing use of AI applications among freelance workers is reshaping trust and relationships with clients. This paper investigates how both workers and clients perceive AI use and disclosure in the freelance economy through a three-stage study: interviews with workers and two survey studies with workers and clients. Findings first reveal a key expectation gap around disclosure: Workers often adopt passive disclosure practices, revealing AI use only when asked, as they assume clients can already detect it. Clients, however, are far less confident in recognizing AI-assisted work and prefer proactive disclosure. A second finding highlights the role of unclear or absent client AI policies, which leave workers consistently misinterpreting clients' expectations for AI use and disclosure. Together, these gaps point to the need for clearer guidelines and practices for AI disclosure. Insights extend beyond freelancing, offering implications for trust, accountability, and policy design in other AI-mediated work domains.

Tags

ai-safety (imported, 100%) · cshc (suggested, 92%) · preprint (suggested, 88%)

Links

PDF not stored locally. Use the link above to view on the source site.

Intelligence

Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 94%

Last extracted: 3/13/2026, 12:34:07 AM

Summary

This paper investigates the misalignment between freelance workers and clients regarding AI use and disclosure practices. Through a three-stage study involving interviews and surveys, the authors identify that workers often adopt passive disclosure strategies due to unclear client policies and misinterpretations of what constitutes 'major' versus 'minor' AI-assisted tasks. The findings highlight a critical need for clearer AI governance and communication to maintain trust in freelance relationships.

Entities (5)

Clients · stakeholder · 100%
Freelance Workers · stakeholder · 100%
Passive Disclosure · disclosure-strategy · 98%
AI Disclosure · practice · 95%
AI Policy · governance · 95%

Relation Signals (3)

Freelance Workers adopt Passive Disclosure

confidence 95% · Workers often adopt passive disclosure practices, revealing AI use only when asked.

Clients prefer Proactive Disclosure

confidence 90% · Clients, however, are far less confident in recognizing AI-assisted work and prefer proactive disclosure.

AI Policy shapes Freelance Workers

confidence 90% · A second finding highlights the role of unclear or absent client AI policies, which leave workers consistently misinterpreting clients' expectations.

Cypher Suggestions (2)

Find the relationship between stakeholders and their preferred disclosure practices. · confidence 90% · unvalidated

MATCH (s:Stakeholder)-[:PREFERS|ADOPTS]->(d:DisclosureStrategy) RETURN s.name, d.name

Identify the impact of AI policies on worker behavior. · confidence 85% · unvalidated

MATCH (p:Policy)-[:SHAPES]->(w:Worker) RETURN p.name, w.name
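Both suggestions use standard Cypher pattern matching; the `PREFERS|ADOPTS` alternation in the first query matches an edge of either relationship type. As an illustrative sketch only (not part of the extraction pipeline), the same selection semantics can be mimicked over a hypothetical in-memory edge list built from the Relation Signals above:

```python
# Hypothetical in-memory edge list mirroring the Relation Signals above.
edges = [
    ("Freelance Workers", "ADOPTS", "Passive Disclosure"),
    ("Clients", "PREFERS", "Proactive Disclosure"),
    ("AI Policy", "SHAPES", "Freelance Workers"),
]

def match_rel(edges, rel_types):
    """Mimic MATCH (s)-[:A|B]->(d): keep edges whose type is in rel_types."""
    return [(s, t) for s, rel, t in edges if rel in rel_types]

# Analogue of MATCH (s:Stakeholder)-[:PREFERS|ADOPTS]->(d:DisclosureStrategy)
print(match_rel(edges, {"PREFERS", "ADOPTS"}))
# prints [('Freelance Workers', 'Passive Disclosure'), ('Clients', 'Proactive Disclosure')]
```

Against a real Neo4j instance the Cypher queries themselves would be executed through a driver session; this sketch only illustrates which pairs the pattern selects.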

Full Text

88,530 characters extracted from source content.


“Better Ask for Forgiveness than Permission”: Practices and Policies of AI Disclosure in Freelance Work

Angel Hsing-Chi Hwang, University of Southern California, Los Angeles, United States, angel.hwang@usc.edu · Senya Wong, University of Southern California, Los Angeles, United States, senyawon@usc.edu · Baixiao Chen, Emory University, DeKalb County, United States, shawn030131@gmail.com · Jessica He, IBM Research, Seattle, United States, jessicahe@ibm.com · Hyo Jin Do, IBM Research, Cambridge, United States, dohyojin90@gmail.com

Abstract: The growing use of AI applications among freelance workers is reshaping trust and relationships with clients. This paper investigates how both workers and clients perceive AI use and disclosure in the freelance economy through a three-stage study: interviews with workers and two survey studies with workers and clients. Findings first reveal a key expectation gap around disclosure: Workers often adopt passive disclosure practices, revealing AI use only when asked, as they assume clients can already detect it. Clients, however, are far less confident in recognizing AI-assisted work and prefer proactive disclosure. A second finding highlights the role of unclear or absent client AI policies, which leave workers consistently misinterpreting clients’ expectations for AI use and disclosure. Together, these gaps point to the need for clearer guidelines and practices for AI disclosure. Insights extend beyond freelancing, offering implications for trust, accountability, and policy design in other AI-mediated work domains.

CCS Concepts: • Human-centered computing → Collaborative and social computing.

Keywords: freelance platform, freelance worker, AI policy, AI disclosure

ACM Reference Format: Angel Hsing-Chi Hwang, Senya Wong, Baixiao Chen, Jessica He, and Hyo Jin Do. 2026. “Better Ask for Forgiveness than Permission”: Practices and Policies of AI Disclosure in Freelance Work.
In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26), April 13–17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3772318.3791920

This work is licensed under a Creative Commons Attribution 4.0 International License. CHI ’26, Barcelona, Spain. © 2026 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-2278-3/2026/04. https://doi.org/10.1145/3772318.3791920

1 Introduction

In March 2023, a freelancer on Upwork posted a question on the platform’s community forum, asking about “Upwork’s stance on the use of ChatGPT.” What began as a simple request for clarification escalated into heated discussions that lasted nearly a year before the platform shut down the thread. By the time we wrote the paper in 2025, Upwork had not yet released a formal, platform-wide guideline on AI use in response to the question.

The Upwork case is not a one-of-a-kind incident. Workplaces across industries and domains are facing similar challenges in clarifying what constitutes acceptable and responsible AI use [32]. Even in organizations that have issued guidelines for AI use, workers continue to struggle with questions about the appropriate extent of AI assistance and which use cases are considered permissible [5, 41, 57].

While navigating AI use is complex in itself, the challenge of disclosing AI use can be more substantial [4]. Across domains, researchers and practitioners have identified widespread under-the-hood AI use [55]. Such practices pose serious risks, including potential violations of privacy, client contracts, and professional standards — risks that managers, executives, and clients may not fully recognize. Furthermore, when such secret use of AI is eventually revealed, it can severely damage trust relationships, in some cases causing irreparable harm. These dynamics are even more acute in freelance contexts.
Unlike traditional workplaces, freelancers typically lack long-term contracts, organizational protocols, or clear escalation channels to mediate disputes about AI use [1]. Trust between workers and clients¹ is fragile: each project depends on maintaining a client’s confidence, and once damaged, the relationship rarely recovers. As a result, freelancers face especially high stakes when deciding whether, when, and how to disclose AI use.

At the same time, the freelance space also offers unique opportunities to shape practices and policies around AI disclosure. Without organizational constraints, clients on freelance platforms have greater direct decision-making power and flexibility. They can communicate expectations, structure “AI policies,” and negotiate agreements with workers in ways that are responsive to specific projects, contexts, and, most importantly, the unique work dynamics with individual workers [34].

The present work leverages the unique context of freelance work for two reasons. First, because individual workers and clients have autonomy to define their own AI use and expectations, studying them offers a rare window into how decisions and interpretations around AI policy are formed, in contrast to corporate settings where policy development processes are often opaque. Second, as workers and clients must navigate technological changes and new norms without organizational guidance, studying freelance platforms reveals what kinds of support, infrastructures, or collective strategies might benefit other decentralized platforms, a domain where AI governance becomes increasingly important.

¹ Throughout this paper, we refer to workers as freelancers who seek and perform jobs on freelance platforms, while clients are those who hire and work with freelancers.

arXiv:2603.07459v1 [cs.HC] 8 Mar 2026
We conducted a three-stage study that iteratively gathered and synthesized perspectives from both workers and clients. We began by interviewing workers to profile their disclosure practices (Study 1.1, N = 41), and then surveyed a larger sample of workers to validate these insights (Study 1.2, N = 100). We mirrored the survey content in a subsequent survey with clients (Study 2, N = 145), enabling us to compare perspectives between the two groups and identify points of alignment and divergence. Because AI policies consistently emerged as central in shaping disclosure, we also asked clients to share the policies they had developed. In Study 3 (N = 100), we presented these policies back to workers, asking them to interpret the forms of AI use and disclosure they implied. This design allowed us not only to map current practices but also to examine how policies are understood, misinterpreted, and negotiated in freelance contexts.

Through this multi-stage study, we examined how workers approach AI use and disclosure in their freelance work (RQ1), how these practices align—or misalign—with clients’ expectations (RQ2), and how clients’ AI policies shape workers’ interpretations and behaviors (RQ3). In summary, we highlight the following key findings:

1. Workers consistently misinterpreted the scope of permitted AI use under clients’ policies. Misunderstandings were most pronounced under policies that encouraged partial use of AI (categorized as “Use AI for minor tasks” or “Use AI for major tasks” in our analyses), as clients and workers often disagreed on what constitutes minor versus major AI use in practice.

2. We identify five types of AI disclosure practices among freelancers, with Passive Disclosure (i.e., workers disclose AI use only when clients explicitly ask) emerging as the most common.
3. Clients generally hold higher expectations for workers disclosing AI use proactively; however, once clients implement explicit AI policies, they often view disclosure as less necessary.

4. Building on (3), workers tended to disclose more proactively when encountering policies that encouraged AI use, whereas clients more often demanded disclosure under policies that discouraged or prohibited AI use.

5. Our qualitative analysis of clients’ AI policies revealed several common issues: reliance on vague responsibility clauses, an emphasis on restrictions (particularly around data protection and privacy) rather than clear guidance on appropriate use cases, and directives to avoid AI only at the “final decision stage,” overlooking how earlier steps in the workflow can also critically shape final outputs.

This work contributes to CHI by advancing understanding of how users—specifically freelance workers—adopt, disclose, and negotiate AI use in their tasks. We examine how workers integrate AI into their workflows, how clients respond to and govern these practices, and how disclosure policies are interpreted and enacted in real settings. As formal regulations increasingly mandate AI disclosure, it is essential to understand how users and stakeholders make sense of such guidelines so that policy and regulatory approaches can evolve alongside rapidly advancing AI technologies.

We argue that the freelance space offers a unique window into how individuals form decisions around AI use and AI policies. Although freelancers may differ from workers in corporate environments, the freelance context provides a less opaque setting for observing how AI policies are created, implemented, adhered to, and adapted over time—conditions that are far more difficult to study within organizations. Moreover, because freelance workers and clients operate independently and with limited institutional support, they face greater challenges in navigating AI use on their own.
The misalignments we observe between workers’ and clients’ expectations further inform where future AI literacy initiatives could provide meaningful support.

2 Background and Related Work

2.1 Navigating Freelance Workplace in the Age of Generative AI

As seen in all work settings, the use of generative AI has become increasingly prevalent on freelance platforms [45]. Although workers often grapple with the limitations of these tools, most remain eager to adopt AI to streamline their tasks and improve productivity [6]. Clients’ attitudes toward workers’ AI use, on the other hand, show greater variation and are evolving rapidly. Some clients impose strict prohibitions on AI, while others hold what recent research has described as “inflated expectations” [6]; namely, clients’ highly optimistic views of AI’s capabilities lead them to set unrealistic demands, assuming that workers can meet them by leveraging AI assistance.

Successful freelance work typically depends on strong client–worker relationships, and these dynamics have only grown more important in the age of generative AI [21]. Effective communication, clear goals, and trust are central to high-quality outcomes [1, 20, 22]. Positive client–worker relationships also improve retention, reducing the costs of repeatedly searching for new talent and initiating new contracts [6]. For workers, securing longer-term collaborations—rather than fragmented, one-off tasks—provides greater job stability and supports career development [20, 37]. As workers increasingly adopt AI to facilitate their tasks, it remains largely unknown whether and how such practices might compromise their long-term, mutually beneficial relationships with clients [3, 23, 39]. When compromises arise, how can workers and clients mitigate potential harms of AI adoption, and under what conditions might disclosure foster trust rather than erode it [4, 44, 53]?
The current research addresses this understudied topic by examining the interplay between workers’ AI use, disclosure practices, and clients’ expectations in freelance contexts.

2.2 Perceptual Harms and Secret Use of AI

Perceived AI use, the perception that someone has used AI to produce content or complete work, can make a significant impact in both professional and social contexts [29, 46]. In many cases, perceived use of AI causes negative evaluations of both the work itself and the individual who created it [8, 26]. This phenomenon has long been observed in research on AI-mediated communication, where messages believed to be AI-generated are judged more harshly, and their authors are likewise viewed as less likable [14, 25, 36]. Two mechanisms help explain this negative bias. First, AI use is often perceived as less intentional, suggesting that the worker invested less thought and effort in the output [17, 35, 50]. Second, judgments about content quality can be shaped by “effort heuristics”: people tend to associate greater effort and time with higher quality [19]. Because AI-assisted work is assumed to be produced more quickly and with less effort, it is often evaluated as inferior, regardless of the actual quality of the output [18].

Moreover, recent research has asked whether the perceptual harms of AI use disproportionately affect certain groups, and whether some individuals are more likely than others to be assumed to rely on AI [10, 29, 38]. For instance, studies suggest that novices are penalized more heavily when their work is perceived as AI-generated [24], receiving more negative evaluations than what established professionals encounter [43]. In contrast, experienced workers can sometimes frame AI use as a sign of sophistication, efficiency, or a competitive edge, particularly in domains where technical expertise and productivity are highly valued [23, 24, 48, 52].
These dynamics highlight that the social meaning of AI use is not uniform, but shaped by the worker’s status, experience, and professional context.

Given these varied reactions to AI use, many workers are acutely concerned about how others will judge them [11, 40]. Indeed, the perceptual harms of AI use can be especially salient: as highlighted in recent reviews, simply labeling identical content as AI-assisted is enough to affect viewers’ perceptions of its credibility and likability [2, 51]. In response, many workers choose to conceal their AI use [12, 55, 56]. Recent research indicates that such concealment is more prevalent in tasks that are high-stakes, sensitive, or closely tied to personal assessment (e.g., academic writing or self-presentation), as users exhibit heightened concern about negative external judgment regarding their use of AI in these scenarios [55]. These features closely resemble freelance work, where workers’ reputations hinge on client evaluations. Because public reviews and platform scores strongly influence their ability to secure future opportunities, freelancers may be particularly motivated to conceal their AI use, fearing that disclosure could undermine trust and the perceived quality of their work from clients’ perspectives.

2.3 Transparency, Attribution, and Disclosure of AI Use

Yet in many contexts, disclosure is no longer optional [15, 16]. A growing number of transparency regulations—both at the organizational and governmental levels (e.g., the EU AI Act and the California AI Transparency Act)—require users to disclose their use of AI in specific cases. As Zhang et al. [55] reviewed, numerous policies across different domains require workers to acknowledge AI assistance. An unresolved challenge, however, is that these policies rarely specify how disclosure should occur or what form it should take.
Most adopt a binary framing—either AI was used or not—but this approach fails to capture the varied degrees, contexts, and nuances of AI assistance in practice. Recent research has demonstrated that such binary approaches to attributing AI use are inadequate [2, 15, 16]: they not only obscure the complexity of AI involvement but can also harm workers by inviting negative judgments even when AI was used minimally or responsibly. More granular and context-sensitive approaches to AI attribution are therefore needed to balance transparency with fairness for workers [7, 32, 47].

We anticipate that the practice of AI disclosure will be particularly challenging in the freelance space, where transparency obligations intersect with the dynamics of client–worker relationships. As with many practices in freelancing, decisions about when and how AI use should be disclosed are likely to depend on mutual agreements between individual workers and clients. Because each collaboration follows its own trajectory, one-size-fits-all disclosure policies are unlikely to suffice [22, 54]. Instead, disclosure arrangements may need to be personalized, recognizing that clients’ expectations and workers’ practices vary across projects, tasks, and domains [20, 33].

3 Methods

We conducted a three-stage study to examine how freelance workers and clients navigate AI use and disclosure. The full protocol was approved by the authors’ Institutional Review Board (IRB). Below, we present an overview of our methods; the full methodological details are provided in Section A, where we attach the full questionnaires and interview protocols. Additional information about participant demographics is reported in Section B. Section C includes a breakdown of the freelance service types that workers offered and clients sought.

Study 1.1 (Interviews with Workers).
We interviewed 41 freelancers (46.3% male, 46.3% female; age: 40.35 ± 11.29) across freelance platforms about their AI use, disclosure practices, and concerns. The 45–60-minute interviews were conducted over Zoom, recorded, transcribed, and analyzed using open coding and iterative team discussions to identify themes.

Study 1.2 (Survey with Workers). Drawing from Study 1.1, we developed a Qualtrics survey with both closed- and open-ended questions on AI use, disclosure (no, passive, active), and experiences with client policies. N = 100 freelance workers (53% male, 39% female; age: 36.4 ± 9.8) participated in Study 1.2.

Study 2 (Survey with Clients). To mirror worker perspectives, we surveyed 145 clients (gender: 51.38% male, 33.33% female; age: 42.31 ± 11.20) from freelance platforms about expectations for AI use, preferred disclosure practices, and any “AI policies” they had in place. This enabled direct comparison between workers and clients.

Study 3 (Survey with Workers on Client Policies). Finally, we presented client-defined policies (collected in Study 2) to 100 new workers (gender: 50.00% male, 48.96% female; age: 37.29 ± 9.88). Each worker evaluated one policy from each of five categories (e.g., “no AI use,” “AI for minor tasks”), rating their interpretations, intended practices, and disclosure strategies.

4 Findings

4.1 AI Use in Freelance Work

Most freelancers (78.83%) reported using AI in their work, often describing it as essential for meeting client expectations of speed
Workers emphasized that clients cared more about timely deliverables than the process itself, and many felt the urge to adopt AI as they would otherwise “fall behind” and “lose my jobs as there are people charging half my rate overseas.” Aside from 14.06% of clients who discouraged AI entirely and 4.69% who believed AI use was not relevant in their business func- tions, the majority (81.25%) of clients encouraged AI use in some capacities. Specifically, we synthesized various degrees of AI use first based on interviews and surveys with workers and later con- firmed through surveys with clients; these include: Use AI as much as possible (39.06%), Use AI for minor tasks (23.44%), Use AI for major tasks (14.06%), and Use AI to automate work entirely (4.69%). As we recruited workers and issued job contracts through free- lance platforms – identical to the typical process used by clients – we noted that these platforms do not provide any tools for detect- ing AI use. We confirmed this through interviews with workers, ensuring that their perceptions matched this platform reality. De- spite the absence of platform-supported detection mechanisms, nearly two-thirds of workers (63.5%) believed that clients could identify AI use most of the time (75%) simply by examining the work product. As one participant explained, “if I relate to my day job as an instructor, I can tell when my students are using AI. So I assume they (clients) can as well. If they didn’t say anything [about AI use], I assume that’s a silent ‘yes’.” In contrast, clients expressed far lower confidence in their own ability to detect AI involvement, reporting only slightly above-midpoint confidence on a 5-point scale (푀= 3.11, 푆퐷= 0.97). 4.2 Misalignment of Accepted AI Use Due to lack of explicit guidance, workers often developed their own strategies and inferred job postings, deadlines, and compen- sation rates to decide when AI use was appropriate. 
In particular, high-volume, short-turnaround tasks with low payments were read as tacit permission to use AI. As illustrated in one participant’s example, “if they (clients) want 500 of these Excel tasks done by next Wednesday, I think that’s a clear green light for AI.” Notably, most workers believed clients would likely respond positively or even non-reactively to AI use, analogizing it to standard productivity tools such that “there shouldn’t be a surprise if you use Microsoft Office at work.”

Yet workers’ assumptions about how to use AI for work do not align with clients’ actual expectations. As noted above, clients’ acceptance depends on the capacity of AI, but workers and clients differ in how they classify “major” versus “minor” tasks, according to the open-text examples that participants provided through the surveys. Across responses, some tasks were consistently viewed as minor, others uniformly as major, and some could be interpreted either way. Importantly, the placement of use cases along this minor–major-task spectrum (Figure 1) diverged between workers and clients. Below, we highlight the five most frequently mentioned task categories and summarize where perceptions aligned or diverged.

Figure 1: Workers and clients’ perceptions of common freelance tasks as major or minor tasks for AI use.

• Written Communication. Workers consistently viewed communication tasks, particularly drafting or preparing emails, as minor. Clients, however, believed that the significance of these tasks depended on their recipients, where customer- or employer-facing communication was viewed as major tasks.

• Research. Both groups could view research as either major or minor. AI use for information searches or fact-checking was commonly considered minor.
Clients tended to classify research as major when it contributed to important deliverables (e.g., client reports), whereas workers were more likely to see AI-assisted research as minor when it served primarily to offer feedback on their own work.

• Data analysis. Workers typically viewed AI-assisted data analysis as minor as long as it did not involve sensitive or proprietary data. Clients, by contrast, tied the classification to the worker’s role: for positions where data analysis is central (e.g., data scientists), it was clearly a major task.

• Idea generation. Clients uniformly classified idea generation as a major task. Workers, however, varied: some viewed it as major, while many considered it minor when AI was used only for early-stage brainstorming, inspiration, or final “touch-ups” to make outputs appear more novel.

• Text editing. Tasks such as proofreading, editing or condensing text, summarization, and spelling or grammar checks were universally considered minor. Workers additionally mentioned drafting text as a minor task, but this use case was not observed in clients’ responses.

4.3 Expectation Gaps in AI Disclosure

4.3.1 Workers’ Approaches to AI Disclosure. Based on their own notions of accepted use cases, workers developed their own approaches to disclosing AI use. Synthesizing interview and survey data, we identified five common approaches to AI disclosure, summarized in Table 1 (see Section A for details of the analysis process). The most prevalent was Passive Disclosure, in which workers revealed AI use only when clients explicitly asked. Importantly, most workers did not withhold disclosure out of fear of negative client reactions. Rather, because AI use was so widespread, proactive disclosure often felt unnecessary or even awkward. As one participant explained, “it’s weird if you have to discuss with your boss which software you use to send emails.
So if you do that (AI disclosure), it feels like either your boss is micromanaging or you are being annoying.”

• Non-Disclosure or Avoidance (n = 3, 7%). A minority of workers stated they would not disclose AI use at all. Reasons included believing it was irrelevant (“it’s just a tool”), fear of clients misinterpreting or undervaluing their work, or concern that disclosure might reduce trust or jeopardize future contracts.

• Passive Disclosure (disclose only if asked; n = 16, 39%). The most common stance: workers disclose AI use only when clients explicitly ask about it. Many framed this as a passive agreement—AI use does not need to be mentioned unless disclosure is requested. In these cases, workers typically give broad or high-level explanations (e.g., “I used AI to draft ideas” or “I used it as a helper”) without going into technical detail.

• Situational or Conditional Disclosure (n = 8, 20%). Workers also described disclosure as context-dependent. For sensitive or competitive projects, they felt disclosure might be more important, while for routine or obvious AI-supported tasks (e.g., transcription, AI-generated voiceovers), disclosure seemed unnecessary. Some reported tailoring the level of detail based on the client’s familiarity with AI or the perceived risks.

• Qualified or Framed Disclosure (n = 9, 22%). A sizable group disclosed with disclaimers or framing. They reassured clients that AI was used only as a supportive tool—for brainstorming, proofreading, or saving time—while emphasizing that the final decisions, accuracy checks, and creative control remained with them. This strategy aimed to normalize AI as akin to other everyday tools (e.g., “like a calculator or spell-checker”) while preserving credibility.

• Proactive or Full Transparency (n = 5, 12%). Some workers preferred open, upfront disclosure.
They described mentioning AI use in proposals, contracts, or notes attached to deliverables. For them, honesty and professionalism required clarifying where AI was involved, especially in tasks like writing, editing, or research. A subset even emphasized “disclosing all parts where AI was used,” sometimes because their employers or clients explicitly required it.

Table 1: Five types of workers’ AI disclosure practice

Given this passive approach, workers rarely brought up their AI use practices on their own; disclosure conversations were typically client-initiated. Yet when and why clients initiated these discussions varied widely—from clients proactively stating their expectations of AI use before a contract was signed, to raising concerns only when they suspected AI use, to never addressing the issue at all. Even when AI use was discussed before contracting, clients typically communicated only “a general vibe of whether they encouraged or discouraged AI use” rather than providing structured or written guidance (e.g., explicit AI-use policies or task-specific expectations). As a result, pre-contract expectations were often informal and ambiguous.

4.3.2 Mismatched Expectations for AI Disclosure and their Potential Consequences. Under such ambiguities, workers did envision that there could be mismatched expectations for AI use between workers and clients. This matches the significant difference between clients’ and workers’ expectations around disclosure in survey responses (t = 2.35, p = 0.021, d = 0.41; Figure 2). Over half of workers (53.8%) reported that disclosure was unnecessary, compared to only 40.7% of clients. Moreover, one quarter of clients (25.9%) expected workers to proactively disclose AI use, while fewer than one in ten workers (8.6%) reported that they would disclose in this way.
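The reported shares (53.8% of workers vs. 40.7% of clients finding disclosure unnecessary) can be sanity-checked with a standard two-proportion z-test. This is an illustrative sketch only, not the authors' analysis; the group sizes (100 workers from Study 1.2, 145 clients from Study 2) are assumptions carried over from the Methods overview:

```python
from math import sqrt

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for H0: equal population proportions (pooled standard error)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Workers (53.8% of an assumed N=100) vs. clients (40.7% of an assumed N=145)
# reporting that disclosure of AI use is unnecessary.
z = two_proportion_z(0.538, 100, 0.407, 145)
print(round(z, 2))  # prints 2.02
```

A z of about 2 is consistent with a significant worker-client gap at the 5% level, though the paper's own statistic (t = 2.35) comes from its actual item-level survey data rather than these aggregate shares.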
This misalignment highlights an expectation gap in which workers assume disclosure is optional or reactive, while many clients interpret it as a proactive responsibility.

Notably, workers were also conscious of the possible consequences of such misalignment. As with many aspects of freelance platform labor, workers assumed that clients occupied a structurally powerful position when disagreements on AI use and disclosure arise, leaving workers with limited room to negotiate when disputes arose. Though none of the workers mentioned direct conflicts with clients over AI use, workers believed clients could terminate a contract or request refunds if they believed AI was used inappropriately. If such cases arise, most workers suggested that they would simply accept the outcome or, at most, file a dispute with limited expectation that the platform would resolve the issue in their favor.

4.3.3 Evaluating the Tradeoff of AI Disclosure. Despite these expected consequences, many workers still use AI quietly without proactive disclosure due to their above-mentioned concerns about falling behind and wanting to get more jobs done to remain competitive and earn higher profits on the platforms. As one participant mentioned, “It’s all about getting things done. It sucks to put time into a job without earning anything.² But it doesn’t matter that much once you get your next contract.”

Furthermore, workers acknowledged that while a small minority of clients remained strongly opposed to AI, the likelihood of encountering such clients—and thus facing severe consequences—felt relatively low compared to the substantial benefits of using AI for productivity and competitiveness.
As one participant explicitly stated, “it’s better to ask for forgiveness than permission, because the pros of using it (AI) just outweigh the cons so much.” As such, even participants working in tech-adjacent roles (e.g., software development, data analysis, technical project management) adopted passive disclosure practices, despite experiencing the most encouraging client attitudes toward AI use. Moreover, workers may be underestimating clients’ openness to AI use. While 65.59% of workers believed they were allowed to use AI for their tasks, 78.13% of clients said they would permit AI use (t = 1.74, p = .083, d = 0.28; Figure 3), suggesting that workers could be using AI without disclosure even when clients accept AI use.

² This describes the possible scenario where a client terminates a job contract and requests refunds due to unacceptable AI use after a worker has already completed the job.

CHI ’26, April 13–17, 2026, Barcelona, Spain. Hwang et al.

Figure 2: Clients and workers also differed in their attitudes toward disclosure. Significantly more workers than clients believed that disclosure of AI use is not required, whereas significantly more clients expected workers to actively disclose when they use AI.

Figure 3: Clients’ and workers’ responses to whether workers are allowed to use AI for their freelance work. The majority of both clients and workers believe AI use is allowed. The portion of clients allowing AI use is slightly larger than workers’ expectations.

4.4 (Mis)Interpretations of AI Policies

Across interviews, workers repeatedly emphasized that they had to make their own judgments about AI use and disclosure because “we were never told what to do” and “I would certainly follow the rules if there were any.” Study 3 supports this sentiment: when asked both how they interpreted clients’ AI policies and how they planned to use AI accordingly, workers showed no significant differences between interpretation and planned behavior.
In other words, when explicit policies exist, workers intend to follow them and would act based on their interpretations of those policies. In this regard, one might naturally ask whether having AI policies in place would resolve disagreements around AI use and disclosure. However, our comparison of clients’ stated policies and expectations in Study 2 with workers’ interpretations in Study 3 reveals that substantial gaps remain: because workers’ interpretations of AI policies consistently mismatched clients’ underlying expectations, disagreements over AI use and disclosure would likely persist. Below, we outline these key mismatches.

Figure 4: Permitted AI use according to clients’ AI policies vs. workers’ interpretation of permitted AI use based on each policy.

Policy Type                     β        S.E.    t         p
(Intercept)                     0.31     0.11    2.83      0.005 *
Use AI for minor tasks          0.46     0.12    3.69      < 0.001 *
Use AI for major tasks          −0.45    0.14    −3.15     0.002 *
Use AI as much as possible      −0.84    0.12    −7.20     < 0.001 *
Use AI to automate work         −1.71    0.14    −11.94    < 0.001 *
Table 2: Workers’ interpretation of permitted AI use by policy

Figure 5: Difference between permitted AI use and workers’ interpretations of AI policies. Positive values indicate that workers believed they were allowed to use AI to a greater extent than the policies actually permitted, while negative values indicate that workers interpreted the policies more restrictively than intended.

Policy Type                       t        p           Cohen’s d
No AI use                         3.66     < 0.001 *   0.38
Use AI for minor tasks            10.30    < 0.001 *   0.74
Use AI for major tasks            −1.25    0.214       −0.13
Use AI as much as possible        8.14     < 0.001 *   −0.48
Use AI to fully automate tasks    13.35    < 0.001 *   −1.36
Table 3: Interpretation gap by AI policy type

4.4.1 Permitted AI use. Workers’ interpretations of client AI policies in Study 3 revealed systematic misalignments.
As shown in Figure 4, their judgments about permitted AI use varied widely under nearly all policy types, except when AI was explicitly prohibited. Confusion was particularly pronounced for policies intended to allow AI in major tasks or to support full automation. On average, workers underestimated the extent of permitted AI use compared to what the policies allowed (t = −2.58, p = .010, M = −0.17, SD = 1.27). The direction of misinterpretation depended on policy type (Table 2): when policies discouraged AI (e.g., “No AI” or “AI only for minor tasks”), workers often overestimated what was allowed; when policies encouraged AI (e.g., “major tasks,” “as much as possible,” or “automation”), they consistently underestimated permissions (Figure 5).

4.4.2 Specific AI use cases. A similar pattern emerged when workers judged specific use cases. Their interpretations diverged significantly from clients’ intentions (t = 21.56, p < .001, d = 0.78), with the largest discrepancies under mid-range policies such as “AI for minor tasks” or “AI for major tasks” (Table 4). These were precisely the conditions where partial permission proved most difficult to interpret, leading to frequent misjudgments about whether AI was acceptable for a given task.

Policy Type                                     β        S.E.    t        p
Baseline: Use AI for minor tasks (Intercept)    0.69     0.06    12.03    < 0.001 *
Use AI for major tasks                          0.12     0.09    1.31     0.189
Use AI as much as possible                      −0.21    0.07    −3.08    0.002 *
Use AI to automate work                         −0.40    0.09    −4.32    < 0.001 *
Table 4: Workers’ interpretation of specific AI use case by policy

4.4.3 Requirement for AI disclosure. We also examined how workers understood disclosure requirements under each policy. As policies became more supportive of AI use, workers reported a greater willingness to disclose (Table 5, Figure 6). Yet these disclosure assumptions did not fully align with client expectations (Table 6).
Under permissive policies such as “minor tasks” or “major tasks,” most clients indicated that disclosure was unnecessary, while roughly half of workers believed they would still need to proactively disclose. Conversely, when policies prohibited AI, clients expected some form of disclosure in the event of violation, while workers often interpreted prohibition as a reason to remain silent, reasoning that disclosure would only confirm misconduct (Figure 7).

Finally, we found that workers’ prior experiences shaped their accuracy. General beliefs about whether clients permitted AI or could detect AI use showed no effect. However, prior exposure to explicit AI policies from current or former clients significantly improved workers’ ability to correctly interpret new policies on AI use (β = 0.51, S.E. = 0.19, t = 2.66, p < 0.001) and AI disclosure (β = 0.69, S.E. = 0.22, t = 3.20, p = 0.001).

4.5 A Closer Look at Clients’ AI Policies

To better understand why workers’ interpretations diverged from clients’ policies, we qualitatively analyzed the policies themselves. A first issue was inconsistency in defining “No AI use.” While some employers clearly prohibited all AI involvement, others still allowed limited use for administrative or drafting tasks, making it unclear whether “no AI” meant outright prohibition or partial restriction. Policies also relied heavily on vague or underspecified language; common examples include “using AI with common sense” or ensuring “human judgment.” These ambiguous phrases offered little guidance for practical application. Similarly, many policies prohibited AI use on “sensitive topics” while rarely defining what counts as sensitive, leaving workers to draw their own boundaries.

Responsibility clauses were another common yet imprecise feature.
Nearly all policies referenced worker “responsibilities,” but these statements focused on procedural steps (e.g., reporting use to managers, seeking approval, or verifying outputs) rather than clarifying actual permitted or prohibited AI use cases. Furthermore, policies with nearly identical wording often reflected different underlying priorities in terms of confidentiality, accuracy, or organizational control.

Figure 6: Workers’ interpretations of clients’ disclosure requirements under different AI policies. Workers were more likely to report proactive disclosure of AI use when policies encouraged AI use.

Policy Type                        β       S.E.    t        p
Baseline: No AI use (Intercept)    0.64    0.09    6.93     < 0.001 *
Use AI for minor tasks             1.37    0.11    12.59    < 0.001 *
Use AI for major tasks             1.44    0.13    11.49    < 0.001 *
Use AI as much as possible         1.65    0.10    16.05    < 0.001 *
Use AI to automate work            1.44    0.13    11.49    < 0.001 *
Table 5: Workers’ interpretation of required AI disclosure by policy type

Many clients chose to highlight AI’s limitations as part of their AI policies without offering actionable guidance. For example, several policies highlighted AI errors and therefore instructed workers to refrain from using AI for “making final decisions.” This left unclear what kinds of work decisions counted and ignored the various steps workers perform before reaching their final deliverables. Likewise, restrictions on logging into AI tools with company accounts set procedural boundaries but did not clarify which substantive uses were acceptable.

Figure 7: Comparison of clients’ expectations and workers’ interpretations of AI disclosure under different policies. Workers frequently misjudged situations where clients deemed disclosure unnecessary, and consistently overestimated the extent to which clients expected proactive disclosure of AI use.
Clients’ expectation for AI disclosure    t         p           Cohen’s d
No AI use                                 2.53      0.022 *     0.63
No need to disclose                       15.01     < 0.001 *   0.96
Disclose passively                        1.48      0.140       0.08
Disclose actively                         −12.21    < 0.001 *   −0.87
Table 6: Difference between clients’ expectations vs. workers’ interpretations of AI disclosure

Finally, disclosure requirements revealed tensions. In some policies, disclosure was framed as enabling responsible AI use, yet employers demanded reporting of even routine or minor uses without specifying thresholds. Workers were often told to report AI use to managers but were given little direction on what needed to be disclosed. Notably, many of the employers with such expectations did not have formal policies in place, amplifying the uncertainty. Taken together, these inconsistencies, ambiguities, and overbroad requirements explain why workers systematically misinterpreted client AI policies.

5 Discussion

Our multi-stage study examined how freelance workers use and disclose AI, how these practices align—or more often misalign—with clients’ expectations, and how client policies shape interpretation and behavior. We identified five disclosure strategies among workers, with Passive Disclosure (revealing AI use only when explicitly asked) emerging as the most common. Clients, by contrast, generally expected more active disclosure, though this expectation decreased when formal AI policies were in place. Workers also misinterpreted the scope of permitted AI use, particularly under policies that encouraged some but not all forms of AI use. Similarly, workers often overestimated disclosure obligations under permissive policies while underestimating disclosure under prohibitive ones.
Finally, our analysis of client AI policies revealed recurring issues: vague responsibility clauses, prohibitions without actionable allowances, and oversimplified stage-based distinctions that failed to reflect the realities of freelance workflows.

Together, these findings underscore a persistent pattern of misalignment: workers’ disclosure practices, clients’ expectations, and formal policies rarely align in straightforward ways. This creates confusion for workers, undermines trust, and exposes the limitations of binary disclosure frameworks in capturing the nuanced ways AI is integrated into freelance work.

5.1 Misalignment as a Design Problem

We propose that understanding and shaping norms should be a priority for platforms seeking to resolve misalignments between workers’ and clients’ expectations for AI use and disclosure. Much of the gap we observed stems from the rapid pace of AI adoption, which has led workers and clients to develop different normative assumptions. Workers, for example, increasingly view reporting all AI use as unnecessary and often interpret signals in job descriptions as tacit approval, even when clients did not intend them as such. At the same time, our findings suggest that cultivating certain norms, such as creating an atmosphere that encourages AI use, can make workers more willing to disclose, thereby aligning their practices more closely with what clients hope to see.

Like other forms of collaboration on freelance platforms, AI use and disclosure should be treated as a shared process rather than a one-sided obligation [1, 20, 22]. Workers were most willing to disclose when clients explicitly encouraged AI use, as such encouragement reduced the stigma they associated with AI—namely, fears of being perceived as unskilled or less dedicated to one’s work [17, 35]. To help address this stigma, platforms could help shift these norms by offering clearer guidance and modeling expectations for both parties.
For example, platforms might provide policy templates and disclosure scripts that specify which kinds of AI use are acceptable—removing ambiguity and reducing the stigma workers feel when initiating these conversations. They could also highlight positive examples of transparent AI use to normalize disclosure as a routine part of collaborative work rather than a punitive compliance check. By shaping the relational and normative environment in these ways, platforms can make AI transparency more feasible and less risky for both workers and clients.

5.2 Designing AI Policies in Freelance and Beyond

Our analysis suggests that most policy failures are fundamentally usability failures: workers struggled not because they resisted disclosure or compliance, but because policies were ambiguous, binary, or disconnected from concrete AI use cases. For example, many “No AI use” policies contradicted themselves—sometimes listing acceptable uses despite an outright ban—creating confusion and encouraging secrecy rather than compliance. A more effective approach is to replace blanket prohibitions with stage-aware allowances that specify which practices are acceptable at different points in the workflow (e.g., AI permitted for brainstorming or outlining but restricted for final deliverables).

Policies also need clear examples of what to disclose and how to disclose it, not just restrictions. Research on AI-generated content labeling [9] and platform guidelines (e.g., YouTube’s “Appropriate vs. Inappropriate” lists) show that disclosures are more effective when they provide concrete, operational categories rather than abstract labels. Our findings indicate a similar demand: workers repeatedly asked for examples that clarified which tasks count as “minor” versus “major” AI use, what types of assistance require disclosure, and what a “proper” disclosure statement looks like.
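To illustrate what such concrete, operational guidance could look like in a platform tool, a client policy could be expressed as structured data, with the extent of AI involvement mapped to a graded disclosure requirement. The following is a minimal hypothetical sketch; the field names, categories, and thresholds are illustrative assumptions, not artifacts from the study:

```python
# Hypothetical sketch of a machine-readable client AI policy and a
# proportional disclosure rule. All category names, thresholds, and
# examples are illustrative assumptions, not taken from the paper.

from dataclasses import dataclass, field

@dataclass
class AIPolicy:
    permitted: list = field(default_factory=list)    # explicitly allowed uses
    prohibited: list = field(default_factory=list)   # explicitly banned uses
    disclosure_note: str = ""                        # what a disclosure must state

# Example policy in the spirit of "minor tasks allowed, major tasks restricted".
policy = AIPolicy(
    permitted=["proofreading", "grammar checks", "summarizing client documents"],
    prohibited=["drafting client-facing materials", "original design generation"],
    disclosure_note="State which sections were AI-assisted and describe human review.",
)

def disclosure_tier(ai_share: float) -> str:
    """Map the proportion of a deliverable originating from AI
    to a disclosure requirement (thresholds are illustrative)."""
    if ai_share < 0.05:
        return "none"              # minor use: no disclosure required
    elif ai_share < 0.25:
        return "brief note"        # supportive use: short note on the deliverable
    elif ai_share < 0.75:
        return "process summary"   # substantive use: describe AI's role and review
    else:
        return "pre-approval"      # near-full automation: approve before starting

print(disclosure_tier(0.10))  # brief note
```

A template of this shape would let a platform render permitted and prohibited uses side by side and attach an unambiguous disclosure requirement to each level of AI involvement, rather than leaving workers to guess.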
For instance, a policy might specify the following in parallel:

• Permitted uses: “AI may be used for proofreading, grammar checks, or summarizing client-provided documents.”
• Prohibited uses: “AI may not draft or substantially rewrite client-facing materials, generate original designs, or make decisions on budget or strategy.”
• Disclosure content: “If AI contributed to outlining or drafting, state which sections were AI-assisted and describe the human review process.” Specifically, this should allow users to show the extent to which content is AI-generated.
• Conditions and rationales of AI use: “AI is used for image generation for a pitch deck, because the task is not the major job function of the user.”

We recommend that future work and users refer to existing AI attribution tools (e.g., IBM Attribution toolkit³) to identify relevant items for disclosure. Providing both examples and counterexamples gives workers far clearer guidance than vague phrases such as “avoid sensitive topics” or “use judgment.” Without such specificity, many workers in our study defaulted to borrowing policies from prior employers—policies that often did not match clients’ expectations or freelance workflows, further contributing to mismatch.

³ https://aiattribution.github.io/

Disclosure frameworks should also move beyond binary yes/no categories. AI use exists on a spectrum—from minor proofreading to substantial content generation—and collapsing this spectrum into a single checkbox leads to both over- and under-reporting. A proportional model—what we call a disclosure ladder—could tie reporting requirements to the materiality of AI involvement. Minor uses might require no disclosure; supportive uses could warrant a brief note; substantive contributions might require a process summary; and fully automated work could trigger pre-approval or auditing. Explicit thresholds (e.g., disclosure required if more
than a set proportion of the deliverable originates from AI) would further reduce guesswork and standardize expectations.

Similarly, policies addressing data privacy must provide actionable guidance rather than broad warnings. Workers need to know which tools are allowed, what types of data may be entered, and how to handle client information across common scenarios. Instead of static documents, platforms could offer scenario cards that illustrate how policies apply to typical tasks, e.g., “Drafting an email with sensitive information,” “Uploading client data to AI tools,” or “Fact-checking using public sources.” Platforms could also support negotiation by allowing clients to specify AI allowances directly in job postings and by attaching automatically generated summaries of relevant rules to contracts.

Finally, to encourage honest disclosure, platforms should build safe harbors and appeals processes into their systems. Workers in our study repeatedly feared punitive consequences even for minor or permissible AI use. Treating policies as versioned, iterative documents—with change logs, usability testing, and opportunities for clarification—would make them more transparent and trustworthy. These design choices would help shift AI disclosure from a punitive compliance task toward a shared, collaborative norm.

5.3 Supporting Freelance Workers on Decentralized Platforms

Our work demonstrates the value of studying freelancers; although the freelance workspace differs substantially from conventional corporate settings, studying freelancers enables us to observe the micro-mechanisms through which workers interpret ambiguous AI guidance, how clients translate expectations into policy language, and where interpretive mismatches arise. Many of these underlying rationales, and the ways they succeed or break down when formalized as policies, are difficult to observe in corporate environments, where AI policy development is typically top-down and opaque.
At the same time, freelancing’s structural decentralization makes it more difficult for AI-use norms or disclosure practices to develop collectively. These challenges are reflected in our findings—specifically, the constant mismatches between workers’ and clients’ expectations—and resonate with prior research suggesting that platform workers repeatedly navigate ambiguity around technological change and shifting norms in the absence of organizational guidance [27, 28, 49], while AI use only makes navigating these dynamics more challenging [4, 21, 24, 29].

Our findings highlight why platforms must take a more active role in supporting stakeholders. First, mutual misunderstandings are pervasive: clients struggle to formulate clear policies, and workers struggle to interpret them. Second, when disputes arise, workers consistently report that platforms default to client-favoring resolutions, reinforcing power asymmetries long noted in the platform-labor literature [6, 20, 37]. Given that both groups must navigate generative AI without organizational scaffolding, platforms are uniquely positioned to provide infrastructural support. We propose two possible directions:

Shared literacy and transparent case resources. Platforms and researchers should invest in assembling illustrative cases that are accessible to the public (akin to dark-pattern repositories [13]). This could entail hosting an online library of “bad AI disclosure policies” or collecting examples of AI use cases in freelance work to help stakeholders calibrate what constitutes “minor,” “major,” or prohibited AI use and reduce interpretive ambiguity.

Participatory, multi-stakeholder policy-formation systems. Recent HCI work demonstrates scalable deliberative systems for producing consensus-driven policy guidelines through structured dialogue, ranking, and expert iteration [30, 31].
Adapting these methods to freelance ecosystems would allow workers and clients to collaboratively shape AI policies, enabling platforms to implement clearer, democratically informed norms rather than relying solely on client-initiated rules.

5.4 Implications for AI Regulation Literacy

We propose that raising awareness of the fast-changing norms around AI use and disclosure should be a core component of AI regulation literacy. Importantly, these norms are shifting in divergent ways for workers and clients. Workers, for instance, need to recognize that even though AI use has become widespread, a client’s statement of “No AI use” may still entail an expectation of strict or minimal usage. Conversely, clients should understand that workers’ tendency toward passive disclosure often reflects an effort to maintain positive work dynamics—avoiding unnecessary micromanagement—rather than an attempt to conceal their practices.

Improving regulation literacy has implications beyond freelance marketplaces. As formal regulations such as the EU AI Act and California’s Transparency in AI Act come into effect, workers across industries will face similar interpretive challenges. Freelance contexts show how regulations are understood on the ground in the absence of institutional supports, offering lessons for policymakers and organizations about how to design training, templates, and communication strategies that build confidence and clarity rather than confusion.

5.5 Limitations and Future Work

This study has limitations. Our focus on freelance platforms highlights flexible, project-based relationships that may not generalize to more institutionalized work arrangements. Our sample, though diverse, was shaped by recruitment through specific platforms and may not capture all regional or sectoral practices.
Future research could extend these findings by comparing disclosure dynamics in other precarious labor markets, such as gig work or creator economies, and contrasting them with more stable workplaces. Experimental studies might test proportional versus binary disclosure frameworks to assess which mechanisms best foster trust and compliance, while longitudinal designs could track how norms evolve as regulations mature and platform-level policies institutionalize best practices. Together, these directions can deepen understanding of AI governance across diverse work contexts and inform more effective, user-centered approaches to policy design.

6 Conclusion

Across interviews and surveys with both workers and clients, our study demonstrates that the governance of AI in freelance work is not simply a matter of drafting rules, but of designing policies, norms, and platform mechanisms that are interpretable and actionable in practice. Misalignments between workers’ disclosure practices, clients’ expectations, and formal policies highlight the limits of binary frameworks and underscore the importance of proportional, stage-aware, and scenario-driven approaches. Freelance marketplaces, as flexible and high-stakes environments, make visible the challenges of operationalizing regulation and offer a valuable testbed for developing usable governance models. By centering usability, negotiation, and trust, future policies can better align the realities of work with emerging regulatory requirements, advancing both effective AI governance and fairer conditions for workers.

Acknowledgments

We thank the participants of the CHIWORK 2025 Workshop on Navigating Generative AI Disclosure, Ownership, and Accountability in Co-Creative Domains and Data & Society’s Workshop on What Is Work Worth? for their thoughtful perspectives and feedback, which helped shape this work.
We thank the freelance workers and clients who participated in our study and shared their experiences, without whom this work would not have been possible.

References

[1] Juan Carlos Alvarez De La Vega, Marta E. Cecchinato, and John Rooksby. 2022. Design Opportunities for Freelancing Platforms: Online Freelancers’ Views on a Worker-Centred Design Fiction. In 2022 Symposium on Human-Computer Interaction for Work. ACM, Durham NH USA, 1–19. doi:10.1145/3533406.3533410
[2] Tae Hyun Baek, Jungkeun Kim, and Jeong Hyun Kim. 2024. Effect of disclosing AI-generated content on prosocial advertising evaluation. International Journal of Advertising (Sept. 2024), 1–22. doi:10.1080/02650487.2024.2401319
[3] Rex Chen, Ruiyi Wang, Norman Sadeh, and Fei Fang. 2025. Missing Pieces: How Do Designs that Expose Uncertainty Longitudinally Impact Trust in AI Decision Aids? An In Situ Study of Gig Drivers. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. ACM, Athens Greece, 790–816. doi:10.1145/3715275.3732050
[4] Inyoung Cheong, Alicia Guo, Mina Lee, Zhehui Liao, Kowe Kadoma, Dongyoung Go, Joseph Chee Chang, Peter Henderson, Mor Naaman, and Amy X. Zhang. 2025. Penalizing Transparency? How AI Disclosure and Author Demographics Shape Human and AI Judgments About Writing. doi:10.48550/arXiv.2507.01418 arXiv:2507.01418 [cs].
[5] Carolyn Crist. 2025. Employee confusion over AI policies persists, reports show. https://w.hrdive.com/news/workplace-ai-policies-employee-confusion/756945/.
[6] Mateusz Dolata, Norbert Lange, and Gerhard Schwabe. 2024. Development in times of hype: How freelancers explore Generative AI?. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. ACM, Lisbon Portugal, 1–13. doi:10.1145/3597503.3639111
[7] Mateusz Dolata, Norbert Lange, and Gerhard Schwabe. 2025. More Attention, Transformation, Acceleration, and Exploration: Freelance Developers’ Take on Hypes.
In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–21. doi:10.1145/3706598.3713097
[8] Fiona Draxler, Anna Werner, Florian Lehmann, Matthias Hoppe, Albrecht Schmidt, Daniel Buschek, and Robin Welsch. 2024. The AI Ghostwriter Effect: When Users do not Perceive Ownership of AI-Generated Text but Self-Declare as Authors. ACM Transactions on Computer-Human Interaction 31, 2 (April 2024), 1–40. doi:10.1145/3637875
[9] Ziv Epstein, Mengying Cathy Fang, Antonio Alonso Arechar, and David Gertler Rand. 2023. What label should be applied to content produced by generative AI? (July 2023). doi:10.31234/osf.io/v4mfz
[10] Phyliss Jia Gai, Jiayi Hou, and Yanping Tu. 2025. Competence Penalty Is a Barrier to the Adoption of New Technology. Available at SSRN 5255039 (2025).
[11] Louie Giray. 2024. AI shaming: the silent stigma among academic writers and researchers. Annals of Biomedical Engineering 52, 9 (2024), 2319–2324.
[12] Alex Glynn. 2024. Suspected undeclared use of artificial intelligence in the academic literature: an analysis of the Academ-AI dataset. arXiv preprint arXiv:2411.15218 (2024).
[13] Colin M. Gray, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L. Toombs. 2018. The Dark (Patterns) Side of UX Design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–14. doi:10.1145/3173574.3174108
[14] Jeffrey T Hancock, Mor Naaman, and Karen Levy. 2020. AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations. Journal of Computer-Mediated Communication 25, 1 (March 2020), 89–100. doi:10.1093/jcmc/zmz022
[15] Jessica He and Hyo Jin Do. 2025. Exploring Industry Practices and Perspectives on AI Attribution in Co-Creative Use Cases. In ACM International Conference on Intelligent User Interfaces.
[16] Jessica He, Stephanie Houde, and Justin D. Weisz. 2025.
Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–18. doi:10.1145/3706598.3713522
[17] Sigurd Birk Heimstad, Anders Hauge Wien, and Tarje Gaustad. 2025. Machine heuristic in algorithm aversion: Perceived creativity and effort of output created by or with artificial intelligence. Computers in Human Behavior: Artificial Humans 5 (Aug. 2025), 100190. doi:10.1016/j.chbah.2025.100190
[18] Sigurd Birk Heimstad, Anders Hauge Wien, and Tarje Gaustad. 2025. Machine heuristic in algorithm aversion: Perceived creativity and effort of output created by or with artificial intelligence. Computers in Human Behavior: Artificial Humans 5 (Aug. 2025), 100190. doi:10.1016/j.chbah.2025.100190
[19] Joo-Wha Hong. 2018. Bias in Perception of Art Produced by Artificial Intelligence. In Human-Computer Interaction. Interaction in Context, Masaaki Kurosu (Ed.). Vol. 10902. Springer International Publishing, Cham, 290–303. doi:10.1007/978-3-319-91244-8_24 Series Title: Lecture Notes in Computer Science.
[20] Jane Hsieh, Oluwatobi Adisa, Sachi Bafna, and Haiyi Zhu. 2023. Designing Individualized Policy and Technology Interventions to Improve Gig Work Conditions. In Proceedings of the 2nd Annual Meeting of the Symposium on Human-Computer Interaction for Work. ACM, Oldenburg Germany, 1–9. doi:10.1145/3596671.3598576
[21] Jessica Huang, Ning F. Ma, Veronica A. Rivera, Tabreek Somani, Patrick Yung Kang Lee, Joanna Mcgrenere, and Dongwook Yoon. 2024. Design Tensions in Online Freelancing Platforms: Using Speculative Participatory Design to Support Freelancers’ Relationships with Clients. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 2024), 1–28. doi:10.1145/3653700
[22] Srihari Hulikal Muralidhar, Sean Rintel, and Siddharth Suri. 2022. Collaboration, Invisible Work, and The Costs of Macrotask Freelancing.
Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (Nov. 2022), 1–25. doi:10.1145/3555175
[23] Angel Hsing-Chi Hwang, Q. Vera Liao, Su Lin Blodgett, Alexandra Olteanu, and Adam Trischler. 2024. "It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models. https://arxiv.org/abs/2411.13032. arXiv:2411.13032 [cs.HC]
[24] Angel Hsing-Chi Hwang and Yao-Yuan Yang. 2025. Popularity Matters: Revealing Their Use of AI Tools Harms Freelancers with Smaller Follower Bases. Academy of Management Proceedings 2025, 1 (July 2025), 20292. doi:10.5465/AMPROC.2025.20292abstract
[25] Maurice Jakesch, Megan French, Xiao Ma, Jeffrey T. Hancock, and Mor Naaman. 2019. AI-Mediated Communication: How the Perception that Profile Text was Written by AI Affects Trustworthiness. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Glasgow Scotland Uk, 1–13. doi:10.1145/3290605.3300469
[26] Maurice Jakesch, Jeffrey T. Hancock, and Mor Naaman. 2023. Human heuristics for AI-generated language are flawed. Proceedings of the National Academy of Sciences 120, 11 (March 2023), e2208839120. doi:10.1073/pnas.2208839120
[27] Mohammad Hossein Jarrahi. 2018. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons 61, 4 (July 2018), 577–586. doi:10.1016/j.bushor.2018.03.007
[28] Mohammad Hossein Jarrahi, Gemma Newlands, Min Kyung Lee, Christine T. Wolf, Eliscia Kinder, and Will Sutherland. 2021. Algorithmic management in a work context. Big Data & Society 8, 2 (July 2021), 20539517211020332. doi:10.1177/20539517211020332
[29] Kowe Kadoma, Danaé Metaxa, and Mor Naaman. 2025. Generative AI and Perceptual Harms: Who’s Suspected of using LLMs?. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–17. doi:10.1145/3706598.3713897
[30] Andrew Konya, Lisa Schirch, Colin Irwin, and Aviv Ovadya. 2023.
Democratic Policy Development using Collective Dialogues and AI. arXiv:2311.02242 [cs.CY] https://arxiv.org/abs/2311.02242
[31] Tzu-Sheng Kuo, Quan Ze Chen, Amy X. Zhang, Jane Hsieh, Haiyi Zhu, and Kenneth Holstein. 2025. PolicyCraft: Supporting Collaborative and Participatory Policy Design through Case-Grounded Deliberation. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). Association for Computing Machinery, New York, NY, USA, Article 805, 24 pages. doi:10.1145/3706598.3713865
[32] Lin Kyi, Amruta Mahuli, M. Six Silberman, Reuben Binns, Jun Zhao, and Asia J. Biega. 2025. Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–16. doi:10.1145/3706598.3713799
CHI ’26, April 13–17, 2026, Barcelona, Spain. Hwang et al.
[33] Toby Jia-Jun Li, Yuwen Lu, Jaylexia Clark, Meng Chen, Victor Cox, Meng Jiang, Yang Yang, Tamara Kay, Danielle Wood, and Jay Brockman. 2022. A Bottom-Up End-User Intelligent Assistant Approach to Empower Gig Workers against AI Inequality. In 2022 Symposium on Human-Computer Interaction for Work. ACM, Durham, NH, USA, 1–10. doi:10.1145/3533406.3533418
[34] Shuhao Ma, Zhiming Liu, Valentina Nisi, Sarah E Fox, and Nuno Jardim Nunes. 2025. Speculative Job Design: Probing Alternative Opportunities for Gig Workers in an Automated Future. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–18. doi:10.1145/3706598.3713885
[35] Uwe Messer. 2024. Co-creating art with generative artificial intelligence: Implications for artworks and artists. https://linkinghub.elsevier.com/retrieve/pii/S2949882124000161. Computers in Human Behavior: Artificial Humans 2, 1 (Jan. 2024), 100056. doi:10.1016/j.chbah.2024.100056
[36] Hannah Mieczkowski, Jeffrey T. Hancock, Mor Naaman, Malte Jung, and Jess Hohenstein. 2021.
AI-Mediated Communication: Language Use and Interpersonal Effects in a Referential Communication Task. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (April 2021), 1–14. doi:10.1145/3449091
[37] Isabel Munoz, Michael Dunn, Steve Sawyer, and Emily Michaels. 2022. Platform-mediated Markets, Online Freelance Workers and Deconstructed Identities. Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (Nov. 2022), 1–24. doi:10.1145/3555092
[38] Nicholas G Otis, Solène Delecourt, Katelyn Cranney, and Rembrand Koning. 2024. Global evidence on gender gaps and generative AI. Harvard Business School, Boston, MA, USA.
[39] Julien Porquet, Sitong Wang, and Lydia B Chilton. 2025. Copying style, Extracting value: Illustrators’ Perception of AI Style Transfer and its Impact on Creative Labor. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, 1–16. doi:10.1145/3706598.3713854
[40] Jessica A Reif, Richard P Larrick, and Jack B Soll. 2025. Evidence of a social evaluation penalty for using AI. Proceedings of the National Academy of Sciences 122, 19 (2025), e2426766122.
[41] Bryan Robinson. 2024. 77% Of Employees Lost On How To Use AI In Their Careers, New Study Shows. https://www.forbes.com/sites/bryanrobinson/2024/09/09/77-of-employees-lost-on-how-to-use-ai-in-their-careers-new-study-shows/.
[42] Muhammad Sadi Adamu. 2021. Problematising Identity, Positionality, and Adequacy in HCI4D Fieldwork: A Reflection. In Proceedings of the 3rd African Human-Computer Interaction Conference: Inclusiveness and Empowerment (Maputo, Mozambique) (AfriCHI ’21). Association for Computing Machinery, New York, NY, USA, 65–74. doi:10.1145/3448696.3448703
[43] Advait Sarkar. 2025. AI Could Have Written This: Birth of a Classist Slur in Knowledge Work. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–12.
[44] Oliver Schilke and Martin Reimann. 2025.
The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes 188 (2025), 104405.
[45] Edward Segal. 2023. New Report Provides Reality Check About Freelancers In The Workforce. https://www.forbes.com/sites/edwardsegal/2023/12/12/new-report-provides-reality-check-about-freelancers-in-the-workforce/. Forbes (2023).
[46] Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, Paul Nicholas, N’Mah Yilla-Akbari, Jess Gallegos, Andrew Smart, Emilio Garcia, and Gurleen Virk. 2023. Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. ACM, Montréal, QC, Canada, 723–741. doi:10.1145/3600211.3604673
[47] Trevor Stalnaker, Nathan Wintersgill, Oscar Chaparro, Laura A. Heymann, Massimiliano Di Penta, Daniel M German, and Denys Poshyvanyk. 2025. Developer Perspectives on Licensing and Copyright Issues Arising from Generative AI for Software Development. ACM Transactions on Software Engineering and Methodology (June 2025), 3743133. doi:10.1145/3743133
[48] Na Sun and Donald Kalar. 2025. Gemini at Work: Knowledge Workers’ Perceptions and Assessment of Productivity Gains. In Proceedings of the 2025 ACM Designing Interactive Systems Conference (DIS ’25). Association for Computing Machinery, New York, NY, USA, 3681–3695. doi:10.1145/3715336.3735679
[49] Will Sutherland, Mohammad Hossein Jarrahi, Michael Dunn, and Sarah Beth Nelson. 2020. Work Precarity and Gig Literacies in Online Freelancing. Work, Employment and Society 34, 3 (June 2020), 457–475. doi:10.1177/0950017019886511
[50] Francisco Tigre Moura. 2023. Artificial intelligence, creativity, and intentionality: The need for a paradigm shift. The Journal of Creative Behavior 57, 3 (2023), 336–338.
[51] Benjamin Toff and Felix M Simon. 2025. “Or they could just not use it?”: The dilemma of AI disclosure for audience trust in news.
The International Journal of Press/Politics 30, 4 (2025), 881–903.
[52] Michelle Vaccaro, Abdullah Almaatouq, and Thomas Malone. 2024. When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour 8, 12 (Oct. 2024), 2293–2303. doi:10.1038/s41562-024-02024-1
[53] Allison Woodruff, Renee Shelby, Patrick Gage Kelley, Steven Rousso-Schindler, Jamila Smith-Loud, and Lauren Wilcox. 2024. How Knowledge Workers Think Generative AI Will (Not) Transform Their Industries. In Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, Honolulu, HI, USA, 1–26. doi:10.1145/3613904.3642700
[54] Angie Zhang, Rocita Rana, Alexander Boltz, Veena Dubal, and Min Kyung Lee. 2024. Data Probes as Boundary Objects for Technology Policy Design: Demystifying Technology for Policymakers and Aligning Stakeholder Objectives in Rideshare Gig Work. In Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, Honolulu, HI, USA, 1–21. doi:10.1145/3613904.3642000
[55] Zhiping Zhang, Chenxinran Shen, Bingsheng Yao, Dakuo Wang, and Tianshi Li. 2025. Secret Use of Large Language Model (LLM). Proceedings of the ACM on Human-Computer Interaction 9, 2 (May 2025), 1–26. doi:10.1145/3711061
[56] Carl Zimmer. [n. d.]. Fraudulent Scientific Papers Are Rapidly Increasing, Study Finds. https://www.nytimes.com/2025/08/04/science/04hs-science-papers-fraud-research-paper-mills.html. Accessed: 2025-08-17.
[57] Araz Zirar, Syed Imran Ali, and Nazrul Islam. 2023. Worker and workplace Artificial Intelligence (AI) coexistence: Emerging themes and research agenda. Technovation 124 (June 2023), 102747. doi:10.1016/j.technovation.2023.102747

Appendix A  Methodological Details

We conducted a three-stage study, iteratively synthesizing responses from both freelance workers and clients.
We began with semi-structured interviews with workers to understand their use of AI applications for work, their current approaches to AI disclosure, and any outstanding concerns (Study 1.1). We then replicated and expanded emerging findings from the interviews through a survey study with a larger group of freelance workers (Study 1.2). In Study 2, we sought clients’ perspectives and expectations on workers’ AI use and disclosure through a survey. We designed the survey by mirroring questions and insights from Study 1.1 and Study 1.2 in order to make direct comparisons between clients’ and workers’ responses. We also asked for clients’ current “AI policies” on whether and how workers should apply and disclose AI at work. Finally, we presented these AI policies to workers and sought their responses through another round of survey (Study 3). The full study protocol was reviewed and approved by the authors’ Institutional Review Board (IRB).

A.1  Study 1.1: Interview with Freelance Workers

A.1.1  Recruitment Approaches & Interview Participants. We recruited N = 41 freelancers from different industries across different freelance platforms to participate in Study 1.1’s interviews. To diversify the types of freelancers recruited, we posted the same public recruitment message in multiple channels across five freelance platforms (Upwork, Fiverr, Freelancer.com, People Per Hour, and Toptal) and let interested freelancers sign up to participate in the study organically. Below is the recruitment message:

Looking for short-term research participants: Share your experience as a freelancer!

We are a team of researchers at [Anonymous University]. We invite freelancers to share their experiences working and seeking opportunities on freelance platforms. You will participate in a short interview (30–40 minutes) where one of the researchers on our team will ask you questions about your freelance experiences.
We will schedule a time for you to participate in the interview at your convenience. We will provide a video-conferencing link for the study session. Your contributions will help us understand the challenges and successes faced by freelancers and will inform how we design technologies to address these challenges. This is an opportunity to provide valuable feedback to advance research in this topic area. If you have unique experiences or perspectives, we want to hear from you!

We screened each candidate’s profile to ensure they indeed had prior freelance and industry experience and distributed an informed consent form to qualified participants. We report their demographics, freelance experience, and professional domains in Section B.

A.1.2  Interview Protocol. Interviews with worker participants consisted of three major topics: (1) understanding workers’ freelance experience and background, (2) understanding workers’ AI use: their motivations, concerns, and practice of using AI for work, and (3) understanding workers’ practice of AI disclosure: their decisions and rationale for disclosing (or not disclosing) AI use as well as their beliefs about the potential impact of such practice. All interviews were conducted online via Zoom. Each interview lasted around 45 minutes. Below, we enclose the interview protocol:

Introduction: Please briefly describe your job and experience as a freelancer.
• How long have you been working as a freelancer? At what capacity (e.g., full-time or part-time, how many hours per week)?
• What types of work do you do as a freelance worker?

AI Use: Do you use AI for any part of your work?
• If yes, how and what do you use AI for?
• How does the use of AI influence your work performance?

AI Disclosure: Whether and how do you plan to disclose your use of AI for work?
• (If you are not using AI at work yourself, do you see the need for people to disclose their use of AI for work to their clients?)
• Please elaborate further: What encourages/discourages you from disclosing your use of AI for work?
• Do you hold any concern about disclosing your use of AI for work?
– Do you think disclosing the use of AI influences the types of clients who want to work with you and the types of opportunities you could cultivate on the platform? How so?
– Do you think disclosing the use of AI influences your work relationship and trust-building with clients? How so?
– Do you think disclosing the use of AI influences how you are treated and compensated on the platform? How so?

A.1.3  Data Collection & Analysis. All interview sessions were video-recorded through Zoom. Researchers also took notes during each interview session, recording participants’ responses to our research questions, resulting in a total of 87 pages of notes. We used Otter.ai to transcribe all recorded interviews for data analysis. We adopted an open-coding approach to data analysis and did not use a predefined codebook to guide the coding process. We used a digital whiteboard (Miro) to facilitate the coding process. For each participant, we put down short summaries of their responses (with corresponding quotes) to each of our research questions on digital sticky notes. For example, within each participant’s responses to their thoughts on AI disclosure, we extracted (a) their current practice, (b) their motivation to adopt such a practice, and (c) their expected consequences of AI disclosure and put them down on three separate sticky notes. For each research question, we then grouped participants’ responses by similarity. The lead author conducted an initial round of coding to produce preliminary clusters of notes for each research question.
The research team then met weekly or bi-weekly to collaboratively review and discuss these notes, iteratively identifying and forming a working consensus on emerging patterns, themes, and categories across interviews. Through this inductive process and team-based reflection, we produced the five categories of AI disclosure practice in Table 1 and other themes in the findings.

To reduce subjectivity during data analysis, we also actively sought feedback from fellow researchers outside our team throughout the data analysis process. Still, we acknowledge that our own backgrounds and experiences can influence analyses [42]. Hence, we share our positionality statement here for transparency:

We formed our research team with three HCI researchers (one with primary training in computer science, two with primary training in social and behavioral science), one AI/machine learning researcher, and one quantitative social science researcher. All researchers have both industry and academic research experience. Per our self-identified demographics, the team consists of four Asian females and one Asian male. We expect the interdisciplinary backgrounds of our research team to contribute to more diverse perspectives throughout the research process.

A.2  Study 1.2: Survey with Freelance Workers

A.2.1  Questionnaire Development. We constructed a questionnaire by adapting interview prompts into survey items and adding follow-up questions based on emerging findings. For example, when asking how participants disclose their AI use, we included both a free-text response and a multiple-choice item with three options derived from Study 1.1 interviews: no disclosure, passive disclosure (e.g., disclosing only when clients ask), and active disclosure (e.g., proactively reporting how AI is used to clients).
Because several interviewees also noted that their AI use depends on client policies, we added questions (Items 7–9) on whether participants had encountered such guidelines and how they responded. The survey was administered via Qualtrics and took approximately 30 minutes to complete. Below, we enclose the full survey:

1. Do you use AI for any part of your professional work? [Yes/No]
2. Please elaborate on your response further. Why and what do you use AI for? (Or why not?) [Free text response]
3. Among the ways that you use AI, would you disclose any of them to your employers?
• Yes, I would proactively disclose to my employers.
• Yes, I would disclose if my employers ask about it.
• No, I would not disclose at all.
4. If you answer "Yes" to the question above, please specify what and how you would disclose to your clients. [Free text response]
5. Do you have clients or employers explicitly stating their "AI policies" in their job descriptions? (e.g., statements about whether and how employees can use AI for work) [Yes/No]
6. If you answer "Yes" to the question above, please copy or summarize the statement(s) that you saw from clients and/or employers. [Free text response]
7. How do you interpret the clients’ expectation for AI use based on the "AI policy" that you mentioned above?
• I cannot use AI for any part of my work.
• I can use AI but only to support minor parts of my work, such as: [Free text response]
• I can use AI to support key job functions for my work, such as: [Free text response]
• I can use AI as much as possible to facilitate my work.
• I can use AI to automate my work.
8. In reality, how would you use AI in response to the "AI policy" that you mentioned above?
• I would not use AI for any part of my work.
• I would use AI to support minor parts of my work, such as: [Free text response]
• I would use AI to support key job functions for my work, such as: [Free text response]
• I would use AI as much as possible to facilitate my work.
• I would use AI to automate my work.
9. Considering all the "AI policies" that you have encountered so far, for how many of them are you able to tell clients’ and/or employers’ expectations for using AI at work?
• None of them (around 0%)
• A few of them (around 25%)
• About half of them (around 50%)
• Many of them (around 75%)
• Most of them (around 100%)
10. Do you think employers can tell whether you use AI for work?
• Never (around 0% of time)
• Rarely (around 25% of time)
• Sometimes (around 50% of time)
• Often (around 75% of time)
• Always (around 100% of time)

A.2.2  Survey Participants. We adopted the same recruitment and screening strategies and expanded to recruit a larger sample of N = 100 for the survey study. We report participants’ demographics and freelance experiences in Section B.

A.3  Study 2: Survey with Clients on Perspectives and Policies for AI Use and Disclosure

A.3.1  Questionnaire Development. In Study 2, we mirrored the worker survey from Study 1.2 by asking clients on freelance platforms about their expectations for workers’ AI use. Specifically, we invited them to share any “AI policies” they had in place and to elaborate on how they expected workers to align their AI use with these guidelines. Below, we enclose the full questionnaire from Study 2:

1. Do you allow workers to use AI for work? [Yes/No]
2. Please elaborate on your response further. Why or why not? [Free text response]
3. What are the forms of AI use that you allow (e.g., editing text, drafting emails, generating ideas)? Please specify and list out as many as possible. [Free text response]
4. In what cases would you encourage workers to use AI for work? [Free text response]
5. In what cases would you be concerned about workers using AI for work?
[Free text response]
6. Based on your own estimate, do you think your freelance workers use AI for work?
• None of them (around 0%)
• A few of them (around 25%)
• About half of them (around 50%)
• Many of them (around 75%)
• Most of them (around 100%)
7. Based on your own estimate, do you think your freelance workers disclose their use of AI for work?
• None of them (around 0%)
• A few of them (around 25%)
• About half of them (around 50%)
• Many of them (around 75%)
• Most of them (around 100%)
8. Based on your own estimate, can you tell whether your freelance workers use AI for work?
• Never (around 0% of time)
• Rarely (around 25% of time)
• Sometimes (around 50% of time)
• Often (around 75% of time)
• Always (around 100% of time)
9. Whether and how should workers disclose their use of AI for work?
• Yes, they have to proactively disclose their use of AI.
• Yes, they have to disclose their use of AI if I ask.
• No, I don’t mind whether they disclose their use of AI.
• Not applicable. My workers don’t use AI at all.
10. Please further describe how you ask your workers to disclose AI use, if at all. What information do you ask them to provide? [Free text response]
11. Do you explicitly state an “AI policy” (i.e., statements about whether and how workers can use AI for work) in your job descriptions? [Yes/No]
12. Please copy your AI policy here. If you don’t currently have an AI policy, please write one that reflects how you would like your workers to use AI. [Free text response]
13. Based on this "AI policy," how do you expect workers to use AI for work, if at all?
• Workers cannot use AI for any part of their work.
• Workers can use AI, but only to support minor parts of their work, such as [Free text response].
• Workers can use AI to support primary, key job functions for their work, such as [Free text response].
• Workers can use AI as much as possible to facilitate their work.
• Workers can use AI to fully automate their work.
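Because the worker survey (Study 1.2, item 10) and the client survey above (item 8) anchor the same five response options to percentages of time, the two groups' responses can be placed on a common numeric scale and compared directly. The sketch below illustrates one way to do so; the `mean_estimate` helper and all response data are illustrative assumptions, not the paper's actual analysis code or data.

```python
# Map the shared five-point frequency scale to its percentage anchors.
# Labels follow the survey options; data below is hypothetical.
SCALE = {
    "Never (around 0% of time)": 0.00,
    "Rarely (around 25% of time)": 0.25,
    "Sometimes (around 50% of time)": 0.50,
    "Often (around 75% of time)": 0.75,
    "Always (around 100% of time)": 1.00,
}

def mean_estimate(responses):
    """Average the anchored percentages across a list of responses."""
    return sum(SCALE[r] for r in responses) / len(responses)

# Illustrative responses only (hypothetical, not study data):
workers = ["Often (around 75% of time)", "Sometimes (around 50% of time)"]
clients = ["Rarely (around 25% of time)", "Sometimes (around 50% of time)"]

# A positive gap would mean workers assume clients can detect AI use
# more often than clients themselves report being able to.
gap = mean_estimate(workers) - mean_estimate(clients)
```

Anchoring both instruments to the same percentage labels is what makes this kind of direct worker-client comparison possible without any rescaling step.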
A.3.2  Participants. We adopted the same recruitment and screening strategies and expanded to recruit a larger sample of N = 145 for the survey study. We report their demographics and freelance experiences in Section B.

A.4  Study 3: Survey with Workers on Interpretation of AI Policies

In the previous study, clients shared their AI policies and indicated the intended scope of AI use (i.e., No AI use, Use AI for minor tasks, Use AI for major tasks, Use AI as much as possible, or Use AI to fully automate tasks). We grouped these policies into the five corresponding categories based on clients’ own responses and adopted them as probes for Study 3. Specifically, in this follow-up study, workers evaluated these policies through a survey. Each worker was randomly assigned to rate five policies, one from each of the five categories.

A.4.1  Questionnaire Development. Besides informed consent and demographic questions, the Study 3 survey consisted of two main components:

Workers’ attitudes toward AI use and disclosure. Building on the same measures used in Study 1.1 and Study 2, we asked workers about their practices and perspectives related to AI in freelance work. These questions addressed: (1) their own adoption and use of AI, (2) their perceptions of clients’ expectations for AI use, and (3) prior experiences with clients’ AI policies and how they interpreted them.

Workers’ responses to clients’ AI policies. We then presented workers with client-defined AI policies (collected in Study 2) and asked them to evaluate each. For every policy, workers rated: (1) how they interpreted the client’s expectations for AI use and disclosure, (2) how they would actually use and disclose AI under that policy, and (3) whether specific AI use cases (the permitted use cases listed by clients in Study 2) were permitted.
Below, we enclose the full questionnaire from Study 3:

[Read AI Policy] Below is an "AI policy" that outlines an employer’s expectations and guidelines for AI use in the workplace. Please read through it carefully and answer the following questions.

1. Based on this "AI policy," how do you interpret the employer’s expectation for AI use for work?
• Workers cannot use AI for any part of their work
• Workers can use AI, but only to support minor parts of their work
• Workers can use AI to support primary, key job functions for their work
• Workers can use AI as much as possible to facilitate their work
• Workers can use AI to fully automate their work
2. Given this "AI policy," how might you use AI for work in reality?
• I would not use AI for any part of my work
• I would use AI, but only to support minor parts of my work
• I would use AI to support primary, key job functions for my work
• I would use AI as much as possible to facilitate my work.
• I would use AI to fully automate my work.
3. Based on this "AI policy," how do you interpret the client’s expectation for disclosure of AI use in the workplace?
• Workers should proactively disclose their AI use.
• Workers should disclose their use of AI when the client asks about it.
• Workers do not need to disclose their use of AI.
• Not applicable. Workers should not use AI at all.
4. Given this "AI policy," how might you disclose AI use in reality?
• I would proactively disclose my AI use.
• I would disclose my AI use when the client asks about it.
• I would not disclose my AI use.
• Not applicable. I would not use AI at all.
5. Based on this "AI policy," are the following forms of AI use permitted in the workplace?
• (Read AI use cases)
• Rate AI use given the AI policy by: (a) Allowed; (b) Not allowed; (c) It depends; (d) Need manager’s approval; (e) I’m not sure.

A.4.2  Participants. Following the same recruitment approach from previous studies, we recruited N = 100 freelance workers to participate in Study 3.
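The Study 3 design pairs each policy's client-intended category (declared in Study 2) with workers' interpretations of that policy (question 1 above), so misinterpretation can be quantified as a per-category agreement rate. A minimal sketch of that comparison follows; the function name and the sample ratings are hypothetical illustrations, not the authors' analysis code.

```python
# Hypothetical sketch: compare client-intended AI-use categories against
# workers' interpretations of the same policies.
from collections import defaultdict

def agreement_by_category(ratings):
    """ratings: (client_intent, worker_interpretation) pairs.
    Returns the share of matching interpretations per intended category."""
    hits, totals = defaultdict(int), defaultdict(int)
    for intent, interpretation in ratings:
        totals[intent] += 1
        hits[intent] += int(intent == interpretation)
    return {cat: hits[cat] / totals[cat] for cat in totals}

# Illustrative ratings only; category labels follow the survey options.
ratings = [
    ("No AI use", "No AI use"),
    ("No AI use", "Minor tasks"),          # a strict policy read loosely
    ("Minor tasks", "Minor tasks"),
    ("Key job functions", "Minor tasks"),  # a permissive policy read strictly
]
rates = agreement_by_category(ratings)
```

Breaking agreement out by intended category (rather than one overall rate) shows which kinds of policies, strict or permissive, are misread most often, which matches how the study reports interpretation gaps.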
B  Participants’ Demographics

Demographics                      Study 1.1    Study 1.2   Study 2      Study 3
Gender
  Male                            46.3%        53%         51.38%       50.00%
  Female                          46.3%        39%         33.33%       48.96%
  Non-binary/third gender         4.9%         1%          1.39%        1.04%
  Prefer not to disclose          2.4%         7%          2.78%        0%
Age (mean ± SD)                   40.35±11.29  36.4±9.8    42.31±11.20  37.29±9.88
Ethnicity
  White/Caucasian                 61%          57.1%       61.11%       48.41%
  African/Black American          14.6%        21.4%       9.72%        23.81%
  Asian                           4.9%         8.2%        6.94%        7.14%
  Hispanic, Latino, Latinx        4.9%         6.1%        4.17%        3.17%
  Multi-racial or multi-cultural  7.3%         6.1%        12.50%       15.87%
  Native American                 0%           1%          0%           0%
  Prefer not to disclose          7.3%         0%          2.78%        0.79%

C  Freelance Service Offered/Sought by Workers and Clients

Note: Each worker can provide more than one type of service.

Freelance Service Type                       Study 1.1  Study 1.2  Study 2  Study 3
Healthcare and Medicine                      12%        7%         16%      14%
Information Technology (IT) and Engineering  23%        27%        23%      22%
Education and Training                       8%         5%         6%       8%
Sales and Marketing                          15%        12%        13%      14%
Finance and Accounting                       5%         18%        13%      10%
Human Resources (HR) and Recruitment         15%        2%         16%      1%
Administration and Support                   13%        3%         11%      7%
Manufacturing and Production                 5%         3%         9%       4%
Legal Services                               5%         1%         2%       3%
Hospitality and Tourism                      7%         7%         9%       18%
Creative Arts and Media                      6%         2%         8%       3%
Real Estate and Property Management          2%         4%         3%       4%
Transportation and Logistics                 2%         1%         2%       1%
Science and Research                         3%         7%         3%       1%
Public Sector and Government                 12%        5%         6%       1%
Other                                        17%        17%        14%      9%