How Do Users Perceive Deepfake Personas? Investigating the Deepfake User Perception and Its Implications for Human-Computer Interaction

Ilkka Kaate, University of Turku, Finland, iokaat@utu.fi
Joni Salminen, University of Vaasa; and Turku School of Economics, Finland, joolsa@utu.fi
Soon-Gyo Jung, Qatar Computing Research Institute, Qatar, sjung@hbku.edu.qa
Hind Almerekhi, Qatar Computing Research Institute, Qatar, hialmerekhi@hbku.edu.qa
Bernard J. Jansen, Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar, jjansen@acm.org

ABSTRACT
Although deepfakes have a negative connotation in human-computer interaction (HCI) due to their risks, they also involve many opportunities, such as communicating user needs in the form of a "living, talking" deepfake persona. To scope and better understand these opportunities, we present a qualitative analysis of 46 participants' think-aloud transcripts based on interacting with deepfake personas and human personas, representing a potentially beneficial application of deepfakes for HCI. Our qualitative analysis of 92 think-aloud records indicates five central user deepfake themes: (1) Realism, (2) User Needs, (3) Distracting Properties, (4) Added Value, and (5) Rapport. The results indicate various challenges in deepfake user perception that technology developers need to address before the potential of deepfake applications can be realized for HCI.

CCS CONCEPTS
• Human-centered computing → Human computer interaction (HCI).

KEYWORDS
Deepfakes, user perceptions, user experience, HCI applications

ACM Reference Format:
Ilkka Kaate, Joni Salminen, Soon-Gyo Jung, Hind Almerekhi, and Bernard J. Jansen. 2023. How Do Users Perceive Deepfake Personas? Investigating the Deepfake User Perception and Its Implications for Human-Computer Interaction. In 15th Biannual Conference of the Italian SIGCHI Chapter (CHItaly 2023), September 20–22, 2023, Torino, Italy. ACM, New York, NY, USA, 12 pages.
https://doi.org/10.1145/3605390.3605397

This work is licensed under a Creative Commons Attribution International 4.0 License (https://creativecommons.org/licenses/by/4.0/). CHItaly 2023, September 20–22, 2023, Torino, Italy. © 2023 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0806-0/23/09.

1 INTRODUCTION

Human-computer interaction (HCI) is affected by Artificial Intelligence (AI) as more and more information systems integrate AI components, impacting various aspects of people's everyday lives [58]. The remarkable progress of AI technologies, often leveraging machine learning (ML) innovations, has introduced opportunities for empowering users in all sectors, including consumer-oriented information systems [2, 7, 10, 18] and professional domains like medical informatics [68]. One of the promising technologies for enhancing user interaction with information systems is deepfake technology, which creates photorealistic human representations, typically in the form of videos [45].

While much research on deepfakes has thus far focused on their risks and negative implications, such as manipulation, misinformation, and fake news [19, 22, 38], it is also important to acknowledge deepfakes' positive opportunities for HCI [15, 45]. Deepfake avatars or personas could enhance multiple aspects of user experience (UX), such as improving avatar quality in Metaverse applications [65], increasing the realism of customer service and sales agents from chatbots to immersive agents, and acting as human replacements to educate or inform people about various topical matters. Therefore, deepfakes can provide instrumental value in improving user self-expression and interaction quality between organizations and their customers, as well as other potential benefits for design; for example, through the creation of personas, i.e., fictitious presentations of various user groups of interest [13, 28].
However, a central antecedent to realizing these potential benefits is that the deepfakes are experienced positively by the end-users. If users find deepfakes, for example, scary, dull, or creepy, as per the uncanny valley effect [42], then users would likely resist adopting and using the deepfakes and instead prefer other interaction techniques. Therefore, how users perceive deepfakes is a central component in integrating deepfakes into real information systems. Alas, we still know little about this important human factor, so deepfake user perception requires more research than is currently available in the HCI literature. Without empirically oriented research informing us of the crucial dimensions of how deepfakes are perceived and why, it is difficult to ascertain the pros and cons of implementing deepfakes in real information systems towards positive net effects for UX and for organizations interested in integrating deepfakes into their offerings. Hence, user studies focused on user perception of deepfakes are needed.

To this end, our study aims to address this knowledge gap by exploring central themes in users' deepfake user perception. We focus on the research question, "How do people perceive deepfakes for a design task?". To address this question, we conduct an experimental study with 46 participants in a lab setting. We apply the think-aloud method [48] to record and transcribe users' 'cognitive walkthroughs' of using deepfake personas (i.e., personas created using deepfake technology) for a design task relative to real human personas (i.e., actors expressing user needs).
Our findings shed light on the nature of the novel concept of deepfake user perception, offering avenues for theorization and further empirical work on understanding human-deepfake interaction in greater detail, and providing implications for how AI technologies can be integrated into information systems to improve UX-related goals.

The remainder of this work is organized as follows. Section 2 summarizes the existing research on deepfake user perception while justifying the need for studying deepfake user perception within the HCI domain. In Section 3, we explain the study methodology, including our experimental design, participant recruitment, data collection workflow, and how we conducted the analysis. This is followed by presenting the results in Section 4, which contain multiple themes of deepfake perception based on our analysis. We then discuss the results, including their implications for HCI. We finish by pointing out limitations and directions for future work regarding deepfake perception in Section 5.

2 LITERATURE REVIEW

As the potential implications of deepfakes have become more evident due to progress in AI technology, there has been an increase in research studies focused on deepfakes [45]. While most studies focus on deepfake detection [38], i.e., developing algorithms and models for this task, there has been a gradual rise in the number of studies exploring how deepfakes are perceived. Despite being a relatively novel field of research, deepfake perception studies have covered this topic from various angles using a wide range of techniques [44]. Table 1 presents the central themes in the current literature, particularly illustrating the strong role of detection studies.

Understanding how users perceive deepfakes is vital since user perceptions can have deciding consequences for using deepfakes in various applications, including virtual reality environments, virtual assistants, educational applications, and so on [59].
However, if deepfakes are to be used in these applications, users must have a positive experience with them. Should deepfakes be perceived as untrustworthy or confusing, this may result in a negative UX and diminished acceptance of the technology. Understanding how people perceive deepfakes might thus assist designers in creating more effective and user-friendly systems and applications that improve the overall UX [19, 31]. Thus, deepfake user perceptions are important as they can influence how users use and view such technology. For example, people may be more distrustful of a video's content if they know it is a deepfake. However, those who cannot differentiate between real and faked videos may be more prone to believe and spread incorrect information [22].

While deepfakes have received much attention due to their ability to manipulate images, videos, and audio, which has raised concerns about their possible misuse [69, 73], when used responsibly, deepfakes have several potential benefits that may give users a positive experience [14]. For example, deepfakes can assist people with disabilities by generating artificial sign language and facial emotions and by recreating the voice of those who cannot speak [11]. Deepfakes can also enhance players' experience in gaming through in-game aids [73]. Additionally, deepfakes can be utilized for educational purposes by enhancing learning experiences in innovative ways [14]. Deepfakes can improve education and provide a more personalized learning experience by producing educational content with characters that students are more accustomed to [61]. Such applications can make deepfakes less scary and more engaging [11]. Deepfake technology can also aid in rehabilitating people with addictions, such as smoking. The World Health Organization has created "Florence," an AI-based solution that assists people with tobacco addiction.
Users can engage in a virtual dialogue with "Florence" to boost their confidence in quitting smoking by developing a strategy to track their progress [49]. Deepfake technology can also be utilized in the art field to critique public figures and celebrities, and by activists to convey their message innovatively [66]. So, deepfakes can enhance UX in the HCI context by providing more engaging, personalized, and immersive interfaces. A deepfake interface that uses users' faces and/or voices to create videos, avatars, and other content may provide a personalized experience [74].

So, there are both pros and cons associated with deepfake technology. From the HCI perspective, the cons are more researched, leaving a gap for research on the benefits of deepfakes. One promising application area is personas, because personas are, by definition, not real but realistic people [5]. Additionally, personas communicate details about the user group they represent to designers; this communication could take place through a talking deepfake character, which could possibly yield a more focused and immersive medium for designers to learn about the persona's needs. Finally, personas have been applied in multiple domains from commercial to non-profit [54, 56, 57], increasing the potential impact of deepfake personas for design practice.

3 METHODOLOGY

3.1 Experiment Design

The experiment followed a between-subjects design, in which participants are divided into two or more groups that are each assigned to a treatment condition. In our study, one group was instructed to pay attention to any abnormal features in the video (i.e., glitches) – this was the Guided condition. The other group was not told anything about the possibility of glitches in the videos – this was the Non-guided condition.
As we tested one male and one female deepfake, along with one real male and one real female (who were hired actors), there were eight groups into which the study participants were divided (two guidance conditions × four first-video options). The experiment was pilot tested by three participants who were not included in the analysis of the results.

Two deepfake persona videos and two videos with real people acting the same content as in the two deepfake videos were used in the user study (see Table 2). The two deepfake videos used in the user study were created in a deepfake video creation system called Synthesia (https://app.synthesia.io/). The personas were chosen from a study by Carey et al. (2019). One female (Fiona) and one male (James) persona were chosen for balanced gender representation. The two chosen personas used in Carey et al. (2019) were transformed into a narrative form, a script in text format, which was uploaded to Synthesia for the deepfake video production. The same script was given to the human actors for recording the acted videos.

Table 1: Research on deepfake user perception, categorized based on the article's emphasis as Harmful (focusing on negative implications of deepfakes, n = 6), Detecting (focusing on deepfake detection, n = 16), Consequences (focusing on deepfakes' ramifications, n = 9), and Attributes (focusing on attributes of deepfakes, n = 13). [Table body: references [3], [4], [6], [8], [12], [17], [21], [25], [26], [32], [33], [34], [35], [36], [41], [44], [46], [51], [53], [60], [62], [63], [64], [67], [70], [71], [75], each assigned to one of the four categories; the per-reference column assignment is not recoverable from the extracted text.]

3.2 Participants

A total of 46 participants carried out the user study, of whom 16 were female (34.8%) and 30 were male (65.2%). The average age of the participants was 37.1 years (SD = 10.4).
Participants represented multiple nationalities, including Qatari, British, Pakistani, Filipino, Tanzanian, and Nepalese. Each study administrator kept notes about noteworthy observations of user behavior concerning deepfakes. In addition, the think-aloud of each session was recorded and transcribed, yielding 92 transcriptions of the participants explaining how they perceived the videos they were exposed to. Our analysis focuses on these think-aloud transcripts, and we leave the other datasets for future work. However, to give the reader a proper understanding of the study procedure, we describe the entire data collection procedure in the following section.

3.3 Data Collection

Recruiting participants for the user study took place via email. In the recruitment email, the invitees were told that we were conducting a user study about the impact of video quality on marketing tasks, so as not to reveal the real purpose of the study. The study was carried out on university premises. Two identical workstations were used, each consisting of two laptops, an eye-tracking headset, a mouse, a Sony voice recorder, and a separate 24" display. This study focuses on analyzing the think-aloud records, leaving the analysis of the eye-tracking data for future work.

Table 2: Still shots of (a) deepfake Fiona (Df), (b) human Fiona (Hf), (c) deepfake James (Dj), and (d) human James (Hj) from the videos used in the user study. All videos are available in the supplementary material.

The videos containing deepfakes and real humans were uploaded to YouTube and run with METRIC, which is a real-time user study and analytics system [40] equipped with in-built eye-tracking capabilities. The videos, including the deepfakes and the real humans, are available as supplementary material (https://drive.google.com/drive/folders/1Ys4VJOf74kc1By7Rm343zgCaNI9oumdc?usp=sharing).
The user study sessions were conducted by three researchers with previous experience in conducting user studies. To ensure consistency for participants, a detailed script was prepared for the study administrators to be used in each user study session. The script included detailed instructions on what was to be said to the user study participants and what was to be done by the administrator at each stage of the user study. Instructions were read to the participants based on their condition groups. Study participants were invited to the study according to a premade schedule (participants could choose their preferred time as they registered), after which they were seated at the workstation. Then, the participant was provided with a consent form, and after the participant had read and signed it, the overall study procedure was read to them based on their condition group. Each participant was either guided to pay attention to glitches in the video (guided group) or not (non-guided group).

Then, the first video shown to a participant was either a male deepfake (James) video, a video in which a real human male (James) performed, a female deepfake (Fiona) video, or a video in which a real human female (Fiona) performed. Assigning a condition group to a participant was decided based on a spreadsheet where all eight conditions were repeated in the same order for new participants.

Before watching the first video, the eye-tracking device was calibrated, after which the participant watched the first video. After viewing the first video and finishing the first task, the participant was guided to answer the survey where they could complete the design task and answer questions about the video. The task and survey sessions were recorded and later transcribed.
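The rotating condition assignment described above can be sketched in code. This is a minimal reconstruction, not a script from the study: the factor labels and the two-guidance-by-four-first-video structure are our assumptions about how the spreadsheet's eight repeating conditions were composed.

```python
from itertools import cycle, product

# Hypothetical reconstruction of the rotation schedule: two guidance
# conditions crossed with four first-video options yield eight conditions,
# repeated in a fixed order for successive participants.
CONDITIONS = list(product(
    ["guided", "non-guided"],
    ["deepfake-James", "human-James", "deepfake-Fiona", "human-Fiona"],
))

def assign(n_participants):
    """Assign each new participant the next condition in the fixed cycle."""
    rotation = cycle(CONDITIONS)
    return [next(rotation) for _ in range(n_participants)]

schedule = assign(46)
# With 46 participants and 8 conditions, each condition occurs 5 or 6 times.
```

A fixed rotation like this guarantees near-equal cell sizes without randomization, which matches the paper's description of repeating all eight conditions in the same order.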
The participants were encouraged to elaborate on their thought process by speaking out loud during the task completion and survey as much as possible; this approach follows the 'think-aloud survey' method that aims at increasing HCI understanding by asking people to voice their thoughts while answering usability/UX questionnaires [48]. After finishing the survey, the participant was read the same instructions as the first time according to the participant's condition group. Then, the participant watched the second video, completed the task for the second video, and answered the survey. After re-completing the survey, the participant was thanked for their participation and asked how familiar they had been with deepfakes before the study session, on a scale of 1 to 5, one being not familiar at all and five being extremely familiar. The answer was noted, and the participant was given a gift card as thanks.

Table 3: Themes and subthemes arising from the transcribed design task audio recordings.
Realism (the lifelikeness of the character in the video):
• Gender-related properties of the character in the video
• The cultural background of the character in the video
• Robotic, unnatural, or human-like features and the expressiveness of the character in the video

User Needs (understanding the needs and attitudes of the character in the video):
• Presentation of needs by the character in the video or the participant understanding the needs
• Expressing needs of the character on the video is muffled by the lack of connection between the character in the video and the participant

Distracting Properties (properties of the character in the video that distract the user):
• The character distractions were so severe the participant was not concentrating on what the character was saying
• The character in the video delivers speech emotionlessly and monotonically
• The character in the video had weird eyes

Added Value (value offered by deepfakes in the design process):
• Making environmental thinking and actions rewarding for the app or game user
• Entitling the app or game user to a discount

Rapport (ties, trust, or distrust towards the character in the video):
• The general distrust of information presented in a video form
• Human appearance makes the character believable and trustworthy
• Perceived convincingness and friendliness increase trust

3.4 Data Analysis

Overall, we obtained 92 voice recordings, each containing a think-aloud session of a participant explaining their thinking during the use of the personas (either human or deepfake) in the design task. The transcribing was done using an AI-based tool (i.e., we uploaded the audio files and received the transcripts as text outputs); based on a manual review of the obtained text outputs, the AI tool performed exceedingly well on this task.
On any unclear instances, we revisited the original audio files to better understand the participant's meaning; however, the text material was overwhelmingly adequate for analyzing the think-aloud records.

In addition to the think-aloud transcripts, we used observation notes made during the user study sessions by each administrator. We compiled these notes into an analysis document and organized them by themes. After this, we supplemented the notes with observations from the think-aloud transcripts. This was done by systematically reviewing the content of each participant-session transcript twice: (a) first by reading the text and highlighting noteworthy passages in a different color (i.e., color coding), and (b) then by re-reading the material to ensure we did not miss any central points. After this, (c) the highlighted passages were added to the analysis document, where (d) they were analyzed together with the notes in an inductive fashion, i.e., to identify conceptually meaningful subthemes that we considered relevant to our research question of deepfake user perception. Themes and subthemes were formed by one researcher and validated by another. The validation took place by reviewing each theme's name, definition, and relevance to the study purpose. Minor adjustments to the themes were made through iterative dialogue between the two researchers in charge of this stage to reach a taxonomy consensus [24]. The subthemes were formed based on thematic codes corresponding to each subtheme, and the codes were applied to the transcription data where applicable. The coding after initial theme formation (first reading) served as an iterative way to clarify the themes and to form the subthemes after the second reading.

4 RESULTS

4.1 Discovered Themes

Table 3 illustrates the themes that emerged from the transcript analysis. Each theme is defined in the table's header section, and each subtheme is described in the following subsections.
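The aggregation step of the coding workflow in Section 3.4 — attaching subtheme codes to transcript passages and then seeing how widely each subtheme is shared — can be illustrated with a minimal sketch. The data, identifiers, and function here are hypothetical; no such script was part of the study, and the subtheme labels merely echo Table 3.

```python
# Hypothetical coded segments: (participant_id, subtheme) pairs, such as a
# qualitative-coding tool might export. The example data are invented.
coded_segments = [
    ("P01", "Robotic, unnatural, or human-like features"),
    ("P01", "The character in the video had weird eyes"),
    ("P01", "Robotic, unnatural, or human-like features"),
    ("P02", "Robotic, unnatural, or human-like features"),
    ("P03", "Lack of connection between character and participant"),
]

def participants_per_subtheme(segments):
    """Count the distinct participants who mentioned each subtheme."""
    seen = {}
    for pid, subtheme in segments:
        seen.setdefault(subtheme, set()).add(pid)
    return {subtheme: len(pids) for subtheme, pids in seen.items()}

counts = participants_per_subtheme(coded_segments)
# "Robotic, unnatural, or human-like features" -> 2 distinct participants
```

Counting distinct participants rather than raw mentions keeps one talkative participant from inflating a subtheme's apparent prevalence.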
4.2 REALISM: Realism of the character in the video

4.2.1 Gender-related properties. Gender-related properties of the character in the video include any properties or features that seemed off to the participants considering the realism of the character in the video. One example could be that the character acted in a way that the participants did not see as traditional for a female. An interesting notion of the role of gender in the expressiveness of the characters came up in one answer. The gender-related facial and vocal expressiveness was put forth by a participant:

• "Did you notice anything strange in the person? Not really. I mean he didn't have too many emotions. As we women do, some men don't express themselves too much." -Female(F)48, Project Coordinator

The role of gender in differences in expressiveness is an interesting idea with scientific background [1, 72]. Girls are encouraged to express emotion, while boys are usually discouraged from expressing emotions other than anger and pride. In that sense, the participant's notion of the near-strangeness of the character is what could be expected. It is, however, surprising that no other participant paid attention to this feature.

4.2.2 Cultural background of the character in the video. The cultural background of the character in the video includes any properties or features that the participants observed as unfitting to the character's cultural background. One example would be that the character acted in a way that seemed too expressive to the participant considering the character's country of origin. From a cultural standpoint, there could be cultural variation in the expressiveness of the character. The differences in expressiveness between cultures were noted by one participant:

• "His eye movement. I don't know if it's his personality that contributes to his eye movement or. . . You know?
He doesn't have much facial expression. Yeah, hmm. Is this cultural? Or just personal? You know, I don't know. Because if you look at the populations up in the north. They're right here. If you are looking at the population in the South. . . They're very high [in expressiveness], but yeah, so that's the obvious difference I can see, yeah." -F53, Business Development Manager

With this comment, the participant is referring to the different ways of expressing feelings and emotions in different cultures, giving special weight to the North-South division of cultures. Such cultural differences have been found in prior research [22, 45].

4.2.3 Robotic, unnatural, or human-like features and the expressiveness of the character in the video. This subtheme includes any property or feature that was seen as robotic by the participants, such as a robotic voice or robotic gestures. Robotic features in the deepfakes and human-like features in the human-played characters were the features most noted by the participants. Notions of "realness" were made directly using the word robotic, but also with related terms like monotonic voice and other terms describing unnatural behavior, such as mechanical or unnatural. On the human-like side of the spectrum, terms like that's a human thing and notions of a natural way of speech were made by the participants while recognizing human characters. The deepfake videos were mostly recognized as deepfakes by the participants before or after seeing the human video:

• "This person specifically was much more obvious to tell that it was not a real person. I feel like the way she was speaking was robotic." -Male(M)34, Researcher

• "Wait, her voice was very monotonic almost sounding like a robotic-like voice. Yeah, yeah, it sounds like that. And she was just very stable staring at the screen. She wasn't really moving that much, so. I don't, I don't. Yeah, I say I disagree.
This person displaying emotion? No, I don't think she displayed any emotions. I didn't notice anything." -M28, Research Assistant

The lack of emotion separated the deepfakes from the human videos and was noticed by several participants in several ways:

• "[. . .] she doesn't display any emotions." -F30, Post Doctoral Researcher
• "I must say that it was very emotionless." -M54, Professor
• "The eyes like. Like there are no emotions in the face, no emotions." -F34, Post Doctoral Researcher
• "I mean expression is just we show, emotion is something coming from inside and sometimes we just express, but does not feel, yes, that expression and we do not really feel." -M42, Research Consultant

Unnatural eye movement and not blinking, or blinking infrequently, were noted as unnatural features by several participants. Abnormal blinking patterns have also been used in automatic deepfake detection [37]. Hair movement also caught the eye of some participants with its unnatural patterns:

• "The person displays emotion, no. Just because. . . The major thing is that she has no eye movement at all, and it looks rather unreal. The hair looks unrealistic. Especially the hair." -M38, Assistant
• "Yeah, so for me the hair was the most thing and since I was focusing on the face, the eyes were a little bit like that." -M42, Software Engineer
• "I didn't notice her blinking her eyes even once, I think, but I do think it was very unnatural." -F34, Research Associate

In the videos with the human actors, participants noted features such as swallowing naturally, speaking naturally, and essentially a gut feeling that the person was a real human being. Swallowing and speaking patterns were mentioned, for example:

• "Though they swallowed too much, but I mean, that's just a human thing, so it's not really a thing." -F33, Research Associate
• "And the person on the video was speaking naturally, yes, and the person seemed like a real person." -F30, Post Doctoral Researcher
• "Real person, yeah, I guess. I guess she was a real person." -M24, Research Assistant
• "The person seems like a real person. She is a real person, I believe not a fake one." -M42, Research Consultant

In the videos with the human actors, one participant also mentioned that although the character was a real human, the expression of feelings might not be the same as feeling that way in real life, as in acting something out:

• "So, in terms of like, I would say the person was showing. . . She was expressing herself yes, but I can't say if she was actually feeling, yes." -M42, Research Consultant

Contradicting comments were also heard, where a participant could not decide whether the character in the video was a real person or not, even though she recognized the character as a real human. She also mentioned that in the age of AI, it is hard to tell things apart and believe what is real:

• "The person displays emotion. No, neither agree nor disagree. Not much emotion. The person seems like a real person. With the AI I cannot judge, so neither agree nor disagree. The question was asking me if the person seems like a real person. But with artificial intelligence, I can't judge. That's why I neither agree nor disagree." -F53, Business Development Manager

4.3 USER NEEDS: Understanding the needs and attitudes of the character in the video

4.3.1 Presentation of needs by the character in the video or the participant understanding the needs. This subtheme includes any uncertainties that the participants had towards the needs of the character in the video.
For example, the character said something in the video, but it was unclear to the participant what the character meant. Overall, the needs of the character were presented in each video but not considered equally well by all participants. There were also interpretations and extrapolations of the needs presented in the videos: participants took the ideas presented by the character and extended them into something new that was not explicitly mentioned, which gives the impression that the information given by the character inspired deeper thinking in the participant.

• "The person in the video provided enough information for me to understand his or her needs. I would say somewhat agree here again because she mentioned she was unhappy with a lot of like the packaging for instance. But like what is the alternative she envisions, right? Because maybe no packaging would be something she's unhappy with." -M23, Software Engineer
• "I think we need to find a game that kind of caters to those needs of saving money while also helping the environment at the same time to show them that you can save money and help the environment without, and they're not mutually exclusive basically." -M22, Research Assistant
• "Logistics and supply chain, which also emits a lot of carbon. This kind of information is not there, but at least she's saying that she is very focused on sustainability" -M38, Assistant Professor
• "Like I think people similar care about the amount of packaging and things like that." -M23, Software Engineer

4.3.2 Expressing needs of the character on the video muffled by the lack of connection between the character in the video and the participant.
Expressing the needs of the character in the video muffled by the lack of connection between the character in the video and the participant includes any misunderstanding or uncertainty the participant perceived regarding the needs of the character in the video, owing to a lack of connection between the character and the participant. For example, understanding the needs of the character was hard due to a lack of connection with him/her. Participants mentioned the lack of connection with the character in the video as a feature that prevented them from understanding the character’s needs. A lack of emotional linkage, or not feeling what the character felt, inhibited participants from understanding the character’s needs.

• “Again, I didn’t feel like I connected with this person. I didn’t believe that this was an actual person and I found it hard to kind of listen to her.” -F34, Research Associate
• “I couldn’t make a connection to the person, so that’s why I’m saying somewhat agree [to understand this person] because the idea was good.” -M42, Software Engineer
• “Even her motions were very repetitive. Much more obviously so than the person before her, so I felt like it was hard to connect with the person and too hard to understand her needs.” -F34, Research Associate

Some participants mentioned that the character’s lack of emotions or facial expressions, or the impression that the character “didn’t pay attention,” made it hard to relate to the character and understand the needs:

• “I would have been cool if I could infer something from her like patterns like or she walks the dogs if she does something, but I wasn’t able to make those connections unfortunately.” -M23, Software Engineer
• “The person in the video provided enough information to understand her needs. I don’t think so. I think she was mostly just saying that she wants to be environmentally friendly and that there’s a lot of packaging. . ..
I guess I somehow missed some information that she was saying.” -M24, Research Assistant

A participant mentioned the lack of facial expressions as a reason for not understanding the needs:

• “I feel like I understood the person. I don’t think so. Because like for me, having some kind of emotions on the face of the person help me connect with the person and to understand.” -M42, Software Engineer

4.4 DISTRACTING PROPERTIES: Distracting properties of the character in the video

4.4.1 The character distractions were so severe the participant was not concentrating on what the character was saying. The character distractions were so severe that the participant was not concentrating on what the character was saying, including distracting features in the video observed by the participants. For example, the character in the video was moving his/her mouth in such a weird and unsynchronized way that the participant paid more attention to that than to listening to the character. The characters in the videos had visual and auditory properties that the participants found distracting. Some participants noticed visual distortions in the hair and mouth, as well as general twitching movements. The deepfakes’ speech was also sometimes out of sync with the lip movement, which was likewise found distracting.

• “They lack emotion, they are monotone, some of them have glitches like you saw the twitching.” -M68, Director HSSE.
• “Sometimes the videos were like I said about the other video but the mouth of the person speaking and his lip movement. They were not syncing to each other.” -F48, Project Coordinator
• “This guy is like he. . . Just his face is just frozen and locked in space.” -M68, Director HSSE.
• “In this case, the glitches affected my task.” -M42, Software Engineer
• “I would say that was she a dead, like a deadpan, you know, just talking head or was she, you know, emotionally, delivering what she was saying.” -M23, Research Assistant.
• “The hair is completely static.” -M49, Engineer.

4.4.2 The character in the video delivers speech emotionlessly and monotonically. The character in the video delivering speech emotionlessly and monotonically means that the participant found it distracting that the character in the video spoke with little or no emotion and in a monotonous voice. Many participants found the deepfakes’ way of speech distracting. The monotonic and unnatural way of speaking, as well as the emotionless delivery of speech, were the most commonly noticed features. The way the deepfakes acted while speaking, such as staring straight at the camera and not moving at all, was also found distracting.

• “The video I think he was. . . He was saying a lot of things and I didn’t see anything strange about the way he spoke, but it was very kind of monotonic. So, he did, he wasn’t very kind of interactive or reactive in a kind of a strong way. So, I would say he didn’t really display much emotion, so maybe disagree.” -M22, Research Assistant
• “The glitches, like it very much looked like a human, just a strange human, was quite emotionless. And the glitches affecting my task completion of designing. Actually, I agree because I noticed at some points while I was watching the person speak it felt like they were creepy and they don’t really focus on the task that you’re trying to think about you’re focusing on.” -M23, Research Assistant
• “They were just staring into the camera while speaking and not moving too much. Single spot. And speaking to cameras and not another human.” -M23, Research Assistant.
• “Display emotion, in fact, I felt it was a computer-generated personality because of his lip movements and stuff.” -F28, Research Associate

4.4.3 The character in the video had weird eyes. The character in the video had weird eyes means that there was something unnatural about the eyes of the character in the video that the participants noted and found distracting. For example, the character in the video did not blink or blinked less often than expected. Many participants mentioned the eyes of the deepfakes as distracting features.

• “You know when you have the eye, she blinks, but her eyeball doesn’t move. Has no focus.” -F53, Business Development Manager.
• “The person in the video seems strange. Yeah, seemed very strange because the voice didn’t match the face and the top half of the face wasn’t moving at all. Like the eyes did not move at all.” -M22, Research Assistant
• “Seemed strange eyes. In what I saw was blinking his eyes yeah suddenly eyes, yes.” -M42, Software Engineer

4.5 ADDED VALUE: Added value of the app or the game

4.5.1 Making environmental thinking and actions rewarding for the app or game user. Making environmental thinking and actions rewarding for the app or game user means that the participant could find something that would motivate the character in the video to think about the environment. For example, an application that could help the participant save money by collecting trash. The added value some participants recognized concerned the multipurpose use of the application or game, or the ways the app or game could reward the user for being more environmentally conscious. Training users to be environmentally friendly and using monetary incentives to educate them on environmental thinking came up as possibilities.

• “How, how, how to have a business model out of this and I’m just thinking that there could be some training programs and multipurpose items that could be purchased.” -M39, Doctoral Researcher.
• “With this idea also, I think that it will make them view sustainability as a sort of a rewarding thing, because then they’ll be able to get money in exchange and maybe over time it starts to become a habit rather than just purely for money, right?” -M22, Research Assistant.
• “It could be a mobile app or a game that basically shows him different locations where they can go, for instance, and you know, recycle differently, let’s say plastic. . . Plastic bottles or plastic material and get money in return.” -M22, Research Assistant.

4.5.2 Entitling the app or game user to a discount. Entitling the app or game user to a discount means that the participant could develop an idea for an app or a game that offers a monetary reward if the character acts in an environmentally oriented way. For example, a reward system for the character if he/she uses his/her own packing material at the store. To promote environmental thinking in the app or game users, participants devised ways to use discounts and loyalty programs as motivators.

• “For example, if you bring your own bags, they can advertise that on specific days. If you’re bringing your own reusable bag, you might get a certain discount or some money off of certain products. I think that would be a good way to handle it.” -F34, Research Associate.
• “And my loyalty program. Power discounts. Yeah, for the client.” -M30, Software Engineer
• “So, I would say for people like him, we could design an application where we have a variety of products, weight, prices and especially highlighting the discount. So that kind of application will give him confidence that, OK, I’m getting a discount. . . Getting 50% right off, so this kind of yeah incentive, yes.” -M42, Research Consultant.
4.6 RAPPORT: Ties, trust, or distrust toward the character in the video

4.6.1 General distrust of information presented in a video form. General distrust of information presented in a video form means that the participant expressed a general distrust of any information shown to him/her in video form. One participant expressed strong reluctance towards information distributed in a video format. Her attitude towards videos had deteriorated because of the volume of her daily social media usage and, apparently, the quality of videos on social media.

• “I trust the information given by the person. Neither agree nor disagree. It’s just a video. I don’t believe what video you show me. It is my nature. Too much social media these days.” -F53, Business Development Manager.

4.6.2 Human appearance makes the character believable and trustworthy. Human appearance makes the character believable and trustworthy means that the participants expressed that the way the character looks influences the character’s trustworthiness. For example, a clearly human-like character is more trustworthy than a character that looks like a computer-generated model. If the character in the video was seen as human-like, it was easier for the participants to trust what the character was saying.

• “I mean no, no. No, no, no ties. No, no like I didn’t feel any connection to the video.” -M47, Scientist
• “I would say I trust the information that was given by him purely because it wasn’t anything kind of out of the ordinary.” -M22, Research Assistant.
• “But if trust refers to if they’re realistic, then kind of yeah. I feel like I understood the person.” -F33, Research Associate.
• “I trust the information given by the person. Well, because the voice doesn’t really match the person. And they seemed very robotic. I would have to disagree with this. Because it seemed like it could be maybe like something made-up of her speaking, but it’s not her voice so yeah.” -M22, Research Assistant.
• “He expressed a lot of emotion relating to certain aspects and the extra details kind of humanized him and made him seem like a real person with real concerns and with also interest other than the environment.” -F34, Research Associate.
• “I trust the guy, he looked like a real person.” -M28, Software Engineer.

4.6.3 Perceived convincingness and friendliness increase trust. Perceived convincingness and friendliness increase trust means that the participants implied that it is easier to believe a character that speaks in a friendly yet convincing way. Trust seemed to build on the way the character was speaking. Friendly and convincing manners of speech increased the participants’ trust in the design information provided.

• “I trust the information given by that person because I do think the information was communicated well.” -F34, Research Associate.
• “I’m very empathetic and like expressive and self but no, and he didn’t show emotion. Just the information given by the person. Actually, I disagree, I didn’t feel like it was trustworthy.” -F28, Research Associate.
• “I trust the information given by the person. Yes strongly, I agree that she was very convincing and very friendly talking to me.” -M49, Principal Scientist.

5 DISCUSSION AND CONCLUSION

The progress of AI technologies creates a demand for new and improved interaction. Deepfakes offer opportunities for enhancing UX in many user-facing information systems. However, realizing these opportunities requires that deepfakes are well received by end-users, as any resistance can undermine the theoretical or potential benefits. Our study introduces the concept of deepfake user perception and explores its relationship with UX, bringing deepfakes to the core of UX. We identify several impactful themes in deepfake user perception.
Our results show many features affecting deepfake user perception, including the human-likeness of deepfakes and the distractions in them. Some of these features, such as unnatural eye movement [37], have been found to limit the adoption of deepfakes in previous research, while others, such as gender-related behavior patterns [1, 72], have not been discussed in the deepfake literature; our research extends these findings by providing actual user explanations of how they experience deepfakes. According to our findings, the user perception of deepfakes depends on their perceived realness and human-likeness, which in turn depend on the characters’ manners, ways of speech, perceived trustworthiness, emotional expressions, and vocal properties, as well as on the lack of perceived connection between the character and the participant. Our study adds to prior research in which similar themes have been studied: human likeness [30], trust [20, 27], glitches and distortions [29], and empathy (i.e., the connection between the participant and the deepfake) [55]. Similarly, previous research has found that users most often recognize deepfakes [21]; in our study, they were likewise recognized based on distortions and unnaturalness. These themes have been found to lower deepfake user perception in the past; our results support and expand previous research [4, 9, 12, 21, 43, 46, 52, 60] by offering qualitative insights. Regarding design implications, the UX of information systems can be improved by ensuring deepfakes are integrated ethically and transparently. Users need to know when and how deepfakes are employed and have control over their personal information [16]. Ensuring that deepfake technology is utilized responsibly and safely can help keep it from becoming a tool for nefarious actors.
This includes putting safeguards in place to prevent deepfakes from being used for fraud or other criminal acts and building powerful detection and verification tools to detect and remove harmful deepfakes [39]. Therefore, when creating deepfake-based applications, HCI designers and developers must consider user perception. They must ensure that their deepfakes are authentic and do not deceive people, and that the deepfakes improve the overall UX. By doing this, deepfake applications can positively influence UX, increasing user engagement and trust [50]. In our research, we utilized think-aloud data to study deepfake persona perceptions. However, to dig deeper into these perceptions, the eye-tracking data we collected could be used to study users’ eye movements. This could give interesting insight into user behavior and focus while observing deepfake personas, which has not been done in prior research. In the future, if industry and scholars are willing to develop deepfake technology in a user-friendly, more human-centered direction, more emphasis should be put on the issues raised in our research. The human-likeness and realism of deepfakes are the first links in the chain that builds the trust and usability of deepfakes among their users; as long as there are problems in those links, it is hard to see how deepfakes can become the technology they have been predicted to be. Deepfakes are used by humans, and their usability directly depends on their ability to mimic human beings. Users did quite well at discovering the abnormalities of deepfakes in our study, and when deepfakes fail to meet users’ expectations, their perception of deepfakes suffers.

REFERENCES

[1] Steven Luria Ablon, Daniel P. Brown, Edward J. Khantzian, and John E. Mack (Eds.). 2015. Human feelings: explorations in affect development and meaning (first issued in paperback ed.). Routledge, Taylor & Francis Group, New York and London.
[2] Simone Agostinelli, Federica Battaglini, Tiziana Catarci, Federica Dal Falco, and Andrea Marrella. 2019. Generating Personalized Narrative Experiences in Interactive Storytelling through Automated Planning. In CHItaly ’19: Proceedings of the 13th Biannual Conference of the Italian SIGCHI Chapter: Designing the Next Interaction, Padova, 23–25.
[3] Saifuddin Ahmed. 2021. Fooled by the fakes: Cognitive differences in perceived claim accuracy and sharing intention of non-political deepfakes. Personality and Individual Differences 182, (2021), 111074.
[4] Saifuddin Ahmed, Sheryl Wei Ting Ng, and Adeline Bee Wei Ting. 2023. Understanding the role of fear of missing out and deficient self-regulation in sharing of deepfakes on social media: Evidence from eight countries. Frontiers in Psychology 14, (2023), 609.
[5] J. An, H. Kwak, S. Jung, J. Salminen, M. Admad, and B. Jansen. 2018. Imaginary People Representing Real Numbers: Generating Personas from Online Social Media Data. ACM Trans. Web 12, 4 (November 2018), 1–26. DOI:https://doi.org/10.1145/3265986
[6] Soubhik Barari, Christopher Lucas, and Kevin Munger. 2021. Political deepfakes are as credible as other fake media and (sometimes) real media. (2021).
[7] Barbara Rita Barricelli and Daniela Fogli. 2021. Virtual assistants for personalizing IoT ecosystems: Challenges and opportunities. In CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 1–5.
[8] Sergi D. Bray, Shane D. Johnson, and Bennett Kleinberg. 2022. Testing Human Ability To Detect Deepfake Images of Human Faces. arXiv preprint arXiv:2212.05056 (2022).
[9] Sergi D. Bray, Shane D. Johnson, and Bennett Kleinberg. 2022. Testing Human Ability To Detect Deepfake Images of Human Faces. arXiv preprint arXiv:2212.05056 (2022).
[10] Fabio Catania, Pietro Crovari, Eleonora Beccaluva, Giorgio De Luca, Erica Colombo, Nicola Bombaci, and Franca Garzotto. 2021.
Boris: a Spoken Conversational Agent for Music Production for People with Motor Disabilities. In CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 1–5.
[11] Bobby Chesney and Danielle Citron. 2019. Deep fakes: A looming challenge for privacy, democracy, and national security. Calif. L. Rev. 107, (2019), 1753.
[12] Keaunna Cleveland. 2022. Creepy or Cool? An Exploration of Non-Malicious Deepfakes through Analysis of Two Case Studies. M.S. University of Maryland, College Park, United States – Maryland. Retrieved August 23, 2022 from https://www.proquest.com/docview/2681852015/abstract/D819BB5EC1F54D76PQ/1
[13] Alan Cooper. 1999. The Inmates are Running the Asylum. In Software-Ergonomie ’99, Udo Arend, Edmund Eberleh and Knut Pitschke (eds.). Vieweg+Teubner Verlag, Wiesbaden, 17–17. DOI:https://doi.org/10.1007/978-3-322-99786-9_1
[14] Emily Cruse. 2006. Using educational video in the classroom: Theory, research and practice. Library Video Company 12, 4 (2006), 56–80.
[15] Valdemar Danry, Joanne Leong, Pat Pataranutaporn, Pulkit Tandon, Yimeng Liu, Roy Shilkrot, Parinya Punpongsanon, Tsachy Weissman, Pattie Maes, and Misha Sra. 2022. AI-Generated Characters: Putting Deepfakes to Good Use. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, ACM, New Orleans LA USA, 1–5. DOI:https://doi.org/10.1145/3491101.3503736
[16] Nicholas Diakopoulos and Deborah Johnson. 2021. Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society 23, 7 (2021), 2072–2098.
[17] Tom Dobber, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese. 2021. Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics 26, 1 (2021), 69–91.
[18] David Harrison Ii Harrison Ferrell, Giorgio Grando, and Massimo Zancanaro. 2021.
The AI Style Experience: design and formative evaluation of a novel phygital technology for the retail environment. In CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 1–4.
[19] Dilrukshi Gamage, Piyush Ghasiya, Vamshi Bonagiri, Mark E. Whiting, and Kazutoshi Sasahara. 2022. Are Deepfakes Concerning? Analyzing Conversations of Deepfakes on Reddit and Exploring Societal Implications. In CHI Conference on Human Factors in Computing Systems, 1–19.
[20] Ella Glikson and Anita Williams Woolley. 2020. Human Trust in Artificial Intelligence: Review of Empirical Research. ANNALS 14, 2 (July 2020), 627–660. DOI:https://doi.org/10.5465/annals.2018.0057
[21] Matthew Groh, Ziv Epstein, Chaz Firestone, and Rosalind Picard. 2022. Deepfake detection by human crowds, machines, and machine-informed crowds. Proc. Natl. Acad. Sci. U.S.A. 119, 1 (January 2022), e2110013119. DOI:https://doi.org/10.1073/pnas.2110013119
[22] Jeffrey T. Hancock and Jeremy N. Bailenson. 2021. The Social Impact of Deepfakes. Cyberpsychology, Behavior, and Social Networking 24, 3 (March 2021), 149–152. DOI:https://doi.org/10.1089/cyber.2021.29208.jth
[23] Shlomo Hareli, Konstantinos Kafetsios, and Ursula Hess. 2015. A cross-cultural study on emotion expression and the learning of social norms. Front. Psychol.
6, (October 2015). DOI:https://doi.org/10.3389/fpsyg.2015.01501
[24] Lorenz Harst, Lena Otto, Patrick Timpel, Peggy Richter, Hendrikje Lantzsch, Bastian Wollschlaeger, Katja Winkler, and Hannes Schlieter. 2022. An empirically sound telemedicine taxonomy–applying the CAFE methodology. Journal of Public Health 30, 11 (2022), 2729–2740.
[25] Sean Hughes, Ohad Fried, Melissa Ferguson, Ciaran Hughes, Rian Hughes, Xinwei Yao, and Ian Hussey. 2023. Deepfaked Online Content is Highly Effective in Manipulating Attitudes & Intentions.
[26] Yoori Hwang, Ji Youn Ryu, and Se-Hoon Jeong. 2021. Effects of disinformation using deepfake: The protective effect of media literacy education. Cyberpsychology, Behavior, and Social Networking 24, 3 (2021), 188–193.
[27] Alon Jacovi, Ana Marasović, Tim Miller, and Yoav Goldberg. 2021. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, Virtual Event Canada, 624–635. DOI:https://doi.org/10.1145/3442188.3445923
[28] Bernard J. Jansen, Soon-Gyo Jung, Lene Nielsen, Kathleen W. Guan, and Joni Salminen. 2022. How to Create Personas: Three Persona Creation Methodologies with Implications for Practical Employment. Pacific Asia Journal of the Association for Information Systems 14, 3 (2022). DOI:https://doi.org/10.17705/1pais.14301
[29] Jihyeon Kang, Sang-Keun Ji, Sangyeong Lee, Daehee Jang, and Jong-Uk Hou. 2022. Detection Enhancement for Various Deepfake Types Based on Residual Noise and Manipulation Traces. IEEE Access 10, (2022), 69031–69040. DOI:https://doi.org/10.1109/ACCESS.2022.3185121
[30] Jan Kietzmann, Adam J. Mills, and Kirk Plangger. 2021. Deepfakes: perspectives on the future “reality” of advertising and branding. International Journal of Advertising 40, 3 (April 2021), 473–485. DOI:https://doi.org/10.1080/02650487.2020.1834211
[31] Felix Kleine. Perception of Deepfake Technology.
[32] Nils C.
Köbis, Barbora Doležalová, and Ivan Soraperra. 2021. Fooled twice: People cannot detect deepfakes but think they can. Iscience 24, 11 (2021), 103364.
[33] Pavel Korshunov and Sébastien Marcel. 2020. Deepfake detection: humans vs. machines. arXiv preprint arXiv:2009.03155 (2020).
[34] Matthew B. Kugler and Carly Pace. 2021. Deepfake privacy: Attitudes and regulation. Nw. UL Rev. 116, (2021), 611.
[35] YoungAh Lee, Kuo-Ting (Tim) Huang, Robin Blom, Rebecca Schriner, and Carl A. Ciccarelli. 2021. To Believe or Not to Believe: Framing Analysis of Content and Audience Response of Top 10 Deepfake Videos on YouTube. Cyberpsychology, Behavior, and Social Networking 24, 3 (March 2021), 153–158. DOI:https://doi.org/10.1089/cyber.2020.0176
[36] Andrew Lewis, Patrick Vu, and Areeq Chowdhury. 2022. Do content warnings help people spot a deepfake? Evidence from two experiments. (2022).
[37] Yuezun Li, Ming-Ching Chang, and Siwei Lyu. 2018. In ictu oculi: Exposing AI created fake videos by detecting eye blinking. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS), IEEE, 1–7.
[38] Siwei Lyu. 2020. Deepfake detection: Current challenges and next steps. In 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), IEEE, 1–6.
[39] Edvinas Meskys, Julija Kalpokiene, Paulius Jurcys, and Aidas Liaudanskas. 2020. Regulating deep fakes: legal and ethical considerations. Journal of Intellectual Property Law & Practice 15, 1 (2020), 24–31.
[40] Metric. 2023. About METRIC. Retrieved March 15, 2023 from https://metric.qcri.org/about
[41] Jaron Mink, Licheng Luo, Natã M. Barbosa, Olivia Figueira, Yang Wang, and Gang Wang. 2022. DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks. 1669–1686. Retrieved March 24, 2023 from https://www.usenix.org/conference/usenixsecurity22/presentation/mink
[42] Masahiro Mori, Karl MacDorman, and Norri Kageki. 2012. The Uncanny Valley [From the Field].
IEEE Robot. Automat. Mag. 19, 2 (June 2012), 98–100. DOI:https://doi.org/10.1109/MRA.2012.2192811
[43] Nicolas M. Müller, Karla Pizzi, and Jennifer Williams. 2021. Human Perception of Audio Deepfakes. (2021). DOI:https://doi.org/10.48550/ARXIV.2107.09667
[44] Nicolas M. Müller, Karla Pizzi, and Jennifer Williams. 2022. Human perception of audio deepfakes. In Proceedings of the 1st International Workshop on Deepfake Detection for Audio Multimedia, 85–91.
[45] Mekhail Mustak, Joni Salminen, Matti Mäntymäki, Arafat Rahman, and Yogesh K. Dwivedi. 2023. Deepfakes: Deceptions, mitigations, and opportunities. Journal of Business Research 154, (January 2023), 113368. DOI:https://doi.org/10.1016/j.jbusres.2022.113368
[46] Yu-Leung Ng. 2022. An error management approach to perceived fakeness of deepfakes: The moderating role of perceived deepfake targeted politicians’ personality characteristics. Curr Psychol (August 2022). DOI:https://doi.org/10.1007/s12144-022-03621-x
[47] Paula M Niedenthal, Magdalena Rychlowska, and Adrienne Wood. 2017. Feelings and contexts: socioecological influences on the nonverbal expression of emotion. Current Opinion in Psychology 17, (October 2017), 170–175. DOI:https://doi.org/10.1016/j.copsyc.2017.07.025
[48] Lene Nielsen, Joni Salminen, Soon-Gyo Jung, and Bernard J. Jansen. 2021. Think-Aloud Surveys. In IFIP Conference on Human-Computer Interaction, Springer, Cham, 504–508.
[49] World Health Organization. 2020. Quit tobacco today. (2020).
[50] Chandra Kishor Pandey, Vinay Kumar Mishra, and Neeraj Kumar Tiwari. 2021. Deepfakes: when to use it. In 2021 10th International Conference on System Modeling & Advancement in Research Trends (SMART), IEEE, 80–84.
[51] Ethan Preu, Mark Jackson, and Nazim Choudhury. 2022. Perception vs. Reality: Understanding and Evaluating the Impact of Synthetic Image Deepfakes over College Students.
In 2022 IEEE 13th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), IEEE, 0547–0553.
[52] Ethan Preu, Mark Jackson, and Nazim Choudhury. 2022. Perception vs. Reality: Understanding and Evaluating the Impact of Synthetic Image Deepfakes over College Students. In 2022 IEEE 13th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), IEEE, New York, NY, USA, 0547–0553. DOI:https://doi.org/10.1109/UEMCON54665.2022.9965697
[53] Jiameng Pu, Neal Mangaokar, Lauren Kelly, Parantapa Bhattacharya, Kavya Sundaram, Mobin Javed, Bolun Wang, and Bimal Viswanath. 2021. Deepfake Videos in the Wild: Analysis and Detection. In Proceedings of the Web Conference 2021, ACM, Ljubljana Slovenia, 981–992. DOI:https://doi.org/10.1145/3442381.3449978
[54] Joni Salminen, Soon-Gyo Jung, Shammur Chowdhury, Sercan Sengün, and Bernard J. Jansen. 2020. Personas and Analytics: A Comparative User Study of Efficiency and Effectiveness for a User Identification Task. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM, Honolulu HI USA, 1–13. DOI:https://doi.org/10.1145/3313831.3376770
[55] Joni Salminen, Soon-Gyo Jung, João M. Santos, Ahmed Mohamed Sayed Kamel, and Bernard J. Jansen. 2021. Picturing It!: The Effect of Image Styles on User Perceptions of Personas. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, Yokohama Japan, 1–16. DOI:https://doi.org/10.1145/3411764.3445360
[56] Joni Salminen, Ilkka Kaate, Ahmed Mohamed Sayed Kamel, Soon-Gyo Jung, and Bernard J. Jansen. 2021. How Does Personification Impact Ad Performance and Empathy? An Experiment with Online Advertising. International Journal of Human–Computer Interaction 37, 2 (January 2021), 141–155. DOI:https://doi.org/10.1080/10447318.2020.1809246
[57] Joni Salminen, Lene Nielsen, Soon-Gyo Jung, Jisun An, Haewoon Kwak, and Bernard J. Jansen. 2018.
“Is More Better?”: Impact of Multiple Photos on Perception of Persona Profiles. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM, Montreal QC Canada, 1–13. DOI:https://doi.org/10.1145/3173574.3173891
[58] Albrecht Schmidt. 2021. The End of Serendipity: Will Artificial Intelligence Remove Chance and Choice in Everyday Life? In CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 1–4.
[59] Mike Seymour, Kai Riemer, Lingyao Yuan, and Alan Dennis. 2021. Beyond deep fakes: Conceptual framework, applications, and research agenda for neural rendering of realistic digital faces. (2021).
[60] Farhana Shahid, Srujana Kamath, Annie Sidotam, Vivian Jiang, Alexa Batino, and Aditya Vashistha. 2022. “It Matches My Worldview”: Examining Perceptions and Attitudes Around Fake Videos. In CHI Conference on Human Factors in Computing Systems, 1–15.
[61] Jessica Silbey and Woodrow Hartzog. 2018. The upside of deep fakes. Md. L. Rev. 78, (2018), 960.
[62] Stefan Sütterlin, Torvald F. Ask, Sophia Mägerle, Sandra Glöckler, Leandra Wolf, Julian Schray, Alaya Chandi, Teodora Bursac, Ali Khodabakhsh, and Benjamin J. Knox. 2021. Individual Deep Fake Recognition Skills Are Affected by Viewers’ Political Orientation, Agreement with Content and Device Used. (2021).
[63] John Ternovski, Joshua Kalla, and Peter Aronow. 2022. The negative consequences of informing voters about deepfakes: Evidence from two survey experiments. Journal of Online Trust and Safety 1, 2 (2022).
[64] Nyein Nyein Thaw, Thin July, Aye Nu Wai, Dion Hoe-Lian Goh, and Alton YK Chua. 2021. How Are Deepfake Videos Detected? An Initial User Study. In HCI International 2021 – Posters: 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part I 23, Springer, 631–636.
[65] Pier Paolo Tricomi, Federica Nenna, Luca Pajola, Mauro Conti, and Luciano Gamberi. 2023.
You can’t hide behind your headset: User profiling in augmented and virtual reality. IEEE Access 11, (2023), 9859–9875. [66] Binderiya Usukhbayar and Sean Homer. 2020. Deepfake Videos: The Future of Entertainment. (2020). [67] Cristian Vaccari and Andrew Chadwick. 2020. Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society 6, 1 (2020), 2056305120903408. [68] Stefano Valtolina and Liliana Hu. 2021. Charlie: A chatbot to improve the elderly quality of life and to make them more active to fight their sense of loneliness. In CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 1–5. [69] Liansheng Wang, Lianyu Zhou, Wenxian Yang, and Rongshan Yu. 2022. Deepfakes: a new threat to image fabrication in scientific publications? Patterns 3, 5 (2022), 100509. [70] Soyoung Wang. 2021. How will users respond to the adversarial noise that prevents the generation of deepfakes? (2021). [71] Christopher Welker, David France, Alice Henty, and Thalia Wheatley. 2020. Trading faces: Complete AI face doubles avoid the uncanny valley. DOI:https://doi.org/10.31234/osf.io/pykjr [72] Stephen R. Wester, David L. Vogel, Page K. Pressly, and Martin Heesacker. 2002. Sex Differences in Emotion: A Critical Review of the Literature and Implications for Counseling Psychology. The Counseling Psychologist 30, 4 (July 2002), 630–652. DOI:https://doi.org/10.1177/00100002030004008 [73] Mika Westerlund. 2019. The emergence of deepfake technology: A review. Technology Innovation Management Review 9, 11 (2019). [74] Lucas Whittaker, Kate Letheren, and Rory Mulcahy. 2021. The Rise of Deepfakes: A Conceptual Framework and Research Agenda for Marketing. Australasian Marketing Journal 29, 3 (August 2021), 204–214. DOI:https://doi.org/10.1177/1839334921999479 [75] Chloe Wittenberg, Ben M. Tappin, Adam J. Berinsky, and David G. Rand. 2021. The (minimal) persuasive advantage of political video over text. Proceedings of the National Academy of Sciences 118, 47 (2021), e2114388118.