Emilia Niemistö

Exploring Employee Expectations and Acceptance of a Generative AI HR Chatbot: A Case Study on Digital HR Services

Vaasa 2025
School of Technology and Innovations
Master's Thesis in Computing Sciences
Master's Programme in Information Systems

UNIVERSITY OF VAASA
School of Technology and Innovations
Author: Emilia Niemistö
Title of the thesis: Exploring Employee Expectations and Acceptance of a Generative AI HR Chatbot: A Case Study on Digital HR Services
Degree: Master of Science in Economics and Business Administration
Discipline: Information Systems Science
Supervisor: Timo Mantere
Year: 2025
Pages: 102

ABSTRACT:
Generative artificial intelligence has grown increasingly popular during the past few years, and its use in organizations has become more valuable. Generative artificial intelligence chatbots are a prominent example of the use of artificial intelligence that can interact with users in a way that resembles human interaction. In organizations, generative artificial intelligence chatbots can streamline many business processes, including human resources processes, and improve employees' user experience. Human resource chatbots can handle repetitive tasks, allowing human resource teams to focus on more strategic work, and provide employees with the autonomy to access information and support quickly and efficiently. This research is a case study exploring employee perceptions and expectations regarding digital human resource services, focusing on the integration of a generative artificial intelligence powered assistant. Through interviews, this research aims to identify which human resource tasks could be self-service for employees and which must remain as tickets. In addition, the aim is to understand in which situations employees seek human resource assistance and to determine the role and reliability of the artificial intelligence assistant and whether it is welcomed by employees.

The research is a qualitative case study. Data was gathered by interviewing human resource personnel and employees of the case company. The interviews focus on employees' expectations and perceptions of the digital human resource services and the artificial intelligence assistant, as well as their possibilities from a self-service viewpoint. The interviews were transcribed and analyzed using thematic analysis.

The main findings of the interviews with employees show that employees have varying expectations regarding the artificial intelligence assistant. Most interviewees expect fast support and easy access to human resource information. The interviewees have concerns about privacy and response reliability. Acceptance of the artificial intelligence assistant depends strongly on accurate responses, trust and peer encouragement. The main findings of the interviews with human resource personnel show that the artificial intelligence assistant is believed to both save employees' time and reduce human resource personnel's workload. The importance of training employees to use the artificial intelligence assistant effectively and of communicating its purpose was emphasized. However, while artificial intelligence was seen as helpful, the human touch was still recognized as an essential part of human resource support. Based on the results, it is recommended to make sure that employees know how to use the artificial intelligence assistant and to consider user concerns to enhance trust. It is also important to ensure that the artificial intelligence assistant works properly and provides high-quality answers to avoid frustration among users. Feedback should be used to continuously improve the assistant. Additionally, maintaining the option to discuss with human resource personnel is valuable.
KEYWORDS: Chatbot, generative AI, human resources, large language model, user experience

VAASAN YLIOPISTO
Tekniikan ja innovaatiojohtamisen akateeminen yksikkö
Tekijä: Emilia Niemistö
Tutkielman nimi: Työntekijöiden odotusten ja hyväksynnän tutkiminen generatiivista tekoälyä hyödyntävää HR-chatbottia kohtaan: Tapaustutkimus digitaalisista henkilöstöhallinnon palveluista
Tutkinto: Kauppatieteiden maisteri
Opintosuunta: Tietojärjestelmätiede
Työn ohjaaja: Timo Mantere
Valmistumisvuosi: 2025
Sivumäärä: 102

TIIVISTELMÄ:
Generatiivisen tekoälyn suosio on kasvanut viime vuosina nopeasti ja sen käyttö organisaatioissa on tullut yhä arvokkaammaksi. Generatiiviseen tekoälyyn perustuvat chatbotit ovat merkittävä esimerkki tekoälyn käytöstä, sillä ne voivat olla vuorovaikutuksessa käyttäjien kanssa tavalla, joka muistuttaa ihmisten välistä vuorovaikutusta. Organisaatioissa generatiiviseen tekoälyyn perustuvat chatbotit voivat tehostaa monia liiketoimintaprosesseja, mukaan lukien henkilöstöhallinnon prosessit, sekä parantaa työntekijöiden käyttäjäkokemusta. Henkilöstöhallinnon chatbotit voivat hoitaa toistuvia tehtäviä, jolloin henkilöstöhallinnon tiimit voivat keskittyä strategisempiin töihin, sekä tarjota työntekijöille itsenäisen pääsyn tietoihin ja tukeen nopeasti ja tehokkaasti. Tämä tutkimus on tapaustutkimus, joka tutkii työntekijöiden käsityksiä ja odotuksia digitaalisista henkilöstöhallinnon palveluista, keskittyen generatiiviseen tekoälyyn perustuvan avustajan käyttöönottoon. Haastattelujen avulla tavoitteena on tunnistaa, mitkä henkilöstöhallinnon tehtävät voisivat olla itsepalveluna työntekijöille ja mitkä tulisi edelleen käsitellä tikettijärjestelmän kautta. Lisäksi tavoitteena on ymmärtää, millaisissa tilanteissa työntekijät hakevat henkilöstöhallinnon tukea, sekä selvittää tekoälyavustajan rooli ja luotettavuus ja se, otetaanko se työntekijöiden keskuudessa hyvin vastaan.

Tutkimus on kvalitatiivinen tapaustutkimus. Aineisto kerättiin haastattelemalla henkilöstöhallinnon henkilöstöä ja tapausyrityksen työntekijöitä. Haastattelut keskittyvät työntekijöiden odotuksiin ja käsityksiin digitaalisista henkilöstöhallinnon palveluista ja tekoälyavustajasta sekä niiden mahdollisuuksiin itsepalvelun näkökulmasta. Haastattelut litteroitiin ja analysoitiin temaattisen analyysin avulla.

Haastattelujen tärkeimmät tulokset työntekijöiden kanssa osoittavat, että työntekijöillä on vaihtelevia odotuksia tekoälyavustajaa kohtaan. Suurin osa haastatelluista odottaa nopeaa tukea ja helppoa pääsyä henkilöstöhallinnon tietoihin. Haastateltavia huolettavat yksityisyys ja vastausten luotettavuus. Tekoälyavustajan hyväksyntä riippuu vahvasti vastausten tarkkuudesta, luottamuksesta ja vertaisten rohkaisusta. Henkilöstöhallinnon henkilöstön haastattelujen tärkeimmät tulokset osoittavat, että tekoälyavustajan uskotaan säästävän sekä työntekijöiden aikaa että vähentävän henkilöstöhallinnon työkuormaa. Koulutuksen tärkeyttä tekoälyavustajan tehokkaaseen käyttöön sekä sen tarkoituksen viestimistä korostettiin. Vaikka tekoäly nähtiin hyödyllisenä, inhimillinen kosketus tunnistettiin silti keskeiseksi osaksi henkilöstöhallinnon tukea.
Tulosten perusteella suositellaan varmistamaan, että työntekijät tietävät, miten tekoälyavustajaa käytetään, ja huomioimaan käyttäjien huolenaiheet luottamuksen lisäämiseksi. On myös tärkeää varmistaa, että avustaja toimii kunnolla ja antaa korkealaatuisia vastauksia turhautumisen välttämiseksi. Palautetta tulisi käyttää avustajan jatkuvaan kehittämiseen. Lisäksi mahdollisuus keskustella henkilöstöhallinnon henkilöstön kanssa on arvokasta säilyttää.

AVAINSANAT: Chatbot, generatiivinen tekoäly, henkilöstöhallinto, laaja kielimalli, käyttäjäkokemus

Contents
1 Introduction 8
  1.1 Purpose and methodology 9
  1.2 Scope 11
  1.3 Structure 11
2 Literature review 12
  2.1 Chatbots 12
    2.1.1 Generative AI chatbots 14
    2.1.2 Large language models 17
    2.1.3 Benefits and applications 21
    2.1.4 Challenges 25
  2.2 Technology acceptance 28
    2.2.1 Attitude towards using 31
    2.2.2 Perceived behavioral control 33
    2.2.3 Subjective norms 33
    2.2.4 Trust 34
    2.2.5 Satisfaction 35
  2.3 Digital employee experience 35
  2.4 User experience 37
3 Research methodology 40
  3.1 Research philosophy 40
  3.2 Case study 42
  3.3 Data collection and analysis 43
    3.3.1 Data collection from interviews 43
    3.3.2 Thematic analysis 46
4 Research findings 50
  4.1 Findings from PSH interviews 50
    4.1.1 Self-service for employees 51
    4.1.2 Restrictions on self-service 56
    4.1.3 Preventing self-service 58
    4.1.4 Benefits of the AI assistant 61
    4.1.5 Concerns and challenges of the AI assistant 62
    4.1.6 Requirements for the AI assistant 64
    4.1.7 Acceptance of the AI assistant 65
  4.2 Findings from employee interviews 67
    4.2.1 Expectations and benefits of the AI assistant 68
    4.2.2 Concerns and challenges of the AI assistant 71
    4.2.3 Requirements for the AI assistant 76
    4.2.4 Training 80
    4.2.5 Acceptance of the AI assistant 81
5 Discussion 83
  5.1 Conceptual model of the AI assistant 86
  5.2 Recommendations 88
  5.3 Reliability and validity 91
  5.4 Future research suggestions 92
References 94
Appendices 101
  Appendix 1. Interview questions: PSH members 101
  Appendix 2. Interview questions: Employees 102

Figures
Figure 1. General AI-based chatbot architecture (Adapted from Casheekar et al., 2024, p. 3). 16
Figure 2. LLMs background (Adapted from Raiaan et al., 2024, p. 26848). 19
Figure 3. Key benefits of AI chatbots (Based on Feng et al., 2024; Khennouche et al., 2024; Stone et al., 2024). 22
Figure 4. Main challenges of AI chatbots (Based on Feng et al., 2024; Khennouche et al., 2024; Routray et al., 2023). 25
Figure 5. Factors that affect chatbot acceptance (Adapted from Brachten et al., 2021, p. 8). 31
Figure 6. Thematic analysis process (Adapted from Braun & Clarke, 2006, p. 87). 47
Figure 7. Conceptual model of the HR AI assistant. 87

Tables
Table 1. Interview details. 45
Table 2. Main themes and codes from PSH interviews. 48
Table 3. Main themes and codes from employee interviews. 48
Table 4. PSH interview themes and sub-themes. 51
Table 5. Employee interview themes and sub-themes. 68
Abbreviations
AI  Artificial intelligence
DEX  Digital employee experience
DL  Deep learning
DTPB  Decomposed theory of planned behavior
GPT  Generative pretrained transformer
HR  Human resources
IS  Information systems
LLM  Large language model
LM  Language model
ML  Machine learning
NLG  Natural language generation
NLP  Natural language processing
NLU  Natural language understanding
PPO  Proximal policy optimization
PSH  People support hub
RAG  Retrieval-augmented generation
RL  Reinforcement learning
RLHF  Reinforcement learning based on human feedback
RQ  Research question
Seq2Seq  Sequence-to-sequence
SFT  Supervised fine-tuning
TAM  Technology acceptance model
UI  User interface
UTAUT  Unified theory of acceptance and use of technology
UX  User experience

1 Introduction

Artificial intelligence (AI) is currently one of the biggest technological trends (Stone et al., 2024, p. 1), and its use keeps growing both in everyday life and in organizations. Many organizations adopt AI because it can alter and streamline various business processes, including human resource (HR) management processes (Stone et al., 2024, p. 1). AI can develop HR by, for example, improving employee services and transforming how organizations recruit, hire, train and manage employees (Stone et al., 2024, p. 1).

Chatbots are a well-known example of the use of AI that can support users and answer questions automatically (Fauzulhaq & Bachtiar, 2023, p. 112). Generative AI chatbots have gained a lot of attention because their interaction resembles human-to-human interaction (Ruamsuk et al., 2024, p. 464). The increased use of AI chatbots can lead to improvements in user experience (UX) (Casheekar et al., 2024, p. 2) and employee experience in businesses (Zel & Kongar, 2020, p. 177).

Generative AI-based chatbots have been studied across various fields, showing both practical applications and critical challenges. In education and customer service, chatbots have been used to enhance learning and support interactions (Adam et al., 2021; Prather et al., 2023). In healthcare they have shown potential in improving medical assistance (Mohammad-Rahimi et al., 2024; Nandini Prasad et al., 2023). Several studies have addressed challenges, ethical concerns and epistemic risks of generative AI, along with strategies for reducing these challenges (Bang et al., 2023; Casheekar et al., 2024; Fischer, 2023; Hannigan et al., 2024; Khennouche et al., 2024). Technical aspects such as prompt engineering and chatbot capabilities have been researched to optimize performance (Naik et al., 2023; Park et al., 2024; Routray et al., 2023).

In organizational contexts, AI's impact on HR processes, digital employee experience and user acceptance has been researched (Brachten et al., 2021; Stone et al., 2024; Zel & Kongar, 2020). Studies have also explored chatbot adoption frameworks, design considerations and creative potential (Haase & Hanel, 2023; Ruamsuk et al., 2024; Urbani et al., 2024). Factors affecting user loyalty and user experience regarding chatbots have also been studied (Følstad & Taylor, 2021; Zeng et al., 2023). Broader business implications and paradoxes of generative AI have been studied at multiple organizational levels, highlighting both opportunities and complexities (Feng et al., 2024; Ferraro et al., 2024).

It is important for organizations to know what generative AI chatbots can offer and how they can change organizations and their processes, services and competition.
This research focuses on generative AI chatbots from a self-service perspective in the HR context of the research's case company. The case company is a global technology company that emphasizes innovation in sustainable solutions, and it is striving towards employee self-service. The integration of an internal HR AI assistant is a part of this development. Generative AI chatbots develop fast and interest in them has grown in recent years, but there is still a need for more research on how they transform employees' work and capabilities in organizations (Ramaul et al., 2024, p. 2). Therefore, this research is relevant both in general and for the case company.

1.1 Purpose and methodology

HR chatbots handle repetitive tasks, allowing HR teams to focus on strategic work, knowledge management and continuous improvement initiatives. Chatbots shift HR models towards self-service, empowering employees to find information and resolve issues independently. Additionally, HR chatbots provide the case company's employees with autonomy, quick answers and the independence they have asked for, based on a previous thesis done for the company. This enhances employees' overall experience and satisfaction by enabling them to access information and support swiftly and efficiently. The chatbot is also believed to bring cost savings and a competitive advantage to the company in the future.

The objective of this research is to explore employee perceptions and expectations regarding digital HR services, focusing on the integration of a generative AI-powered HR AI assistant. The AI assistant is designed to assist the case company's employees with HR-related inquiries by answering frequently asked questions, guiding employees through HR processes and providing support based on users' specific needs.

This research is a case study. Through semi-structured interviews with the case company's HR's PSH (people support hub) members, this study aims to identify the proportion of tasks that could be self-service (action-based), self-service (information retrieval), tasks that require HR but could still be offered as some form of service, and tasks that must remain as tickets. Additionally, by interviewing the case company's employees, this study tries to understand the situations in which employees seek HR assistance, assess the effectiveness of the HR portal, investigate the role and reliability of the AI assistant, evaluate the impact of a self-service HR model, and gather practical feedback on the HR AI assistant. By analyzing the data, the research aims to determine whether the HR AI assistant is welcomed by employees and how the current setup of the HR portal supports its integration.

This research aims to answer the following research question (RQ): "How do employees perceive and accept the integration of a generative AI-powered HR AI assistant within digital HR services, and what are their expectations and preferences for its role in the future HR service model?"

The findings show that participants express both interest and hesitation regarding the acceptance of the AI assistant. Trust, response quality and peer support affect acceptance. Most participants want quick and efficient information provision from the AI assistant, but some expect more advanced functionalities too. They also emphasized the importance of training, human oversight, and maintaining privacy and confidentiality in increasing acceptance.
Based on the findings, a conceptual model of the AI assistant was developed to demonstrate how it could work within HR services. The AI assistant must be a supportive tool that reinforces HR services, not a replacement for HR experts.

1.2 Scope

This research is a case study that focuses on a technology company that introduced an internal HR AI assistant for employee use. The research focuses on LLM-based chatbots, particularly from the user perspective. The technical functioning of chatbots and LLMs is discussed only to the extent needed to give a basic understanding of them. The number of interviews, limited by the overall scope of a master's thesis, also constrains the research.

1.3 Structure

The first chapter is an introduction to the research. Chapter two presents the theoretical framework, containing an overview of chatbots and LLMs, their advantages and challenges, as well as technology acceptance, digital employee experience and user experience from the chatbot standpoint. The research methodology is detailed in chapter three. Research findings are presented in chapter four. Lastly, chapter five covers the discussion, conclusions and recommendations for future research.

2 Literature review

Employees can have high expectations for a generative AI chatbot. It is important to know what generative AI chatbots are, what they can offer employees and what challenges or limitations they can have, so that organizations know what they can be used for and what kind of expectations are realistic (Ferraro et al., 2024, p. 4; Routray et al., 2023, p. 4). It is also important to know how employees accept a chatbot within HR services, because otherwise it might decrease job satisfaction and remain unused (Brachten et al., 2021, p. 2). Therefore, it is crucial to know what factors affect chatbot acceptance. Employees' perceptions and use of a chatbot can also affect digital employee experience and user experience (Følstad & Taylor, 2021, p. 5; Zel & Kongar, 2020, p. 178), which is why they are important to take into consideration when implementing a chatbot in an organization. These aspects will be discussed in more detail in this chapter.

2.1 Chatbots

Generative AI chatbots can be used in various areas such as e-business (Khennouche et al., 2024, p. 1), customer service (Ferraro et al., 2024, p. 2), the medical field (Mohammad-Rahimi et al., 2024), entertainment, education (Fauzulhaq & Bachtiar, 2023, p. 112) and human resources (Stone et al., 2024, p. 4). They can be used as virtual assistants to automate processes and make tasks more efficient, ensure efficient communication across different platforms (Ruamsuk et al., 2024, p. 464), and provide information and analyze data (Ferraro et al., 2024, p. 2), among other applications.

Chatbots are computer programs that imitate human conversation (Casheekar et al., 2024, p. 6) via text or voice input (Hannigan et al., 2024, p. 3). Chatbots can be referred to with different names such as conversational agents (Casheekar et al., 2024, p. 1; Følstad et al., 2021, p. 2918; Ruamsuk et al., 2024, p. 464), chat dialogue systems (Khennouche et al., 2024, p. 1), conversational software agents (Adam et al., 2021, p. 427) or intelligent agents (Fauzulhaq & Bachtiar, 2023, p. 112). Common to these terms is that they refer to conversations or chats and to systems or agents, which are central to chatbots. In this research the term chatbot or generative AI chatbot will be used. The case company's generative AI chatbot will be referred to as the AI assistant.
There are many different types of chatbots besides AI-based chatbots. Different types of chatbots have their own distinct capabilities, models, benefits and limitations, and it is important to understand the differences between chatbots to be able to choose the most useful one for an organization's needs (Khennouche et al., 2024, p. 5).

Chatbots can be categorized, for example, by the technology they use into rule-based, retrieval-based and generative AI chatbots (Khennouche et al., 2024, p. 5). Rule-based chatbots are not adaptive to questions that are not in their predetermined dataset, but they can be useful due to their consistent and easy-to-anticipate responses (Khennouche et al., 2024, p. 6). Retrieval-based chatbots cannot answer overly complicated questions or questions that are out of their database's scope, but they are rather easy to develop and manage and can be useful, for example, for common questions in customer service (Khennouche et al., 2024, p. 8).

Chatbots can also be categorized by their purpose or role, for example into task-oriented, social and knowledge-based chatbots (Casheekar et al., 2024, p. 8). Task-oriented chatbots have a goal that they are created to fulfill, such as making a booking (Khennouche et al., 2024, p. 9). Social chatbots are meant for casual interaction, for example for entertainment, whereas knowledge-based chatbots typically provide information and answers to users by using natural language processing (NLP) techniques (Casheekar et al., 2024, p. 8).

Nowadays chatbots often use artificial intelligence to process and use natural language (Adam et al., 2021, p. 427). Therefore, they can have interactive and human-like conversations with users (Casheekar et al., 2024, p. 1). In this research the focus is on generative AI and specifically LLM-based chatbots, which will be introduced more closely in the next section.

2.1.1 Generative AI chatbots

Generative AI chatbots are normally trained on large datasets, such as websites and articles, to determine numerous possible intents and user inputs (Casheekar et al., 2024, p. 2). Intent refers to the user's desired action (Nandini Prasad et al., 2023, p. 2). Generative AI chatbots do not have predefined rules or answers to users' questions, which makes them interactive, personalized and adaptable to different and complex situations (Khennouche et al., 2024, p. 9).

A key difference to previous chatbots is that generative AI chatbots can create new ideas, not only compile existing knowledge, and provide personalized answers to users (Ramaul et al., 2024, pp. 4, 7). Generative AI chatbots typically use AI, natural language processing methods (Ruamsuk et al., 2024, p. 464), and deep learning (DL) or machine learning (ML) techniques to automatically learn from data to generate answers and new content (Ferraro et al., 2024, p. 2; Zeng et al., 2023, p. 397). Generative AI chatbots can typically be used to create text, images, videos or audio content (Naik et al., 2023, p. 2).

AI refers to computer systems that can perform complex tasks that typically require human intelligence, such as making decisions, learning, solving problems and reasoning (Dalalah & Dalalah, 2023, p. 1). AI can create text that resembles human-generated text because NLP and ML have substantially advanced machines' text generation capabilities (Dalalah & Dalalah, 2023, p. 3).

ML is an AI technique in which algorithms are trained on large datasets to recognize patterns in text and to make predictions on their own without being specifically programmed to carry out tasks (Dalalah & Dalalah, 2023, pp. 1-2). It makes computers able to gain and combine information from datasets and improve over time by learning new information (Stone et al., 2024, p. 1). Neural networks are ML models that can process data and learn over time in a way that is inspired by the human brain (Nandini Prasad et al., 2023, p. 3). Generative AI chatbots use neural networks to learn patterns in human language (Hannigan et al., 2024, p. 5).

Dalalah and Dalalah (2023, p. 1) explain that ML algorithms can be, for example, supervised, unsupervised or reinforcement learning algorithms. According to them, supervised learning means that an algorithm is trained on labeled data to produce a preferred response that it has learned to match to an input. Unsupervised learning, on the other hand, means that an algorithm is trained on unlabeled data and learns patterns in the data without specific instructions, and the produced response is not predictable.

NLP is a subfield of AI that gives computers the ability to comprehend and process natural language, such as text or speech, in a similar way as humans do, by learning from prior data and adapting to new situations (Haase & Hanel, 2023, p. 1; Naik et al., 2023, p. 2). Therefore, NLP enables humans and computers to interact in natural language (Khennouche et al., 2024, p. 15). Named entity recognition, intent classification and sentiment analysis are common NLP techniques (Khennouche et al., 2024, p. 1). Named entity recognition permits chatbots to identify and extract specific entities such as names, places and dates from user inputs and improves natural language understanding and response accuracy (Khennouche et al., 2024, p. 16). Sentiment analysis means identifying opinions and feelings in a text (Qamili et al., 2018, p. 82). NLP algorithms are trained on large datasets to make software intelligent (Stone et al., 2024, p. 3) and to learn patterns in the data (Park et al., 2024, p. 1188).
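To make one of the NLP techniques above concrete, the following minimal Python sketch performs lexicon-based sentiment analysis. The word lists and example sentences are invented for illustration only; real chatbots learn such signals from labeled training data rather than from hand-written lists.

```python
# Toy lexicon-based sentiment analysis: count positive and negative cue words.
# The word lists below are illustrative assumptions, not a real sentiment lexicon.

POSITIVE = {"great", "helpful", "fast", "easy"}
NEGATIVE = {"slow", "confusing", "wrong", "frustrating"}

def sentiment(text: str) -> str:
    words = set(text.lower().replace(".", "").replace(",", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)   # positive hits minus negative hits
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The new HR assistant is fast and easy to use."))   # -> positive
print(sentiment("The answers were wrong and confusing."))           # -> negative
```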
Natural language understanding (NLU) and natural language generation (NLG) are subfields of NLP (Naik et al., 2023, p. 2). NLU recognizes patterns in human language and in that way understands the meaning of human inputs, whereas NLG is responsible for generating responses in natural language using its knowledge base (Naik et al., 2023, p. 2). These techniques have improved and become more accurate over time through the use of additional techniques such as DL, transfer learning, fine-tuning and data augmentation (Naik et al., 2023, p. 2).

The functioning of a generative AI chatbot is illustrated in figure 1. First, a user submits an input, which the chatbot can comprehend thanks to the NLU module (Casheekar et al., 2024, p. 2). Next, the NLU module analyzes the input and extracts the entities and the user's intent, which are used to choose a suitable answer (Khennouche et al., 2024, p. 4). Determining the intent means that the chatbot analyzes whether the input is, for example, a question or a request, or whether the user is providing information (Casheekar et al., 2024, p. 2). The dialogue manager controls the conversation between the user and the chatbot, recording the conversation's status, the user's previous queries and the chatbot's answers (Casheekar et al., 2024, p. 2). Lastly, the NLG module generates an appropriate, understandable and contextually relevant answer to the input in natural language by utilizing the knowledge base and information from the dialogue manager (Casheekar et al., 2024, p. 2).

Figure 1. General AI-based chatbot architecture (Adapted from Casheekar et al., 2024, p. 3).
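The architecture in figure 1 can be sketched in a few lines of Python. The sketch below is purely illustrative: it assumes a keyword-based NLU step, a dictionary-backed dialogue manager and template-based NLG, whereas a generative AI chatbot would back each module with learned models rather than hand-written rules. The example query and answer texts are also invented for the sketch.

```python
def nlu(user_input: str) -> dict:
    """Toy NLU: detect a coarse intent and a single entity from the input text."""
    text = user_input.lower()
    intent = "question" if "?" in text or text.startswith(("what", "how", "when")) else "statement"
    entity = "vacation" if "vacation" in text else "general"
    return {"intent": intent, "entity": entity}

class DialogueManager:
    """Toy dialogue manager: records the conversation status, previous queries and answers."""
    def __init__(self) -> None:
        self.history: list[dict] = []

    def record(self, query: str, parsed: dict, answer: str) -> None:
        self.history.append({"query": query, "parsed": parsed, "answer": answer})

def nlg(parsed: dict) -> str:
    """Toy NLG: turn the detected intent and entity into a natural-language answer."""
    if parsed["intent"] == "question" and parsed["entity"] == "vacation":
        return "Your remaining vacation days are shown in the HR portal."   # illustrative template
    return "Could you describe in more detail what you need help with?"

dialogue = DialogueManager()
query = "How many vacation days do I have left?"
parsed = nlu(query)          # NLU module
reply = nlg(parsed)          # NLG module
dialogue.record(query, parsed, reply)   # dialogue manager keeps the conversation state
print(reply)
```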
Generative AI chatbots can handle many tasks, such as providing information or making a booking (Nandini Prasad et al., 2023, p. 1). However, they need a substantial amount of data and computational resources to be built and trained (Khennouche et al., 2024, p. 9). Responses are based on the training data, which means that biases and limitations in the data can be seen in the answers (Bang et al., 2023, p. 111). Generative chatbots might provide inaccurate responses because they use training data to predict a suitable response to a query but do not actually understand the meaning of the words or user inputs (Prather et al., 2023, p. 159). Also, without predefined responses, new responses might be inappropriate or unrelated to the query (Khennouche et al., 2024, p. 9). Therefore, testing and control are needed to ensure that a chatbot provides high-quality answers (Khennouche et al., 2024, p. 9). Additionally, training data is usually limited by its date (Hannigan et al., 2024, p. 5), so it needs to be updated to stay current.

Generative AI chatbots can be built using different models, for example a sequence-to-sequence (Seq2Seq) model or a large language model (LLM). Seq2Seq models are traditionally based on recurrent neural networks to process sequential data, such as text, and create responses (Khennouche et al., 2024, p. 10). LLMs, on the other hand, are principally built on the transformer architecture (Routray et al., 2023, p. 3), which is a type of neural network that processes sequential data (Naik et al., 2023, p. 2). For example, a generative pretrained transformer (GPT) is an LLM based on the transformer architecture that is typically used in chatbots to create human-like text and new content or to carry out specific tasks (Nandini Prasad et al., 2023, p. 3). ChatGPT, for instance, is a well-known GPT and generative AI model that can interact with users in a human-like manner (Park et al., 2024, p. 1190). In this study the focus is on LLMs, which will be discussed in the next section.

2.1.2 Large language models

A language model (LM) is a machine learning model that enables machines to comprehend and analyze patterns and structures in natural language (Naik et al., 2023, p. 2). LMs can be statistical LMs or neural LMs, and they can carry out many language-based tasks such as conversing with humans in natural language, responding to questions, summarizing text or writing, for example, long essays or emails (Naik et al., 2023, pp. 2-3).

An advanced version of an LM is a large language model (LLM) (Naik et al., 2023, p. 3). LLMs are one of the most notable generative AI models (Mohammad-Rahimi et al., 2024, p. 508) and a big advancement in NLP. In LLMs, "large" refers to the number of parameters, which are variables that the model adjusts to predict the following word in a sequence and to improve its performance (Routray et al., 2023, p. 3). LLMs can have millions of parameters, and the more parameters they have, the more complicated connections, language patterns and structures they can learn within a text (Park et al., 2024, p. 1188).
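As a rough illustration of what a parameter count means, the following back-of-the-envelope calculation adds up the learnable weights of a deliberately small, hypothetical transformer configuration, ignoring biases and layer normalization. The dimensions are invented for the example and do not describe any specific LLM.

```python
# Hypothetical toy transformer language model: how many learnable parameters does it have?

vocab_size = 30_000      # number of distinct tokens the model can represent
d_model    = 512         # width of each token embedding
n_layers   = 6           # number of stacked transformer blocks

embedding    = vocab_size * d_model            # token embedding table
attention    = 4 * d_model * d_model           # query, key, value and output projections per block
feed_forward = 2 * d_model * (4 * d_model)     # two linear layers with a 4x hidden expansion per block
per_layer    = attention + feed_forward
total        = embedding + n_layers * per_layer

print(f"{total:,} learnable parameters")        # roughly 34 million for this toy configuration
```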
LLMs are pretrained on large datasets of text in order to understand and create human-like responses in natural language (Routray et al., 2023, p. 1). The training data can be collected, for example, from books, websites and articles (Routray et al., 2023, p. 1) that cover different topics and contexts (Hannigan et al., 2024, p. 3). The quality and volume of training data can affect an LLM's response quality and appropriateness (Naik et al., 2023, p. 3). If there is not enough data, the responses might be inaccurate or repetitive (Hannigan et al., 2024, p. 6). After data collection, the data is preprocessed: unnecessary text is deleted and errors are corrected (Hannigan et al., 2024, p. 3). Then the data is tokenized, which means that the text is split into tokens that can range from one character to a word (Hannigan et al., 2024, p. 3). From the tokenized data a transformer learns the structure of the data and makes predictions without supervision (Hannigan et al., 2024, p. 3). Figure 2 shows the LLM training and working process.

Figure 2. LLMs background (Adapted from Raiaan et al., 2024, p. 26848).

LLMs are statistical and probabilistic models (Routray et al., 2023, p. 2). They use self-supervised learning to recognize statistical patterns in the dataset and the statistical distribution of words, organize words into text based on the probability of their appearance, and choose the most likely word combination as a response (Routray et al., 2023, p. 3). However, because LLMs are statistical models, they are not intelligent and do not actually understand the meaning or truthfulness of their decisions and outputs but simply generate sentences based on learned patterns (Hannigan et al., 2024, p. 7). Therefore, they may have difficulties comprehending irregularities in text (Casheekar et al., 2024, p. 20).

Prompting and fine-tuning enable LLMs to learn from and adjust to new data (Naik et al., 2023, p. 3). Prompt engineering refers to text commands that gradually improve the results of generative AI models by refining the results further in a specific context (Ramaul et al., 2024, p. 7). It can help LLMs perform better by altering the input structure (Park et al., 2024, p. 1187).

LLMs can also be personalized and fine-tuned for different users and needs (Casheekar et al., 2024, p. 12). Fine-tuning means the thorough refinement of an LLM so that the chatbot performs better in a particular context (Ramaul et al., 2024, p. 7). An LLM's responses can be fine-tuned through attention to make them more contextually appropriate (Routray et al., 2023, p. 3). Attention means that the LLM focuses on the important parts of the input text to accurately define the conversation's context (Naik et al., 2023, p. 3). Attention focuses on a sequence to connect and align information between two sequences. LLMs can also use self-attention to concentrate on the parts of a single input sequence to provide contextually relevant responses (Naik et al., 2023, p. 4).
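A minimal sketch of the scaled dot-product self-attention mechanism described above is shown below, using NumPy and randomly initialized projection matrices. Each token's output becomes a weighted mixture of all tokens' value vectors, which is how the model aligns information within a single input sequence. The sequence length and dimensions are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)          # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a token sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v              # project tokens into queries, keys and values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # how strongly each token attends to every token
    weights = softmax(scores, axis=-1)               # attention weights for each token sum to 1
    return weights @ V                               # each output mixes value vectors by attention weight

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # a toy sequence of four token embeddings
X = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)        # -> (4, 8): one context-aware vector per token
```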
Reinforcement learning (RL) based feedback can make LLMs more accurate and effective (Routray et al., 2023, p. 1). RL means that an algorithm gradually learns to make better decisions by being rewarded or punished for the decisions it makes (Dalalah & Dalalah, 2023, p. 2). Reinforcement learning based on human feedback (RLHF) can be used to train and improve LLMs by taking into account humans' requirements for quality (Routray et al., 2023, p. 3). The training data can be improved with feedback on the overall quality and appropriateness of the responses, which in turn improves the quality and contextual relevance of the following responses (Ramaul et al., 2024, p. 8). This way a feedback loop can be achieved, and the model can continue to improve (Ramaul et al., 2024, p. 8). RLHF consists of supervised fine-tuning (SFT), training a reward model, and proximal policy optimization (PPO).

Supervised fine-tuning of an LLM means that humans compile a set of old data for demonstration, choose a group of prompts and specify desired outputs for them (Hannigan et al., 2024, p. 3). It aims to make the LLM's outputs preferable for humans, but due to the massive amount of data it is not feasible for humans to check every possible prompt output (Hannigan et al., 2024, p. 6).

Training a reward model means that human experts score the outputs according to human preferences, because the SFT model can create various outputs for each prompt but cannot determine how good the outputs are (Hannigan et al., 2024, p. 3). The ranking is done with set quality guidelines and is used to build the reward model, which is a version of the LLM (Hannigan et al., 2024, p. 7). Proximal policy optimization is used to optimize the fine-tuned model so that it provides high-quality outputs (Naik et al., 2023, p. 5). PPO uses a value function to calculate the variation between the desired and the existing output (Hannigan et al., 2024, p. 3). Updating the model can improve a chatbot's performance (Bang et al., 2023, p. 111) but can also be expensive and time-consuming.

Retrieval-augmented generation (RAG) is a methodology to improve LLM performance by integrating reliable external sources of data in addition to the original training data (Ruamsuk et al., 2024, p. 465). RAG can tailor outputs efficiently, for example, by utilizing an organization's internal knowledge base (Ruamsuk et al., 2024, p. 465). By using an organization's internal databases, LLMs can stay up to date with information, ensuring that the outputs are reliable and accurate (Ruamsuk et al., 2024, p. 465). When implementing RAG, the model does not need to be retrained completely, and therefore it is time- and cost-effective (Ruamsuk et al., 2024, p. 465).
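The following minimal Python sketch illustrates the RAG idea described above: retrieve the most relevant snippets from an internal knowledge base and pass them to the generative model together with the question. The HR snippets, the simple keyword-overlap retriever and the stubbed call_llm function are assumptions made for this example; a production system would typically use vector embeddings for retrieval and the organization's actual LLM service for generation.

```python
# Minimal retrieval-augmented generation (RAG) sketch with a toy retriever and a stubbed LLM call.

KNOWLEDGE_BASE = [
    "Employees accrue 2.5 vacation days per month worked.",        # invented HR snippets
    "Travel expense claims must be submitted within 30 days.",
    "Remote work agreements are renewed every 12 months.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the question (toy retriever)."""
    q_words = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def call_llm(prompt: str) -> str:
    """Stand-in for the generative model; a real system would send the prompt to an LLM."""
    return f"[LLM answer grounded in]:\n{prompt}"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))        # ground the answer in retrieved text
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("How many vacation days do employees accrue per month"))
```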
LLMs can be used for many language processing tasks such as language translation, text summarization, question answering, sentiment analysis and text classification, and to automate repetitive tasks and processes (Routray et al., 2023, pp. 1, 4). Next, LLM benefits and applications in organizations will be discussed in more detail.

2.1.3 Benefits and applications

It is important to know what benefits and applications generative AI chatbots can have in an organization and in HR in order to make informed decisions about the usage and implementation of a chatbot (Ferraro et al., 2024, p. 4). Figure 3 illustrates potential benefits of chatbots. Implementing a generative AI chatbot in a business can have impacts on the individual, organizational and industrial level (Feng et al., 2024, p. 3). Generative AI chatbots can also affect digital employee experience, which will be discussed in chapter 2.3.

Figure 3. Key benefits of AI chatbots (Based on Feng et al., 2024; Khennouche et al., 2024; Stone et al., 2024).

One of the biggest benefits of generative AI chatbots in an organization is automating processes and services and making work more self-directed. Chatbots should not be used solely as search engines but also to obtain practical and usable recommendations and to automate tasks that are done manually (Ramaul et al., 2024, p. 14). Especially routine or repetitive tasks can be automated, but generative AI can also handle more complicated questions and tasks (Ferraro et al., 2024, p. 5). Automation is beneficial because manually performed tasks typically decrease productivity and increase response times (Qamili et al., 2018, p. 79). Generative AI chatbots can optimize and automate various HR tasks, business processes and services, and assist with customer support (Ferraro et al., 2024, pp. 5-6). Automation can make tasks more efficient and reduce costs, transaction times, manual labor and workload in an organization (Brachten et al., 2021, p. 2; Khennouche et al., 2024, pp. 4, 19; Routray et al., 2023, p. 4; Stone et al., 2024, p. 7). It can also enhance employee services and simplify administrative processes, for example by scanning job applications, which can save time as hundreds of applications can be sent to large companies (Stone et al., 2024, pp. 1-2).

Chatbots can also reduce the need for human intervention and help during the busiest times by offering quick and personalized answers, which can be time- and cost-effective (Khennouche et al., 2024, p. 19). They can also handle multiple queries at the same time, so users do not have to wait long for responses (Ferraro et al., 2024, p. 5). AI can automate data collection and processing, which can help with job analysis that is normally slow, and assist with interviewing, recruiting and hiring people (Stone et al., 2024, pp. 2-3). This can save time and costs, make processes more efficient and decrease the need for human work (Stone et al., 2024, pp. 2-3). However, humans should always check the recommendations from AI and make the final decisions, because the LLM might have been trained on biased data (Stone et al., 2024, p. 3).

AI can be used in chatbots to train employees and to provide information and instructions to users (Stone et al., 2024, p. 4). Chatbots can be personalized to fit employees' needs and roles to increase productivity (Feng et al., 2024, p. 4). They can have functions such as data analysis or file upload that can decrease the time used for specific tasks, enabling employees to focus on more demanding tasks (Feng et al., 2024, p. 4). Generative AI chatbots can improve training efficiency to enhance employees' abilities and make training more personalized for specific employee needs, roles and requirements, which increases productivity, satisfaction and retention at work and reduces training time and cost because only necessary courses need to be produced (Stone et al., 2024, p. 4). They can also generate training and learning material, reducing the cost and time of producing it, and give feedback more often on training and learning processes (Stone et al., 2024, pp. 4-5). AI can also enable interactive virtual simulations to further improve skills (Stone et al., 2024, p. 4). In addition, AI can help evaluate employees by gathering data from multiple sources and over a longer time period, in contrast to only one manager doing the evaluation at one point in time, decreasing potential biases (Stone et al., 2024, p. 5).

Generative AI chatbots can make knowledge work faster, enhance productivity and creativity, and generate new ideas for both creative and routine tasks (Ramaul et al., 2024).
This can reduce cognitive load, as the chatbot makes suggestions and creates new ideas for products and services by analyzing data and recognizing patterns (Feng et al., 2024, p. 4), and therefore leaves more time for more demanding tasks, which can increase productivity and enhance work quality (Ramaul et al., 2024, p. 6).

Generative AI chatbots can also affect organizations on more than just the employee level. AI can create job descriptions that emphasize the main skills and competencies required for a role (Stone et al., 2024, p. 2). It can also assist with recruitment and hiring, helping to find and attract a wide range of people with the right skills and saving time on manual work (Stone et al., 2024, p. 2). AI chatbots can enhance operational efficiency by automating tasks related to organizational strategy, processes, structure and customer communications, as well as repetitive internal tasks such as document management or scheduling (Feng et al., 2024, pp. 3, 5). They can also increase competitiveness by improving and creating new business models and by changing, for example, customer service to be more personalized and available around the clock (Feng et al., 2024, p. 5).

Chatbots can assist with customer service by providing quick answers and figuring out customer needs by asking questions and analyzing the answers, but they might not be as empathetic as humans and the answers might be too generic (Ferraro et al., 2024, p. 6). These issues can be addressed by training the chatbot on large datasets of previous questions so that it provides the most suitable answers even for complicated questions and adapts to customers' sentiments using NLP capabilities (Ferraro et al., 2024, p. 6). Chatbots can make customers more connected to the brand, but on the other hand, the lack of human interaction can reduce the connection (Ferraro et al., 2024, p. 5). This can be addressed by using chatbots to work alongside humans rather than replacing them completely, for example by directing a user to a human if the chatbot cannot respond to a question (Ferraro et al., 2024, p. 5).

Organizations should use generative AI chatbots to their full potential to find new ideas and ways to do business (Ramaul et al., 2024, p. 13). Employees should also be trained to know how and for what purposes a chatbot should be used, to improve work quality and productivity (Ramaul et al., 2024, p. 14). Even though chatbots have various benefits, they do not come without challenges, which will be discussed next.

2.1.4 Challenges

Generative AI chatbots do not always function as expected or respond accurately, which can make users less eager to use them or to follow their recommendations (Adam et al., 2021, p. 429). Unhelpful interactions with chatbots can lead to skepticism and unwillingness to use them (Adam et al., 2021, p. 439), and ignoring chatbot-generated ideas can lead to missed opportunities (Feng et al., 2024, p. 6). It is important to know what limitations LLMs have in order to know what they can and cannot be used for (Routray et al., 2023, p. 4). Potential challenges are illustrated in figure 4.

Figure 4. Main challenges of AI chatbots (Based on Feng et al., 2024; Khennouche et al., 2024; Routray et al., 2023).

Implementing and training an LLM requires large amounts of financial (Bang et al., 2023, p. 111) and computational resources, as well as time, as the training process can take multiple days (Routray et al., 2023, pp. 1, 5).
After implementation, the chatbot requires resources to be monitored, maintained and kept up to date with new information, and to work seamlessly with other systems (Khennouche et al., 2024, p. 20). Even so, chatbots might face technological challenges such as integration into existing technology, fast advancements and a lack of regulations (Feng et al., 2024, p. 7). These challenges can be mitigated by carefully considering the integration of the chatbot into existing processes and systems and by thinking about how it can improve those processes (Feng et al., 2024, p. 9).

Generative AI chatbots can have ethical issues that might make people distrust the reliability of a chatbot (Hannigan et al., 2024, p. 2; Khennouche et al., 2024, p. 16). Ethical issues include AI using users' data for training without permission (Prather et al., 2023, p. 136), being biased and stereotypical, plagiarizing, or generating misinformation (Fischer, 2023, pp. 2, 5). A lack of transparency can also reduce trust (Routray et al., 2023, p. 5), and therefore it is important to be transparent and accountable when implementing a chatbot and to utilize feedback to recognize ethical issues and improve it (Khennouche et al., 2024, p. 17). Ethical issues can also be mitigated by integrating regulations and policies for training and using LLMs (Casheekar et al., 2024, p. 20).

Privacy and security challenges can also concern users (Routray et al., 2023, p. 5). Sometimes anonymous data needs to be collected so that an LLM can perform well (Casheekar et al., 2024, p. 20) or be personalized (Ferraro et al., 2024, p. 4). It is important to keep user information private by, for example, making sure that personal information is not saved or used for training a chatbot (Casheekar et al., 2024, p. 20), being transparent about how user data is collected and used, and giving users the option to control their data (Ferraro et al., 2024, p. 7).

Chatbots can reduce the need for human interaction, but if a user prefers interacting with a human, they might be unwilling to use a chatbot (Khennouche et al., 2024, p. 20). LLMs may not understand, for example, the context, nuances of language or user emotions, or be good at reasoning or logic (Routray et al., 2023, p. 5) the way humans are, which can make them less pleasant to use. Users can be frustrated if the chatbot cannot answer their question, especially if they need help with personal or complex issues, and therefore it is important to ensure that they can interact with a human if the chatbot cannot answer their question (Ferraro et al., 2024, pp. 4, 7). Better collaboration between humans and AI can improve performance and user satisfaction, and it can be achieved by using chatbots to handle routine queries while humans address the more intricate ones (Feng et al., 2024, p. 8).

People's jobs might be threatened if they are automated, but generative AI chatbots might also create new jobs, or employees could be reskilled to reduce job losses (Ferraro et al., 2024, p. 5). It may be difficult for employees to adapt to using a generative AI chatbot in their work, and they might use it incorrectly or over-rely on it, which can decrease their sense of responsibility (Feng et al., 2024, p. 7). Therefore, employees should be trained to use AI chatbots properly (Ferraro et al., 2024, pp. 6, 9).

Chatbot performance depends largely on the scale and quality of the training data (Khennouche et al., 2024, p. 20).
If the training data is, for example, biased, inaccurate, unfair or outdated, response quality can suffer, especially in organizations where information can change quickly (Routray et al., 2023, p. 5). Biases can be learned from the training data if it is not large or diverse enough, contains misrepresentations or irrelevant data, or uses third-party datasets (Casheekar et al., 2024, pp. 18, 21).

Language bias might also be a challenge, because the majority of LLMs use training data that is in English and might therefore not be applicable to other languages (Casheekar et al., 2024, p. 21). Users who are not proficient in the language can have difficulties using and benefiting from the chatbot (Urbani et al., 2024, p. 599). The challenge of bias can be addressed by screening the training data sources, using a wide and diverse range of datasets, having prepared responses to certain queries (Casheekar et al., 2024, p. 19) and monitoring and filtering the data for unfavorable content (Ferraro et al., 2024, p. 8).

Another common challenge for LLMs is hallucination, which means that a chatbot generates incorrect answers that appear correct (Casheekar et al., 2024, p. 20). Hallucinations can arise, for example, during data collection if the data is biased, outdated or inaccurate; during data preprocessing when errors might be added or useful content removed; during tokenization if the text's context or meaning is misunderstood; or during unsupervised learning when the LLM learns to predict but does not understand the text's meaning (Hannigan et al., 2024, p. 3). Hallucinations can be managed, for example, by humans improving inputs and managing outputs (Routray et al., 2023, p. 5).

Chatbots might also generate inaccurate answers if the question is outside the chatbot's scope (Ruamsuk et al., 2024, p. 464) or if they do not understand the context, meaning or user sentiments of an input (Khennouche et al., 2024, p. 20). LLMs cannot verify that the created answers are accurate or that the used sources are trustworthy, and therefore it is important to evaluate the content properly before applying it to real tasks (Routray et al., 2023, pp. 5-6). A chatbot should also be evaluated regularly to prevent inaccurate or harmful content (Feng et al., 2024, p. 8). However, generative AI chatbots are black boxes, which means that it is not fully understood how they work and choose answers (Khennouche et al., 2024, p. 21), and therefore outputs cannot be fully controlled (Routray et al., 2023, p. 5). It is important to continuously manage and update a chatbot so that it stays relevant and accurate in its responses (Ruamsuk et al., 2024, p. 464). Chatbots that work properly and have few issues are more likely to be used and accepted, but chatbot acceptance depends on various factors, which will be discussed in the next chapter.

2.2 Technology acceptance

Chatbots can have various benefits in an organization, so it is important to understand what makes employees accept and adopt them so that they do not result in wasted time and resources (Brachten et al., 2021, pp. 1-2). If user needs are not considered and understood, satisfaction and acceptance can decrease. In this section, the factors that affect chatbot acceptance in organizations will be discussed.

The technology acceptance model (TAM) is widely used within information systems (IS) research in many areas, including generative AI (Zeng et al., 2023, p. 398).
TAM was proposed by Davis (1989), and its purpose is to predict and explain the acceptance and adoption of new technology, focusing on perceived usefulness and perceived ease of use. It is still a relevant and adaptable framework for predicting the adoption of new technologies and behavioral intentions (Urbani et al., 2024, p. 601).

Generative AI chatbots might present new challenges compared to other technologies, such as working with personal information or interoperability with existing roles and technologies, which can affect user experience and chatbot adoption (Urbani et al., 2024, pp. 600-601). Therefore, Urbani et al. (2024) extended TAM to be more specific to generative AI chatbots in organizations by adding subjective norms, compatibility, facilitating conditions and trust as additional factors.

The decomposed theory of planned behavior (dTPB) is also generally a good model for evaluating the intention to use a chatbot in an enterprise context (Brachten et al., 2021, p. 11). DTPB was proposed by Taylor and Todd (1995), and it is an extension of the theory of planned behavior by Ajzen (1991). The theory of planned behavior focuses on three constructs: attitude, subjective norms and behavioral control (Ajzen, 1991). DTPB decomposes these constructs to clarify and better understand their relationships, and its purpose is to explain behavior and the factors that influence the adoption and use of new technologies (Taylor & Todd, 1995). It can provide a better understanding and explanation of the factors that determine behavioral intention than TAM (Brachten et al., 2021, p. 6).

Since the introduction of TAM and dTPB, several extended models have been developed to better understand technology acceptance. TAM2 and TAM3 expand the original TAM by including additional factors such as individual differences, system characteristics, social influence, facilitating conditions, computer self-efficacy, output quality and job relevance (Venkatesh & Bala, 2008; Venkatesh & Davis, 2000). Similarly, the unified theory of acceptance and use of technology (UTAUT) and its extension UTAUT2 combine multiple acceptance models and highlight the role of facilitating conditions, effort expectancy, performance expectancy and social influence, as well as hedonic motivation, price value and habit, in predicting technology use and acceptance (Venkatesh et al., 2003; Venkatesh et al., 2012). While these models provide broader perspectives, this research focuses on the extended TAM and dTPB because they are suitable for analyzing and understanding employees' attitudes, trust and behavioral intention toward chatbots. In addition, they include some of the factors presented in the newer models, such as facilitating conditions, subjective norms, compatibility and self-efficacy.

The factors of the extended TAM by Urbani et al. (2024) and of dTPB will be discussed to give a broader view of both personal and contextual factors and to both predict and explain the factors of technology acceptance in an organization. As shown in figure 5, employees' intention to use a generative AI chatbot in an organization is mainly affected by their attitude towards using it, subjective norms and perceived behavioral control (Brachten et al., 2021, p. 6). The continuation of use is heavily affected by trust and satisfaction (Urbani et al., 2024, pp. 601, 604). These factors consider the technical, operational, social and psychological aspects of accepting new technology (Urbani et al., 2024, p. 604).

Figure 5. Factors that affect chatbot acceptance (Adapted from Brachten et al., 2021, p. 8).
2.2.1 Attitude towards using

Attitude towards using a chatbot has the strongest impact on employees' intention to use it (Brachten et al., 2021, p. 9). This suggests that internal motivation affects the intention to use more than external motivation (Brachten et al., 2021, p. 10). External motivation means that an individual's behavior can be influenced by external rewards or sanctions, obligations, pressure or guilt, or their own values (Zeng et al., 2023, pp. 398-399). Therefore, it is important to focus on employees' internal motivations by, for example, making sure that employees understand the usefulness and practical benefits of a chatbot (Brachten et al., 2021, pp. 1, 10) so that they have a positive attitude towards using it. It is also important to consider employees' worries, such as job loss, because they might make employees hesitant to use a chatbot (Urbani et al., 2024, p. 603).

Attitude is affected substantially by perceived usefulness, perceived ease-of-use and compatibility (Brachten et al., 2021, p. 8), as figure 5 shows. However, perceived ease-of-use and compatibility do not impact it as much as perceived usefulness, possibly because in a digital work environment new technology is an expectation (Brachten et al., 2021, p. 9) and employees can often already use technology and learn to use new tools quickly.

Perceived usefulness is one of the most important determinants of chatbot acceptance and the intention to use chatbots in an organization (Brachten et al., 2021, pp. 3-4). Perceived usefulness is employees' perception of how well chatbots can help them improve their performance and efficiency at work (Brachten et al., 2021, p. 4). Perceived usefulness can be influenced, for example, by perceived ease-of-use and the chatbot's around-the-clock availability, which makes it quick and efficient at supporting users (Urbani et al., 2024, pp. 599, 603). However, users can become discontented and perceived usefulness might decrease if the chatbot generates useless responses, does not understand emotions, or users cannot interact with a person (Urbani et al., 2024, p. 603).

Employees' intention to use chatbots is affected by perceived ease-of-use (Brachten et al., 2021, p. 4), which means how easy it is to learn and use a chatbot properly and efficiently without having to put too much effort into learning it (Zeng et al., 2023, p. 398). A chatbot is more likely to be used if it is perceived as effortless to learn and use (Brachten et al., 2021, p. 5). Perceived ease-of-use is positively affected by facilitating conditions, the chatbot's user-friendliness, fast interaction and usability for specific tasks such as data analysis (Urbani et al., 2024, p. 603). Chatbots should be designed to be accessible to everyone, taking into account diverse user needs and backgrounds, because poor design can decrease ease-of-use and make users discontented and less productive (Urbani et al., 2024, pp. 599, 603).

Compatibility also affects employees' intention to use chatbots (Brachten et al., 2021, p. 4). Compatibility is more than just technological fit; it also covers existing business processes, operations and strategies, as well as organizational culture and employees' work styles and expectations (Urbani et al., 2024, p. 603). New technology needs to work seamlessly with existing technology and be compatible with employees' roles and goals (Brachten et al., 2021, p. 4).
Good compatibility can increase user satisfaction and decrease response times, whereas poor compatibility can make work more effortful (Urbani et al., 2024, p. 603). Therefore, compatibility is important to take into consideration when implementing a chatbot in an organization.

2.2.2 Perceived behavioral control

Perceived behavioral control affects employees' intention to use a chatbot, and it refers to an individual's personal beliefs about internal and external influences beyond their control (Brachten et al., 2021, pp. 5-6). Facilitating conditions and self-efficacy impact perceived behavioral control (Brachten et al., 2021, p. 8). Self-efficacy means that a user is more likely to attempt a task, such as using a new chatbot, if they believe that they can do it (Brachten et al., 2021, p. 5).

Facilitating conditions refer to the training and support that employees can receive to learn to use new technology, making them more likely to adopt and use the technology (Urbani et al., 2024, p. 603). Lack of training or support, on the other hand, can cause issues in implementing a chatbot (Urbani et al., 2024, p. 603). Facilitating conditions also include users' perceptions of the technology and cultural norms (Urbani et al., 2024, p. 604). Therefore, it is important to consider user demographics and cultural expectations, and to design the chatbot's communication style accordingly when implementing it (Urbani et al., 2024, p. 604).

2.2.3 Subjective norms

Subjective norms refer to an individual's perception of social expectations to behave a certain way, and they affect employees' intention to use a chatbot (Brachten et al., 2021, p. 5). Subjective norms can be applied by encouraging employees to use the chatbot in their own interactions with the organization, which can build a culture that accepts new technologies, by creating positive examples and promoting success stories of how the chatbot can impact employee experience, and by continuously improving the chatbot based on user feedback (Urbani et al., 2024, pp. 602-603). Social influence and organizational culture affect technology acceptance because if some employees have successfully adopted a new technology, others might perceive it as more useful and easier to use as well (Urbani et al., 2024, p. 602).

Subjective norms are impacted by peer influence and superior influence (Brachten et al., 2021, p. 8). Both peer and superior influence affect chatbot adoption in an organization, but peer influence has a stronger effect, which might be because a superior may not be perceived as more knowledgeable about a new chatbot than a peer (Brachten et al., 2021, pp. 5, 9). Therefore, the focus should be on offering peer-based help such as a community-based approach (Brachten et al., 2021, p. 11).

2.2.4 Trust

Trust means that users believe a chatbot can reach a specific result (Brachten et al., 2021, p. 4), that its responses are accurate, and that users' private information is safe. Acceptance, adoption and the intention to continue using a chatbot are significantly affected by trust (Urbani et al., 2024, p. 604), as well as by perceived usefulness and perceived ease-of-use, whereas lack of trust can make users skeptical of new technology (Brachten et al., 2021, pp. 4, 8) and less willing to use it. Trust can increase if users are convinced of the chatbot's efficiency, reliability and safety, whereas data security or privacy issues can decrease trust (Urbani et al., 2024, p. 604).
Trust is critical especially when chatbots handle personal or confidential information, and therefore it is important to focus on security, privacy, data protection and user concerns, and to be transparent about how user data is handled and for what purposes it is collected (Urbani et al., 2024, pp. 598, 604). Users should also have control over their data and be able to refuse data collection (Urbani et al., 2024, p. 598).

2.2.5 Satisfaction

User satisfaction can refer to a user's feelings and their changes during the use of a chatbot, or to the difference between the user's expectations and the actual experience (Zeng et al., 2023, p. 398). Satisfaction significantly impacts users' intention to continue to use a chatbot (Urbani et al., 2024, p. 601). User satisfaction with generative AI is affected more by internal than external motivations, as well as by perceived usefulness and perceived ease-of-use (Zeng et al., 2023, p. 400), and by the user interface (UI) and user experience (Casheekar et al., 2024, p. 11). Employee satisfaction and UX might decrease if chatbot usage is not voluntary but rather an obligation (Brachten et al., 2021, p. 2). Therefore, there should be a balance between giving freedom and making sure that employees use the chatbot (Brachten et al., 2021, p. 11).

Technology acceptance affects employees' engagement with chatbots and therefore also the overall digital employee experience (DEX). A positive digital employee experience can increase employee satisfaction, and therefore it is important to consider how chatbots can affect digital employee experience, which will be discussed in the next section.

2.3 Digital employee experience

Digitalization and remote work have transformed employees' expectations at work, and therefore it is important to make employees feel more engaged in their work and make their job easier by offering them personalized and user-friendly applications (Zel & Kongar, 2020, p. 176). Digital employee experience is a result of organizations' digital transformation, and it can affect organizations' competitiveness and productivity (Gheidar & ShamiZanjani, 2020, p. 132). Employee experience and digital employee experience are not separate but rather affect each other (Ameu et al., 2024, p. 1289).

Employee experience is a combination of an employee's feeling of the meaningfulness of their work and their perceptions, interactions and reactions to organizational culture and practices throughout their employment (Zel & Kongar, 2020, p. 176). Good employee experience makes employees more engaged and committed at work, which also improves customer experience when customers interact with the employees (Zel & Kongar, 2020, p. 176). Digital employee experience refers to employees' overall experience in a digital workplace and the performance and UX of devices and systems (Ameu et al., 2024, p. 1289). Additionally, interacting with information technologies to improve, for example, productivity, communication and collaboration, learning, and self-service in HR systems is a part of DEX (Ameu et al., 2024, p. 1289). User-friendly and properly designed AI chatbots can enhance employee experience and make employees more willing to use them (Zel & Kongar, 2020, p. 178).

Self-service chatbots and DEX are complementary because self-service chatbots can make employees' digital work more efficient and in that way improve DEX. In turn, good DEX makes employees more willing to use chatbots.
Self-service chatbots can be advantageous, for example, by improving service quality through being flexible, personalized and accessible around the clock, and by reducing transaction times and costs (Adam et al., 2021, p. 429). For example, self-service chatbots in HR can give employees quick access to HR policies, which can affect employee experience (Ameu et al., 2024, p. 1289).

In self-service it is important that a chatbot feels socially present, human-like and understands the user's emotions so that employees use it and follow its recommendations (Adam et al., 2021, p. 437). Chatbots can appear more socially present and human-like, for example, by greeting the user, engaging in small talk and showing empathy by reacting to user inputs appropriately (Adam et al., 2021, p. 434). In a good conversation it is important to comprehend the other person and respond appropriately (Adam et al., 2021, p. 433).

Chatbots can also increase DEX by personalizing and modernizing experiences for employees and responding quickly to problems, but it is important to ensure that they have a purpose and fit the organizational culture, needs and strategy, because unnecessary tools can become wasted expenses and decrease employee experience (Zel & Kongar, 2020, pp. 177-178). AI can give HR personnel more time to focus on employees' needs while chatbots respond quickly to employees' HR-related questions (Zel & Kongar, 2020, pp. 176-177). AI tools can also assist organizations in making internal culture more inclusive (Zel & Kongar, 2020, p. 178). These benefits can enhance DEX.

Successfully adopting new technologies takes time, planning, gradual change management and preparation in an organization (Zel & Kongar, 2020, p. 178). When AI tools are used to enhance employee experience, they can create a lot of data that can be analyzed and used in decision making (Zel & Kongar, 2020, p. 178). Therefore, HR should have the skills to manage and use the data effectively to ensure successful implementation (Zel & Kongar, 2020, p. 178). However, there might also be challenges that decrease DEX when using chatbots in an organization, which were discussed in chapter 2.1.4. Therefore, employee concerns should be addressed, and tools should be secure to ensure successful adoption (Zel & Kongar, 2020, p. 178).

User engagement and satisfaction are highly dependent on UX (Casheekar et al., 2024, p. 11), and therefore factors that affect UX are important to consider too when thinking about DEX. Next, chatbot UX will be discussed.

2.4 User experience

User experience is a result of the user's inner state such as expectations, mood and motivation, the system's characteristics such as its functions, complexity and usability, and the interaction context such as the organizational environment and voluntariness of use (Hassenzahl & Tractinsky, 2006, p. 95). UX focuses more on how a user feels when they use a system than on the system itself, and therefore it is important to properly understand users' requirements and needs and the context the system is designed for (Hassenzahl, 2008, pp. 12, 14) to achieve a good UX and ensure that the chatbot suits user needs and does not become an unused and wasted resource. Chatbot UX means how users react to chatbots, and how their UI, interaction process and content can be designed to control these reactions (Følstad et al., 2021, p. 2924).
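Such design intentions, including the social-presence cues discussed in the previous section (greeting, small talk, empathy) and a clearly defined purpose and scope, are often made concrete in an assistant's configuration. The snippet below is a hypothetical illustration of what such a configuration could look like; it does not describe the case company's AI assistant, and all field names and values are invented for illustration.

```python
# Hypothetical configuration sketch showing how social-presence and scope goals
# discussed above could be expressed as assistant instructions. Illustrative only.

HR_ASSISTANT_CONFIG = {
    "greeting": "Hi! I'm the HR assistant. How can I help you today?",
    "tone": "empathetic and professional",            # react appropriately to user inputs
    "small_talk": True,                                # brief small talk to feel socially present
    "scope": ["vacation policies", "benefits", "HR portal guidance"],
    "system_prompt": (
        "You are an HR assistant. Greet the user, answer questions about company HR "
        "policies clearly and concisely, acknowledge the user's feelings, and hand the "
        "conversation over to a human HR contact whenever you are unsure."
    ),
}

print(HR_ASSISTANT_CONFIG["greeting"])
```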
Chatbot UX, efficiency and user-friendliness are affected by chatbots' ability to process natural language, that is, to understand and provide accurate, helpful and efficient responses to user inputs (Casheekar et al., 2024, p. 12). Dialogues must also be efficient, which means that a user receives a desirable response within reasonable time and effort without having to rephrase questions multiple times (Følstad & Taylor, 2021, p. 7). Chatbot UX can be affected by conversation length, but there is no ideal length for a conversation, as the quality of a chatbot's responses is more important than the length itself (Huang et al., 2024, p. 5). UX is subjective, dynamic and situation dependent (Hassenzahl & Tractinsky, 2006, p. 95), so it can change during a dialogue depending on the relevance, clarity and helpfulness of responses (Følstad & Taylor, 2021, p. 5). Therefore, it is important that a chatbot predicts the intent correctly, which can be more difficult the more complex and detailed the user input is (Følstad & Taylor, 2021, pp. 5, 12).

Chatbot usability affects UX and user-friendliness (Casheekar et al., 2024, p. 12). Usability means how easy a UI is to use (Nielsen, 2012), and it is part of usefulness, which refers to whether a system can reach a desired objective (Nielsen, 1993, pp. 24-25). Usability typically contains five components, which are UI learnability, efficiency, memorability, satisfaction and error rate (Nielsen, 1993, p. 26). A UI that is easy to use is important as it enhances user commitment and chatbot efficiency (Khennouche et al., 2024, p. 5). Chatbots communicate in natural language, which makes the UI rather inclusive and user-friendly, and therefore conversing with a chatbot tends to be easy (Ramaul et al., 2024, p. 8). Personalizing a chatbot's interaction based on user preferences or previous interactions can also improve UX (Khennouche et al., 2024, pp. 16, 21). Personalization can be, for example, tailored recommendations, contextual awareness or identifying user emotions and responding to them accordingly (Khennouche et al., 2024, p. 16). In addition, it is important to ensure that a user is directed to a human if the chatbot cannot answer a query, or UX might suffer (Følstad & Taylor, 2021, p. 9).
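As a minimal sketch of this hand-over principle, the example below checks whether an answer can be given with sufficient confidence and otherwise escalates the query to a human. The keyword lookup, the confidence scores and the threshold are hypothetical placeholders and do not describe the case company's AI assistant.

```python
# Illustrative sketch: answer a user query only when confident, otherwise escalate
# to a human agent. The retrieval step and threshold are hypothetical placeholders.

from typing import Optional, Tuple

CONFIDENCE_THRESHOLD = 0.6  # hypothetical cut-off; would need tuning in practice


def retrieve_answer(question: str) -> Tuple[Optional[str], float]:
    """Placeholder for answer generation; returns an answer and a confidence score."""
    faq = {
        "vacation": ("Employees accrue vacation days according to the local policy.", 0.9),
    }
    for keyword, (answer, confidence) in faq.items():
        if keyword in question.lower():
            return answer, confidence
    return None, 0.0


def handle_query(question: str) -> str:
    answer, confidence = retrieve_answer(question)
    if answer is None or confidence < CONFIDENCE_THRESHOLD:
        # Hand over instead of guessing, so UX does not suffer from a wrong or useless reply.
        return "I cannot answer this reliably, so I have forwarded your question to the HR team."
    return answer


print(handle_query("How many vacation days do I get?"))
print(handle_query("Can you recalculate my bonus?"))
```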
3 Research methodology

A research method is the strategy and process of designing research and gathering data, starting from philosophical assumptions (Myers, 1997). In this chapter the research philosophy and methodology, as well as data collection and analysis, are explained.

It is important to consider ethical questions and follow good scientific practices when doing research (Hirsjärvi et al., 2013, p. 23). Ethical questions related to research include, for example, information gathering, taking part in research voluntarily and informing participants about the research, not plagiarizing, being critical about results and reporting findings adequately (Hirsjärvi et al., 2013, pp. 25-26).

In this research the firm's and participants' privacy and confidentiality were guaranteed by not using confidential information about the firm or participants. The participants were kept anonymous, and it was ensured that participation was voluntary and that participants were informed how and where answers were used and what was expected from them. Interview recordings and transcriptions were kept secure and deleted after analysis and finalizing the report. Naturally, plagiarism was avoided, and proper references were provided.

The aim of this research was to explore employees' perceptions, expectations and acceptance of the integration of a generative AI-powered assistant within HR services. The research question was explored using qualitative data from semi-structured interviews with participants from the case company. The interviews were analyzed using thematic analysis.

3.1 Research philosophy

Even when research is practical, it is grounded on underlying ontological and epistemological assumptions (Rashid et al., 2019, p. 3). These assumptions concern what research is valid and what research methods are suitable for research (Myers, 1997). Therefore, questions that are philosophical in nature need to be considered when doing research (Hirsjärvi et al., 2013, p. 129).

Ontology presents questions about the nature of reality, such as what is real or what is the nature of a researched phenomenon (Hirsjärvi et al., 2013, p. 130). Ontology is typically realist or relativist, and qualitative case studies usually adopt a relativist ontology, according to which everything is relative, subjective and constructed by humans (Rashid et al., 2019, p. 3). In addition, research is guided to some extent by the values and norms of the researcher (Rashid et al., 2019, p. 3).

Epistemology concerns the origin and nature of knowledge and how knowledge is formed (Hirsjärvi et al., 2013, p. 130). It presents questions like what the relationship between a researcher and the researched phenomenon is and what the position of values is in understanding the phenomenon (Hirsjärvi et al., 2013, p. 130). Epistemology is typically objective, which means that knowledge is governed by nature's laws, or subjective, which means that knowledge consists of personal interpretations of phenomena (Rashid et al., 2019, p. 3). This research assumed a relativist ontology and a subjective epistemology, which is typical for case studies (Rashid et al., 2019, p. 6).

Chua (1986) classifies research paradigms into positivist, critical and interpretive. A paradigm is "the basic belief system or worldview that guides the investigator" (Guba & Lincoln, 1994, p. 105). It determines what the research is about and what its limits are (Guba & Lincoln, 1994, p. 108). Positivist research adopts an objective epistemology and views reality as objective and measurable, therefore typically using quantitative research methods (Rashid et al., 2019, pp. 3-4). It usually aims to test theory to better understand a phenomenon (Myers, 1997). Critical theory assumes that knowledge is political, and the aim is to criticize social systems and reveal society's conflicts, inequality and injustice (Rashid et al., 2019, p. 4).

Interpretive research adopts a subjective epistemology and, by using qualitative methods, aims to understand the research phenomenon through participants' personal experiences and knowledge (Rashid et al., 2019, p. 4). Interpretive research tries to investigate the phenomenon in its usual context and thus does not assign preconceptions to the phenomenon (Orlikowski & Baroudi, 1991, p. 6). This research assumed an interpretive paradigm, which is usual for case studies (Rashid et al., 2019, p. 6).

3.2 Case study

This research is qualitative research. Qualitative research often creates a lot of data that cannot be processed and analyzed as it is (Järvinen, 2001, p. 79). Therefore, when analyzing qualitative data, the objective is to understand and interpret a phenomenon and find consistency and structure in the data (Järvinen, 2001, p. 79), not necessarily to generalize the results (Rashid et al., 2019, p. 8).
Qualitative research methods aim to help researchers comprehend people and their surroundings (Myers, 1997). Typical for qualitative research is that information is gathered from humans in natural and real situations or, for example, through interviews with a specifically chosen target group (Hirsjärvi et al., 2013, p. 164). Choosing the population controls variation from outside and makes it easier to determine how generalizable the findings are (Järvinen, 2001, p. 78).

This research is a case study, which is a common qualitative method in information systems research (Myers, 1997). The aim of a case study is to comprehensively research a particular case, describe real phenomena and find relevant elements and connections in phenomena (Rashid et al., 2019, p. 5). In a case study it is common to do broad research and gather empirical data about the case to analyze matters related to the phenomenon (Rashid et al., 2019, p. 5). The data gathering process is usually flexible in case studies (Järvinen, 2001, p. 79). Nevertheless, a clear understanding of the research problem and the desired results is important so that it can be decided what, how and where to look for the required information (Rashid et al., 2019, p. 10). Therefore, the research question was defined early on to set a focus for the research and data collection.

This research followed Rashid et al.'s (2019) steps of the case study method. First, philosophical considerations, research logic and inquiry were considered. Next, research methods were selected and case study protocols were written. Then, research participants were contacted and interviews conducted. Lastly, the study was reported.

3.3 Data collection and analysis

Basic data collection methods are questionnaires, interviews, observation and the use of documents (Hirsjärvi et al., 2013, p. 192). Generally, it is recommended to use triangulation, which means the use of various data collection methods or sources (Järvinen, 2001, p. 146; Yin, 2018, p. 126). A more comprehensive picture of the research phenomenon can be achieved by gathering different types of data because it can cover a wider perspective of the phenomenon and make the validation of concepts more convincing (Järvinen, 2001, p. 78). There are different types of triangulation, such as using multiple methods, investigators, datasets or theories (Brannen, 1992, pp. 11-12). This research used qualitative data gathered through two kinds of interviews with slightly different perspectives.

3.3.1 Data collection from interviews

Qualitative interviews were conducted to collect data, which is a good and common method for gathering data in IS research (Myers & Newman, 2007, p. 23). The purpose of interviews is to obtain specific information from interviewees (Järvinen, 2001, p. 146). Therefore, it is important to know the case, goals and participants before gathering empirical material to facilitate the research process (Rashid et al., 2019, p. 5). Interviews were used as a systematic way of gathering information, with set goals to gain valuable, valid and reliable information.

Research interviews can be divided into structured, semi-structured and unstructured interviews (Hirsjärvi et al., 2013, p. 208). In a structured interview the questions and their order are completely determined before the interview (Hirsjärvi et al., 2013, p. 208).
In semi-structured interviews the interview themes or questions are known, but the exact form and order of the questions is not fully determined (Hirsjärvi et al., 2013, p. 208). Unstructured interviews do not have a given structure, and the interviewer tries to find out the interviewee's thoughts, opinions and emotions as they emerge (Hirsjärvi et al., 2013, p. 208). Interviews are typically a flexible method of gathering information because the interviewer can adapt information gathering during the interview according to interviewees' responses (Hirsjärvi et al., 2013, pp. 204-205).

The interviews were qualitative semi-structured interviews. Table 1 shows details of the interviews. A total of ten interviews were conducted, five with the case company's PSH members and five with potential end user employees. The interviews were conducted during March-May 2025. The interviews were held in English, conducted online via video call and recorded for transcription purposes. The recordings were transcribed automatically and afterwards checked manually for any errors. All personal information about interviewees, recordings and transcriptions was kept anonymous and secure.

Table 1. Interview details.

Interviewee | Date | Duration | Interviewee location | Method | Interviewee role | Gender
A | 3/2025 | 40 min 15 s | Pakistan, AS | Online video call | PSH member | M
B | 3/2025 | 27 min 52 s | Finland, EU | Online video call | PSH member | F
C | 4/2025 | 26 min 35 s | Sweden, EU | Online video call | PSH member | F
D | 4/2025 | 29 min 52 s | Indonesia, AS | Online video call | PSH member | F
E | 4/2025 | 23 min 05 s | Brazil, SA | Online video call | PSH member | M
F | 4/2025 | 24 min 32 s | Nigeria, AF | Online video call | Employee | M
G | 4/2025 | 34 min 38 s | Finland, EU | Online video call | Employee | F
H | 4/2025 | 47 min 43 s | Kenya, AF | Online video call | Employee | M
I | 4/2025 | 24 min 46 s | Finland, EU | Online video call | Employee | F
J | 5/2025 | 26 min 26 s | Finland, EU | Online video call | Employee | F

The interviewees had different positions in the company and different amounts of prior experience with generative AI chatbots, some being familiar with them and others having little to no experience. 60% of the interviewees were female (F) and 40% were male (M). The interviewees were from different countries, but most were from Finland (40%).

PSH members and employees were each asked their own set of questions. The interview questions for PSH members can be found in appendix 1 and for employees in appendix 2. However, the order of questions or the question structure may have changed during interviews depending on the participant's answers. Additional questions may have been asked or information provided if something relevant came up during an interview.

Interviews with PSH members aimed to identify which tasks should be self-service for employees and which need to be handled by HR, as well as the AI assistant's effects on digital employee experience and its possibilities. Interviews with employees aimed to understand in which situations employees need HR support and what the impact of a self-service HR model is, to assess the HR portal's effectiveness, and to determine the role, reliability and acceptance of the AI assistant. Some interviewees also tested the AI assistant during the interview and others discussed their previous experience with it to give practical feedback.

It should be noted that while interviews are a good method for data collection, they have challenges too (Myers & Newman, 2007, p. 4).
Interview situations are not natural, and the researcher and the interviewee usually do not know each other, which can make the interviewee distrust the researcher, or the researcher might disrupt normal behavior (Myers & Newman, 2007, p. 4). Interviews also often have a time limit, and the questions might be misunderstood or only partially understood because language is ambiguous (Myers & Newman, 2007, pp. 4-5), or interviewees might give only socially acceptable answers, reducing reliability (Hirsjärvi et al., 2013, p. 206). Answers are also context and situation bound, which can reduce generalizability (Hirsjärvi et al., 2013, p. 207). These challenges were minimized as much as possible to obtain the most reliable information possible.

3.3.2 Thematic analysis

The interviews were analyzed using thematic analysis, which is a common method for analyzing qualitative data (Naeem et al., 2023, p. 1). Thematic analysis is used in qualitative research to identify and analyze patterns in a dataset (Braun & Clarke, 2006, p. 79). The analysis followed Braun and Clarke's (2006) steps of thematic analysis, as illustrated in figure 6. Thematic analysis starts with transcribing the interview recordings, and the researcher familiarizes themselves with the data by reading it through and writing down ideas (Braun & Clarke, 2006, p. 87). Next, codes, which are short sentences or words (Naeem et al., 2023, p. 4), are assigned to the data, and the codes are analyzed and compiled into initial themes (Braun & Clarke, 2006, p. 87). Then the themes are reviewed and edited to determine whether they work in relation to selected extracts of the text and the whole dataset (Braun & Clarke, 2006, pp. 87, 91). Then the themes are refined further, named and defined clearly, and lastly the analysis is finalized and the report produced (Braun & Clarke, 2006, p. 87).

Figure 6. Thematic analysis process (Adapted from Braun & Clarke, 2006, p. 87).

After conducting and recording the interviews, the interview recordings were transcribed automatically. Transcribing is the process of writing out qualitative data word by word, and it is a common part of analyzing interviews (Hirsjärvi et al., 2013, p. 222). The transcriptions were also checked manually against the recordings to correct any possible mistakes and verify accuracy.

The data was coded with codes representing recurring patterns, connections and meanings in the data and factors important for the aim of the research. Codes from the PSH interviews are listed in table 2 and codes from the employee interviews are listed in table 3. The codes are listed under the main themes that were created from the codes. The initial codes were standardized across interviews to enhance consistency within the codes and reduce codes that have the same meaning.
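As a minimal sketch of the coding step, the example below groups coded interview segments under their themes. In this study the coding and theme development were done manually; the code labels are taken from table 2, the mapping covers only a small excerpt of the code book, and the example segments are invented for illustration.

```python
# Illustrative sketch of grouping standardized interview codes into themes,
# mirroring the thematic analysis steps described above. Not the tooling of this study.

from collections import defaultdict

# Code -> theme mapping (a small excerpt from table 2, not the full code book)
code_to_theme = {
    "Could be self-service": "Self-service for employees",
    "Self-service now": "Self-service for employees",
    "Must be tickets": "Restrictions on self-service",
    "Human touch": "Restrictions on self-service",
    "One source for information": "Benefits of the AI assistant",
    "Wrong answers frustrating": "Concerns and challenges of the AI assistant",
}

# Coded interview segments as (interviewee, code) pairs; invented examples
coded_segments = [
    ("A", "Must be tickets"),
    ("B", "Could be self-service"),
    ("C", "Wrong answers frustrating"),
    ("D", "One source for information"),
]

themes = defaultdict(list)
for interviewee, code in coded_segments:
    themes[code_to_theme[code]].append((interviewee, code))

for theme, segments in themes.items():
    print(theme, segments)
```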
Table 2. Main themes and codes from PSH interviews.

Self-service for employees: Automation; Could be self-service; Daily action-based tasks; Daily tickets about possible self-service; Is already self-service/partially; Partially self-service; Self-service now; Self-service percentage
Restrictions on self-service: Cannot be self-service; Human needs to check; Human touch; Inputting sensitive information; Must be tickets; Require HR action; Sensitive information is protected; Updating on many systems
Preventing self-service: Awareness of self-service; Difficult to find from portal; Do not want to use self-service; Do not know how to use; Educating; Encouragement; Feedback important; Tickets are important; Training for employees
Benefits of the AI assistant: AI has advantages; Benefits; Do not want to read articles; One source for information; Useful for employees; Useful for HR
Concerns and challenges of the AI assistant: AI has challenges; Concerns; Difficult to control AI use; Do not input sensitive information; Integration issue; Questions not always straightforward; Understanding errors; Wrong answers frustrating
Requirements for the AI assistant: Chatbot must work; Content management; First touchpoint; Not needed; Questions that are asked; Reading attachments; Requirement
Acceptance of the AI assistant: Acceptance; Acceptance takes time; Escalation to human; First impression; Habit of using; More people should use; Must know employee needs; Negative experience

Table 3. Main themes and codes from employee interviews.

Expectations and benefits of the AI assistant: Automate work; Benefits; Difficult to find from portal; Expectations; Helps with work; One source for information; Tickets can be slow; Tickets been user-friendly; Used for looking for information
Concerns and challenges of the AI assistant: Chatbot cannot do everything; Concerns; Full self-service difficult to achieve; How to use; Long text; Negative experience; Preference to human interaction; Unclarity about information date; Functioning; Unclear UI; What can be asked
Requirements for the AI assistant: Must work; Not needed; Requirement; User need; Service level should stay the same
Training: Encouragement; Promoting; Training
Acceptance of the AI assistant: Continuous improvement; Escalation to human; Give feedback; Good experience matters; People should use more; People were reluctant with HR portal; Willingness to use

Quotations were chosen from the data to present findings, support claims and add credibility to the results. Then the codes were compiled and used to create initial themes. The themes were developed to illustrate recurring patterns in the interview data and to address the research question and the aim of the study regarding employees' perceptions, expectations and acceptance of the AI assistant and the self-service potential in HR services. The themes were reviewed against the data to ensure coherence, consistency and accuracy to participants' perspectives. Some initial themes were combined into one theme, refined, or removed if they were not relevant. The themes provide a structured explanation of how participants perceived the role of the AI assistant within HR services.

The initial themes resulted in seven main themes in the interviews with PSH members and five main themes in the interviews with employees. The main themes of the PSH member interviews are listed in table 2, and they consider employees' self-service work, potential benefits, concerns and requirements for the AI assistant, and employees' acceptance of the AI assistant.
The main themes of the employee interviews are listed in table 3, and they present employees' expectations, concerns, requirements and acceptance of the AI assistant. Certain themes were further divided into sub-themes to reflect nuances and varying perspectives and experiences in the data. The sub-themes add depth to the findings and highlight more specific aspects of employee and HR personnel perspectives within the broader themes.

4 Research findings

In this chapter the research findings will be presented. The analysis of the interviews revealed diverse perspectives on employees' expectations, perceptions and acceptance regarding the AI assistant. Additionally, self-service possibilities emerged from the PSH interviews. Several themes were determined from the interviews, focusing on the aim of this research. The findings provide an understanding of how employees perceive and experience the integration of the AI assistant into HR services. The findings aim to answer the research question "How do employees perceive and accept the integration of a generative AI-powered HR AI Assistant within digital HR services, and what are their expectations and preferences for its role in the future HR service model?".

4.1 Findings from PSH interviews

The findings of the PSH interviews provide insights into how HR personnel perceive the integration of the AI assistant within digital HR services. Table 4 lists the themes and sub-themes of the PSH interviews. The themes are related to the study's objective of exploring expectations, perceptions and acceptance of the AI assistant. They highlight important aspects such as self-service potential, the importance of training and feedback, and benefits and concerns regarding the AI assistant. These are critical in understanding the acceptance of the AI assistant and its integration into HR services. The importance of each theme is briefly explained to illustrate its relevance to the research question and objective.

The first theme, self-service for employees, highlights tasks that are already self-service or could potentially be self-service for employees. The second theme, restrictions on self-service, clarifies the boundaries of self-service, emphasizing the need for a human touch. These themes address employees' autonomy and provide realistic expectations for the AI assistant. The theme preventing self-service identifies why self-service options may not be used and explains how employees could be encouraged to use them more, as employees have desired more autonomy. The theme benefits of the AI assistant demonstrates the perceived value and advantages of the AI assistant, which is critical for its acceptance. Concerns and challenges of the AI assistant shows barriers to trust in and acceptance of the AI assistant, which affect perceptions and use of it. Requirements for the AI assistant describes the conditions under which the AI assistant is perceived as useful, which influences both user expectations and acceptance. Acceptance of the AI assistant identifies what affects acceptance and considers the readiness to use it.

Table 4. PSH interview themes and sub-themes.
Self-service for employees: • General information • Updating personal information • Potential self-service in the future • Proportion of self-service
Restrictions on self-service: • Private or sensitive information • Importance of tickets
Preventing self-service: • Training • Feedback
Benefits of the AI assistant: • Fast and easy access to information • Effects on HR work
Concerns and challenges of the AI assistant: • Accuracy and reliability • Privacy and confidentiality
Requirements for the AI assistant: (no sub-themes)
Acceptance of the AI assistant: (no sub-themes)

4.1.1 Self-service for employees

The interviews revealed that some HR-related tasks can be self-service for employees while others must remain as tickets or be handled by HR personnel. In the interviews the current state as well as the possibilities and constraints of self-service options were brought up.

General information. All interviewees agreed that employees should have access to all general HR information and company policies as self-service without HR involvement. Interviewee B and Interviewee C, for example, mentioned that access to the rules and regulations in a country, vacation policies and referral bonus policies is self-service. However, they noted that it is important that the information is kept up to date.

Anything that they can find as information should be self-service. Possibly the rules and regulations in a country or the policies that we have in the company, they should be able to find through an AI service like if they have questions on how many vacation days a year they will earn. Those are very standard, so it should be possible to be answered by an AI instead of an actual human being, if the answer is the same for everyone. (Interviewee B)

Interviewee D also said that employees should get as much information and guidance as possible as self-service, such as guidance about processes and how to do tasks in the system. They said that this would help both employees, by being easier to access, and HR, by reducing repetitive questions.

However, Interviewee C noted that information-based questions are not always general or straightforward. Sometimes questions may be about, for example, the number of vacation days or bonuses, for which employees can find policies and rules and, in that way, answer the question. However, if employees need additional information, like specific calculations that cannot be retrieved through the HR portal, a professional is needed to provide the calculations. Therefore, these types of information-based tasks can usually be partially self-service.

I think the thing is they sometimes pose a question in a way that it's maybe not straightforward. Understandable what they mean, but for example, if they ask about how many days of vacation they have left. Maybe it's not a straightforward question. But the initial point of the question is, do you know what the policy is? And have you read the policy, because if you read the policy and know what the policy is and you know what you had for your vacation, you would know what you have left. (Interviewee C)

Updating personal information. All interviewees noted that many self-service processes are partially self-service, not fully. Employees can, for example, input data or open requests, but HR still needs to either approve something or complete the tasks. As Interviewee A noted, many HR processes must be legally compliant or are complicated and therefore require HR involvement.
They thought that full automation or self-service could also increase the risk of errors.

I would say, like, HR things are very critical. So obviously, you cannot go totally independent. So, there should be some checks, you cannot remove the human in between. There should be someone checking those things and then sending those forward … Most of the things of the HR must be legally compliant. (Interviewee A)

All interviewees also noted that some actions are already self-service, such as updating some contact information or requesting a name change. They mentioned that some changes can be made fully as self-service, such as updating an address or an emergency contact number, while some require verification or action from HR, such as changing a name. Interviewee A, Interviewee B and Interviewee D also noted that line managers have access to updating certain information about their team members, such as titles, with HR approving the changes. However, some changes require, for example, documentation that is usually done manually. Actions that cannot be fully self-service for employees are typically sensitive and must be legally compliant and therefore require HR's approval or action.

Potential fu