Impact of Artificial Intelligence on Employee Strain and Insider Deviance in Cybersecurity

Short Paper

Emmanuel Anti
University of Vaasa
Wolffintie 32, 65200 Vaasa, Finland
emmanuel.anti@uwasa.fi

Duong Dang
University of Vaasa
Wolffintie 32, 65200 Vaasa, Finland
duong.dang@uwasa.fi

Abstract

This paper examines the impact of AI technologies such as Performance Monitoring Tools (PMTs) and Automated Decision-Making Systems (ADMSs) on employee strain and the development of insider deviant behavior. Drawing on General Strain Theory (GST), the study explores how workplace stressors exacerbated by AI-driven PMTs and ADMSs may increase the risk of deviant behaviors such as fraud, sabotage, and social engineering. The study employs a quantitative methodology, using surveys to gather data on employee perceptions of AI-driven PMTs and ADMSs and their effects on employee strain and insider deviance. We expect the findings to show that AI-induced stress and negative emotions increase the likelihood of insider deviance. This study aims to contribute to research on cybersecurity threats and to provide practical insights for organizations implementing AI technologies by offering strategies to mitigate workplace stress and insider threats. Future research will explore the relationship between AI integration, employee strain, and organizational security vulnerabilities.

Keywords: Artificial Intelligence, General Strain Theory, Insider Deviance

Introduction

The integration of AI technologies such as Performance Monitoring Tools (PMTs) and Automated Decision-Making Systems (ADMSs) enhances efficiency, decision-making, and innovation, driving digital transformation (Benbya et al., 2020). For instance, Amazon employs AI systems to track employee performance across productivity, quality, safety, and behavior (Spilda et al., 2024). However, AI-driven automation also increases workplace surveillance, stress, and arbitrary disciplinary actions, leading employees to feel unfairly targeted (Spilda et al., 2024). Caminiti (2023) reports that a survey by CNBC and SurveyMonkey found that 42% of workers fear AI's effect on their roles, with those earning under $50,000 showing even higher concern.
The survey further indicated that the more employees interact with AI, the more concerned they become about its impact on their jobs. Employees may face pressure to meet AI-imposed benchmarks, heightening job dissatisfaction, stress, and turnover (Hou & Fan, 2024; Konuk et al., 2023; Mikalef et al., 2023). Fear of job loss, dehumanization, and diminished human interaction exacerbates psychological distress (Matsunaga, 2022; Nazareno & Schiff, 2021), increasing the likelihood of insider deviance (Dang, 2014; Green, 2014). Financial struggles, personality problems, and social isolation further contribute to retaliatory deviant behaviors, such as sabotage or system exploitation (Liang et al., 2023; Renaud et al., 2024).

Although recent studies explore AI's workplace impact (Chesley, 2014; Kola, 2023), there is limited research on the evolution of employee stress, coping strategies, and deviant behaviors under sustained AI interaction. As AI technologies become more integrated into the workplace, concerns about their psychological and behavioral impacts are growing (Chuang et al., 2025; Leong et al., 2025). Further investigation is needed into organizational interventions that mitigate AI-related workplace stress and insider deviance. This motivates our study to investigate how AI technologies such as PMTs and ADMSs affect workplace strain and insider deviance. Further, we aim to bridge the gap between cybersecurity and deviant behavior research by connecting AI-induced strain with cybersecurity risks, providing fresh insights into how employees' interactions with AI may lead to harmful actions within organizations. Thus, we focus on the following research question: How do AI-driven PMTs and ADMSs influence employee strain, and how does this strain contribute to insider deviance?

We applied a quantitative research approach with General Strain Theory (Agnew, 1985, 1992) as our theoretical background, using PMTs and ADMSs as application contexts to answer this question. General Strain Theory explains that negative emotions caused by stressful workplace conditions can lead to deviant behavior. This study aims to contribute to research on cybersecurity threats and to provide practical insights for organizations implementing AI technologies by offering strategies to mitigate workplace stress and insider threats. The study offers a novel framework that applies General Strain Theory to AI implementation in cybersecurity by introducing AI-specific workplace stressors, providing new insights into how emerging technologies shape employee deviance. These insights can advance theory and offer practical value to organizations aiming to balance technological efficiency with employee well-being and security.

This paper is organized as follows: First, the literature review and theoretical background are presented. Next, the research methods are introduced. Finally, the paper concludes with discussions and conclusions.
Literature Review and Theoretical Background

Insider Deviance

Insider deviance refers to the violation of organizational norms by trusted individuals (e.g., employees, contractors, or vendors) that threatens organizational well-being, involving compromise, manipulation, unauthorized access, or tampering with ICT and non-ICT systems, whether intentional or unintentional, to achieve personal or organizational outcomes through cognitive and physical processes (Anti & Vartiainen, 2024; Green, 2014). Common insider deviant behaviors include social engineering, fraud, and IT sabotage, influenced by personal predispositions, psychological traits, and workplace stressors (Luo et al., 2020). Factors such as financial conflicts, dissatisfaction, and resistance to organizational changes contribute to deviance, while social frustrations, job instability, and resentment toward authority can escalate insider deviance, particularly in response to AI-driven technologies like PMTs and ADMSs (Intelligence & Subcommittee, 2017; Loureiro et al., 2023). For instance, AI-driven surveillance and automation can lead to fears of job replacement and loss of control, fostering anxiety, resentment, and potential hostility toward the organization, thereby increasing insider deviance (Loureiro et al., 2023). These emotional responses align with General Strain Theory (GST), which posits that strain, especially when perceived as unjust, can lead to deviant behavior as a coping mechanism (Agnew, 1992). Building on Dang's (2014) integration of GST and organizational injustice, we argue that AI-induced workplace changes, when perceived as unfair or overwhelming, may significantly contribute to insider deviance. These insights are increasingly relevant as AI systems alter traditional work structures and employee control, often without corresponding changes to support systems or accountability structures (Dennehy et al., 2023; Liang et al., 2016).

Implementation of AI Technologies

AI implementation involves integrating artificial intelligence technologies into organizational operations, products, and services, combining AI with expertise, data, and strategies (Alsheibani et al., 2018; McElheran et al., 2021). Technologies such as PMTs and ADMSs enhance productivity and decision-making while requiring substantial system modifications and restructuring (Agrawal et al., 2021). ADMSs shift decision-making to autonomous agents while keeping humans accountable for outcomes (Ivanov, 2023). AI-driven tools influence hiring, promotions, and disciplinary actions, raising concerns over privacy, technostress, and job insecurity (Wamba-Taguimdje et al., 2020). Additionally, misinformation, digital overdependence, and reduced social interaction affect work-life balance and employee morale (Loureiro et al., 2023; Nishant et al., 2020). For example, while AI can enhance efficiency and decision-making in the workplace, Chuang, Chiang, and Lin (2025), Ding et al. (2025), Leong et al. (2025), and Zhang et al. (2025) emphasize that it can also increase psychological strain when demands such as heightened cognitive load, performance pressure, and constant monitoring exceed the employee's available resources, such as organizational support, autonomy, or transparency. This imbalance may lead to AI-induced stress, which depletes emotional and psychological resources, resulting in fatigue, anxiety, and reduced work engagement (Hou & Fan, 2024).
Without proper change management, AI integration may lead to job displacement, resistance, and heightened stress, fostering anger toward authority and increasing the risk of insider deviance (Intelligence & Subcommittee, 2017; Leong et al., 2025; Loureiro et al., 2023; Zhang et al., 2025). According to GST, such strains, particularly when perceived as unjust or unavoidable, can foster negative emotional states such as anger or frustration, increasing the likelihood of deviant behavior when AI is poorly managed.

Hypotheses Development

General Strain Theory

This study adopts General Strain Theory (GST) to examine how AI-driven PMTs and ADMSs contribute to insider deviance by creating workplace stressors. GST, developed by Agnew (1985, 1992), explains how individuals engage in deviant behavior when exposed to stressors such as goal obstruction, loss of valued stimuli, or negative conditions. These stressors trigger negative emotions like anger, frustration, or depression, which can influence individuals' coping mechanisms and responses (Agnew, 1985, 1992). GST identifies three primary forms of strain:
(1) Failure to achieve positively valued goals, often due to the disparity between expectations and actual achievements.
(2) Removal of positive stimuli, such as job loss or career stagnation.
(3) Presentation of negative stimuli, including workplace stress, excessive monitoring, and unfair treatment.

Prior research has applied GST to workplace deviance, showing how perceived strain contributes to misconduct (Aseltine Jr et al., 2000; Broidy & Agnew, 1997; Moon & Morash, 2017). Studies by Agnew and White (1992) and Wang et al. (2022) operationalized GST variables such as strain, coping mechanisms, and emotional responses to analyze employee corruption and delinquency. For example, Wang et al. (2022) identified various strains, including resource strain, deviant subcultural strain, economic strain, work-related strain, and political promotion strain.

Building on these studies, we apply GST to AI-driven workplace stressors to examine how employees experience job insecurity, surveillance pressure, and unfair evaluations under PMTs and ADMSs. These AI-related stressors may heighten frustration and resentment toward management, increasing the likelihood of insider deviance. This study advances GST by adapting it to AI-induced workplace stressors characterized by AI-related uncertainty, perceived irreversibility of outcomes, and constant AI presence, thereby maintaining theoretical continuity, refining key variables, and enhancing reliability in explaining both malicious and non-malicious insider deviance. The devised variables, informed by these theoretical foundations, are outlined in Table 1 to illustrate the adaptation of GST in our research framework.

Table 1. Developed Variables

Construct: AI-induced work strain (AIW)
Concept: Strain arising from persistent surveillance, decision-making displacement, job insecurity, and intensified performance pressures resulting from the application of AI technologies in the workplace.
Selected sources: Hou & Fan (2024); Kambur & Yildirim (2023); Winwood et al. (2007)

Construct: AI-induced workload change (AIWC)
Concept: Changes influenced by job demands, task complexity, and loss of control due to AI technology implementation.
Selected sources: DiStaso & Shoss (2020); Hou & Fan (2024)

Construct: AI-induced perceived inequity (AIPI)
Concept: Perceptions of unfair treatment in terms of procedures, outcomes, interpersonal interactions, and decision-making from AI technologies.
Selected sources: Lowry et al. (2015); Zhang et al. (2025)
Construct: Employee strain (ES)
Concept: Strain caused by internal and external pressures.
Selected sources: D'Arcy & Teh (2019); Yazdanmehr et al. (2023)

Construct: Insider deviance (ID)
Concept: Individuals with legitimate access to an organization's resources who intentionally or unintentionally misuse this access to cause harm, often for personal gain, revenge, or ideological reasons.
Selected sources: Dang (2014); Guo et al. (2011)

AI-Induced Work Strain

Winwood et al. (2007) define work strain as arising from high workloads, tight deadlines, and workplace conflicts, which can negatively impact employees' mental and physical well-being. Hou and Fan (2024), in turn, describe AI-induced work strain as the psychological stress and emotional burden experienced by employees or managers due to the application of AI technology in the workplace. This may include anxiety over job security, difficulties adapting to AI tools, or cognitive overload from managing complex AI-driven systems. AI-driven PMTs and ADMSs present distinct work strain challenges: AI technologies change decision-making, employment roles, and surveillance, creating psychological and structural pressures (Kambur & Yildirim, 2023). Employees may experience stress when AI judgments replace human judgment without consultation, or feel overwhelmed by the rapid pace at which they are required to develop new skills and experience. Hence, we propose the following hypotheses:

H1: AI-Induced Work Strain Positively Affects Employee Strain.
H2: AI-Induced Work Strain Positively Affects Insider Deviance.

AI-Induced Workload Changes

Workload changes are dynamic and influenced by seasons and project demands (DiStaso & Shoss, 2020). High workloads contribute to psychological strain, emotional distress, and fatigue, with anticipated increases intensifying strain and decreases providing relief (DiStaso & Shoss, 2020). AI-induced workload changes affect job demands, task complexity, and control, increasing stress or reducing engagement (Hou & Fan, 2024). For example, AI technologies may reduce manual tasks by automating routine work, but they can also increase cognitive demands by requiring employees to interpret and act on AI outputs. AI-induced workload changes, especially when employees feel overloaded or under-challenged by AI, increase the likelihood of deviant behavior as a form of protest or reaction. Such reactions may stem from the perceived removal of positive stimuli, such as control over tasks and decisions, social interaction, and career development. We therefore propose these hypotheses:

H3: AI-Induced Workload Changes Positively Affect Employee Strain.
H4: AI-Induced Workload Changes Positively Affect Insider Deviance.

AI-Induced Perceived Inequity

According to Lowry et al. (2015), workplace inequity refers to employees' perception of unfair treatment in terms of procedures, outcomes, and interpersonal interactions; because the fairness of treatment strongly shapes employees' reactions, perceived inequity can lead to negative behaviors such as cyberloafing, organizational deviance, counterproductive actions, retaliation, and sabotage. AI-driven systems can create perceptions of inequity in the workplace, such as unequal access to AI resources, biased performance evaluations, lack of transparency in decision-making, or perceived favoritism in task allocation, which may lead to dissatisfaction, mistrust, disengagement, and increased resistance to AI integration (Zhang et al., 2025).
Perceived inequity represents a failure to achieve positively valued goals, as described in GST. Hence, we propose the following hypothesis:

H5: AI-Induced Perceived Inequity Positively Affects Employee Strain.

Employee Strain

Internal and external pressures, such as excessive workloads, deadlines, and conflicts between home and work, can provoke deviant outcomes, while disagreements may incite deviant behavior, jeopardizing autonomy (D'Arcy & Teh, 2019; Yazdanmehr et al., 2023). When employees feel constantly surveilled by AI-driven PMTs and are negatively affected by ADMS decisions, their stress may increase, leading to insider deviant behaviors. We therefore propose this hypothesis:

H6: Employee Strain Positively Affects Insider Deviance.

Taken together, Figure 1 presents the conceptual framework of the study. The control variables incorporated in the model include job role, level of seniority, industry, educational attainment, and employment status.

Figure 1. Research Model

Research Methodology

This study employs a quantitative research approach to examine the impact of AI-driven Performance Monitoring Tools (PMTs) and Automated Decision-Making Systems (ADMSs) on insider deviance. Quantitative methods allow for statistical analysis of relationships among measurable variables, ensuring objective and replicable findings (Kumar, 2018; Yilmaz, 2013). To achieve this, the study uses previously validated measures for all constructs, adapted slightly to fit the research context.

Data was collected through a structured survey designed to gather insights into AI-induced work strain, workload changes, perceived inequity, employee strain, and insider deviant behavior. The survey included standardized questions (Roopa & Rani, 2012) to ensure consistency in responses. To ensure diverse representation across job roles, industries, and organizational levels, a stratified random sampling approach was implemented. This method provided precise estimates and enhanced the generalizability of findings (Singh & Masuku, 2014). Participants were recruited from Europe and North America, specifically from the United States and Canada, to account for regional differences in AI adoption and workplace policies. The target respondents were employees who have direct experience with AI-driven PMTs and ADMSs in their workplace or regularly interact with AI tools as part of their job responsibilities in sectors such as technology, finance, healthcare, and manufacturing.

A key consideration in the study was the sample size calculation to ensure statistical reliability. Based on an a priori power analysis (Faul et al., 2007), it was determined that a minimum of 102 responses would be required to detect significant effects. However, to account for potential dropouts, response biases, or missing data, the study aimed to collect responses from at least 150 participants. This provided a more robust dataset for statistical analysis and increased the likelihood of capturing meaningful patterns related to AI-induced workplace strain and insider deviance. The criteria for selecting respondents ensured that participants were well-suited for the study's objectives.
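To make the sample-size reasoning reproducible, the following is a minimal sketch of an a priori power analysis for the overall F-test of a multiple regression model. The effect size (Cohen's f² = 0.15, a conventional medium effect), α = .05, target power = .80, and the predictor counts are illustrative assumptions only; the paper reports the resulting minimum of roughly 102 responses rather than the exact G*Power configuration.

```python
# Sketch of an a priori power analysis for a multiple regression F-test.
# Assumed parameters (medium effect f^2 = 0.15, alpha = .05, power = .80)
# and predictor counts are illustrative, not the paper's exact setup.
from scipy import stats


def regression_power(n, n_predictors, f2=0.15, alpha=0.05):
    """Power of the overall F-test for a regression with n observations."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    if df2 <= 0:
        return 0.0
    f_crit = stats.f.ppf(1 - alpha, df1, df2)   # critical F value
    ncp = f2 * n                                # noncentrality parameter
    return 1 - stats.ncf.cdf(f_crit, df1, df2, ncp)


def minimum_sample_size(n_predictors, f2=0.15, alpha=0.05, power=0.80):
    """Smallest n whose F-test power reaches the target power."""
    n = n_predictors + 2
    while regression_power(n, n_predictors, f2, alpha) < power:
        n += 1
    return n


if __name__ == "__main__":
    for k in (3, 7, 9):  # hypothetical numbers of predictors
        print(f"{k} predictors -> minimum n = {minimum_sample_size(k)}")
```

Depending on the number of predictors assumed (main constructs plus controls), this kind of calculation yields minimum samples in the range of roughly 80 to 115 respondents, which is consistent with the reported threshold of 102 and the oversampling target of 150.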
Suitable respondents include employees who have had direct exposure to AI-driven PMTs and ADMSs, those who frequently interact with AI tools in their work, and professionals from diverse industries and job roles. For example, the requirement for participants to have direct experience with AI tools was operationalized through screening questions confirming active use of AI-driven systems such as automated decision-making, monitoring, or recommendation tools. By incorporating a broad range of perspectives, the study will be able to account for industry-specific variations in AI adoption and workplace stressors. To control for potential confounding variables, the survey collected data on job role, seniority level, industry sector, company size, and employment type (full-time, part-time, contract, or remote work). This approach ensures that differences in workplace environments and organizational structures do not skew the results.

To measure the constructs of AI-induced workload change, AI-induced perceived inequity, and AI-induced work strain, we utilized items from several validated scales, including technostress (Nisafani et al., 2020), organizational justice (Jang et al., 2021), and workplace strain (Anis & Emil, 2022). These items were adapted to fit the AI context, and content validity was ensured through expert review. The survey design consisted of Likert-scale questions, ranging from 1 = strongly disagree to 7 = strongly agree, to measure perceptions of AI-induced work strain, workload, perceived inequity, and insider deviant behaviors. Additionally, demographic and work-related questions were included to classify respondents based on relevant criteria. The survey also incorporated techniques to minimize response biases, such as randomizing question order to prevent priming effects, including attention-check questions to ensure valid responses, and ensuring participant anonymity and confidentiality to encourage honest feedback.

To validate the effectiveness of the survey, a pilot study was conducted before full-scale data collection. The pilot phase included ten participants who tested the questionnaire for clarity and usability, two cybersecurity experts who provided content validation, and a group of doctoral students and faculty members who reviewed the methodological framework. Their feedback led to refinements in question wording and structure, improving the survey's reliability and accuracy. Participants in the pilot study confirmed that the research design, survey content, and procedures were effective and aligned with the study's objectives.

Following data collection, statistical techniques are being applied to assess the significance of AI-induced work strain in predicting insider deviance. Regression models are being used to test relationships between key variables, while robustness checks will ensure that findings remain valid across different job roles, industries, and organizational settings. By addressing the unique role of AI in workplace stressors and deviance, this study provides a methodologically rigorous approach to understanding AI's impact on insider threats.

Discussions and Conclusions

We conducted a pilot study to refine our research approach, and the results indicate that both the research design and the questionnaire worked well. Then, the survey was distributed, and we received a total of 141 responses, which met our objectives. Our next step is to analyze the data from the study.
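As an illustration of how this analysis step could be structured, below is a minimal, hedged sketch in Python. The file name, Likert-item prefixes (aiw_, aiwc_, aipi_, es_, id_), and control-variable column names are hypothetical placeholders rather than the authors' actual data layout or analysis script; the two ordinary least squares models simply mirror the hypothesized paths (H1, H3, H5 for employee strain; H2, H4, H6 for insider deviance).

```python
# Illustrative sketch of the planned regression tests (not the authors'
# actual script). Column names and the survey file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf


def construct_score(df, prefix):
    """Average all Likert items sharing a prefix into one construct score."""
    items = [c for c in df.columns if c.startswith(prefix)]
    return df[items].mean(axis=1)


# Hypothetical survey export with one row per respondent
df = pd.read_csv("survey_responses.csv")

# Construct scores for the five constructs in Table 1
for construct, prefix in [("AIW", "aiw_"), ("AIWC", "aiwc_"),
                          ("AIPI", "aipi_"), ("ES", "es_"), ("ID", "id_")]:
    df[construct] = construct_score(df, prefix)

# Control variables named in the research model (hypothetical column names)
controls = ("C(job_role) + C(seniority) + C(industry) + "
            "C(education) + C(employment_status)")

# H1, H3, H5: AI-induced stressors -> employee strain
m_es = smf.ols(f"ES ~ AIW + AIWC + AIPI + {controls}", data=df).fit()

# H2, H4, H6: AI-induced stressors and employee strain -> insider deviance
m_id = smf.ols(f"ID ~ AIW + AIWC + ES + {controls}", data=df).fit()

print(m_es.summary())
print(m_id.summary())
```

More elaborate approaches, such as structural equation modeling or a formal mediation test of employee strain between the AI-induced stressors and insider deviance, would build on the same construct-scoring step.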
We expect the results to reveal a significant relationship between AI-induced workplace strains arising from the implementation of AI-driven PMTs and ADMSs and an increase in insider deviant behaviors. Grounded in GST (Agnew, 1985, 1992), we anticipate that AI-induced strains may elicit negative emotional responses such as frustration, anxiety, resentment, and distrust, potentially resulting in deviant behaviors, including insider threats.

Theoretically, the study's results extend GST by applying it to the context of AI adoption in cybersecurity, offering a fresh and underexplored perspective on how AI technologies intensify job stress and influence employee behavior. The results will also alert organizations to the potential risks of using AI technologies such as PMTs and ADMSs without sufficient change management or employee support. The study contributes to the information systems and organizational behavior literature by introducing and empirically validating AI-specific workplace stressors, namely AI-induced work strain, perceived inequity, and workload change, as predictors of insider deviance. The study also allows for a more nuanced examination of how AI technologies affect employees, particularly in terms of emotional well-being and behavioral responses, by introducing and operationalizing three new constructs (AI-induced work strain (AIW), AI-induced workload change (AIWC), and AI-induced perceived inequity (AIPI)) that are uniquely relevant to ongoing digital transformation and automation processes in organizations.

Practically, the findings of the study can guide organizations in mitigating insider threats that may arise from AI-driven environments, thereby enhancing cybersecurity resilience against cyber threats (Järveläinen et al., 2025). The study further encourages fair AI governance and transparent decision-making processes to address employee concerns around autonomy, fairness, and control across both technical and non-technical dimensions. This is crucial, as the existing literature on cybersecurity is predominantly technocentric (Dang & Vartiainen, 2024). Additionally, the study highlights the importance of investing in employee support systems, including targeted training programs and engagement initiatives (Dang et al., 2022), to reduce psychological strain and increase organizational trust.

The findings of this study can assist organizations in formulating more effective crisis management strategies and in implementing AI integration processes that are both humane and transparent. In the context of smart cities, for example, stakeholders involved in such initiatives may leverage these insights to develop crisis management frameworks that prioritize citizen engagement and institutional transparency. By proactively addressing sociotechnical stressors such as technological unfamiliarity and management fashion trends, policymakers can enhance public trust and facilitate the smoother adoption of AI-enabled urban innovations (e.g., intelligent traffic management systems, digital services, and AI-assisted platforms) (Dang, 2025). Moreover, by highlighting the psychological costs of unchecked AI adoption, such as anxiety, resentment, and perceived injustice, this study offers a framework for balancing technological efficiency with employee well-being and organizational resilience.
The work remaining to complete the paper is as follows: we have developed the theoretical model, validated the survey instrument, and completed data collection, and we are now conducting rigorous statistical analyses, including regression, to examine the relationship between AI-driven workplace strain and insider deviance, with plans to refine the theoretical insights and generate actionable recommendations. This paper presents the conceptual framework, construct development, and research design as a foundation for empirical validation and practical implications, with full analysis and manuscript completion expected by the end of May 2025.

References

Agnew, R. (1985). A revised strain theory of delinquency. Social Forces, 64(1), 151–167.
Agnew, R. (1992). Foundation for a general strain theory of crime and delinquency. Criminology, 30(1), 47–88. https://doi.org/10.1111/j.1745-9125.1992.tb01093.x
Agnew, R., & White, H. R. (1992). An empirical test of general strain theory. Criminology, 30(4), 475–500. https://doi.org/10.1111/j.1745-9125.1992.tb01113.x
Agrawal, A. K., Gans, J. S., & Goldfarb, A. (2021). AI adoption and system-wide change. National Bureau of Economic Research. https://doi.org/10.3386/w28811
Alsheibani, S., Cheung, Y., & Messom, C. (2018). Artificial intelligence adoption: AI-readiness at firm-level. PACIS 2018 Proceedings, 4, 231–245. https://aisel.aisnet.org/pacis2018/37
Anis, M., & Emil, D. (2022). The impact of job stress on deviant workplace behavior: The mediating role of job satisfaction. American Journal of Industrial and Business Management, 12(1), 123–134.
Anti, E., & Vartiainen, T. (2024). Explanations of insider deviant behavior in information security: A systematic literature review. Communications of the Association for Information Systems, 55(1), 4. https://doi.org/10.17705/1CAIS.05501
Aseltine Jr, R. H., Gore, S., & Gordon, J. (2000). Life stress, anger and anxiety, and delinquency: An empirical test of general strain theory. Journal of Health and Social Behavior, 256–275. https://doi.org/10.2307/2676320
Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4). http://dx.doi.org/10.2139/ssrn.3741983
Broidy, L., & Agnew, R. (1997). Gender and crime: A general strain theory perspective. Journal of Research in Crime and Delinquency, 34(3), 275–306. https://doi.org/10.1177/00224278970340030
Caminiti, S. (2023). The more workers use AI, the more they worry about their job security, survey finds. CNBC.com. https://www.cnbc.com/2023/12/19/the-more-workers-use-ai-the-more-they-worry-about-their-job-security.html
Chesley, N. (2014). Information and communication technology use, work intensification and employee strain and distress. Work, Employment and Society, 28(4), 589–610.
Chuang, Y.-T., Chiang, H.-L., & Lin, A.-P. (2025). Insights from the Job Demands–Resources Model: AI's dual impact on employees' work and life well-being. International Journal of Information Management, 83, 102887. https://doi.org/10.1016/j.ijinfomgt.2025.102887
Dang, D. (2014). Predicting insider's malicious security behaviours: A general strain theory-based conceptual model. Proceedings of the International Conference on Information Resources Management (CONF-IRM 2014), 1–11. https://aisel.aisnet.org/confirm2014/10
Dang, D., Mäenpää, T., Mäkipää, J.-P., & Pasanen, T. (2022). The anatomy of citizen science projects in information systems. First Monday, 27(10). https://doi.org/10.5210/fm.v27i10.12698
Dang, D., & Vartiainen, T. (2024). Exploring socio-technical gaps in the cybersecurity of energy informatics for sustainability. In Adoption of Emerging Information and Communication Technology for Sustainability (pp. 288–304). CRC Press.
Dang, D. (2025). Digital innovation as a management trend: A case study on the adoption of smart city initiatives. In N. H. Thuan, D.-P. Duy, H.-S. Le, & T. Q. Phan (Eds.), Information Systems Research in Vietnam, Volume 3: A Shared Vision and New Frontiers (pp. 149–163). Springer Nature. https://doi.org/10.1007/978-981-97-9835-3_10
D'Arcy, J., & Teh, P.-L. (2019). Predicting employee information security policy compliance on a daily basis: The interplay of security-related stress, emotions, and neutralization. Information & Management, 56(7), 103151. https://doi.org/10.1016/j.im.2019.02.006
Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y. K., Mäntymäki, M., & Pappas, I. O. (2023). Artificial intelligence (AI) and information systems: Perspectives to responsible AI. Information Systems Frontiers, 25(1), 1–7. https://doi.org/10.1007/s10796-022-10365-3
Ding, X.-Q., Chen, H., Liu, J., Liu, Y.-Z., & Wang, X.-H. (2025). AI-induced behaviors: Bridging proactivity and deviance through motivational insights. Journal of Managerial Psychology.
DiStaso, M. J., & Shoss, M. K. (2020). Looking forward: How anticipated workload change influences the present workload–emotional strain relationship. Journal of Occupational Health Psychology, 25(6), 401. https://doi.org/10.1037/ocp0000261
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Green, D. (2014). Insider threats and employee deviance: Developing an updated typology of deviant workplace behaviors. Issues in Information Systems, 15(2), 185–189.
Guo, K. H., Yuan, Y., Archer, N. P., & Connelly, C. E. (2011). Understanding nonmalicious security violations in the workplace: A composite behavior model. Journal of Management Information Systems, 28(2), 203–236. https://doi.org/10.2753/MIS0742-1222280208
Hou, Y., & Fan, L. (2024). Working with AI: The effect of job stress on hotel employees' work engagement. Behavioral Sciences, 14(11), 1076. https://doi.org/10.3390/bs14111076
Intelligence, & Subcommittee, N. S. A. S. P. R. C. I. T. (2017). Assessing the mind of the malicious insider: Using a behavioral model and data analytics to improve continuous evaluation. Intelligence and National Security Alliance.
Ivanov, S. H. (2023). Automated decision-making. Foresight, 25(1), 4–19. https://doi.org/10.1108/FS-09-2021-0183
Jang, J., Lee, D. W., & Kwon, G. (2021). An analysis of the influence of organizational justice on organizational commitment. International Journal of Public Administration, 44(2), 146–154. https://doi.org/10.1080/01900692.2019.1672185
Järveläinen, J., Dang, D., Mekkanen, M., & Vartiainen, T. (2025). Towards a framework for improving cyber security resilience of critical infrastructure against cyber threats: A dynamic capabilities approach. Journal of Decision Systems, 34(1), 2479546. https://doi.org/10.1080/12460125.2025.2479546
Kambur, E., & Yildirim, T. (2023). From traditional to smart human resources management. International Journal of Manpower, 44(3), 422–452. https://doi.org/10.1108/IJM-10-2021-0622
Kola, V. (2023). The liberating effect of AI in organizations. 1st Bengkulu International Conference on Economics, Management, Business and Accounting (BICEMBA 2023), 275–282.
Konuk, H., Ataman, G., & Kambur, E. (2023). The effect of digitalized workplace on employees' psychological well-being: Digital Taylorism approach. Technology in Society, 74, 102302. https://doi.org/10.1016/j.techsoc.2023.102302
Kumar, R. (2018). Research methodology: A step-by-step guide for beginners.
Leong, A. M. W., Bai, J. Y., Rasheed, M. I., Hameed, Z., & Okumus, F. (2025). AI disruption threat and employee outcomes: Role of technology insecurity, thriving at work, and trait self-esteem. International Journal of Hospitality Management, 126, 104064.
Liang, N., Biros, D. P., & Luse, A. (2016). An empirical validation of malicious insider characteristics. Journal of Management Information Systems, 33(2), 361–392.
Liang, N., Biros, D. P., & Luse, A. (2023). An empirical comparison of malicious insiders and benign insiders. Journal of Computer Information Systems, 1–13.
Loureiro, S. M. C., Bilro, R. G., & Neto, D. (2023). Working with AI: Can stress bring happiness? Service Business, 17(1), 233–255.
Lowry, P. B., Posey, C., Bennett, R. (Becky) J., & Roberts, T. L. (2015). Leveraging fairness and reactance theories to deter reactive computer abuse following enhanced organisational information security policies: An empirical study of the influence of counterfactual reasoning and organisational trust. Information Systems Journal, 25(3), 193–273. https://doi.org/10.1111/isj.12063
Luo, X. R., Li, H., Hu, Q., & Xu, H. (2020). Why individual employees commit malicious computer abuse: A routine activity theory perspective. Journal of the Association for Information Systems, 21(6), 5. https://doi.org/10.17705/1jais.00646
Matsunaga, M. (2022). Uncertainty management, transformational leadership, and job performance in an AI-powered organizational context. Communication Monographs, 89(1), 118–139.
McElheran, K., Li, J. F., Brynjolfsson, E., Kroff, Z., Dinlersoz, E., Foster, L., & Zolas, N. (2021). AI adoption in America: Who, what, and where. Journal of Economics & Management Strategy.
Mikalef, P., Lemmer, K., Schaefer, C., Ylinen, M., Fjørtoft, S. O., Torvatn, H. Y., Gupta, M., & Niehaves, B. (2023). Examining how AI capabilities can foster organizational performance in public organizations. Government Information Quarterly, 40(2), 101797. https://doi.org/10.1016/j.giq.2022.101797
Moon, B., & Morash, M. (2017). A test of general strain theory in South Korea: A focus on objective/subjective strains, negative emotions, and composite conditioning factors. Crime & Delinquency, 63(6), 731–756. https://doi.org/10.1177/0011128716686486
Nazareno, L., & Schiff, D. S. (2021). The impact of automation and artificial intelligence on worker well-being. Technology in Society, 67, 101679. https://doi.org/10.1016/j.techsoc.2021.101679
Nisafani, A. S., Kiely, G., & Mahony, C. (2020). Workers' technostress: A review of its causes, strains, inhibitors, and impacts. Journal of Decision Systems, 29(sup1), 243–258.
Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104.
Renaud, K., Warkentin, M., Pogrebna, G., & van der Schyff, K. (2024). VISTA: An inclusive insider threat taxonomy, with mitigation strategies. Information & Management, 61(1), 103877.
Roopa, S., & Rani, M. S. (2012). Questionnaire designing for a survey. Journal of Indian Orthodontic Society, 46(4_suppl1), 273–277. https://doi.org/10.5005/jp-journals-10021-1104
Singh, A. S., & Masuku, M. B. (2014). Sampling techniques & determination of sample size in applied statistics research: An overview. International Journal of Economics, Commerce and Management, 2(11), 1–22.
Spilda, F. U., Brittain, L., Cant, C., Cole, M., Mozzachiodi, R., & Graham, M. (2024). Fairwork Amazon Report 2024: Transformation of the warehouse sector through AI. Fairwork Amazon 2024 Report Launch: How Is AI Transforming the Warehouse Sector.
Wamba-Taguimdje, S.-L., Fosso Wamba, S., Kala Kamdjoug, J. R., & Tchatchouang Wanko, C. E. (2020). Influence of artificial intelligence (AI) on firm performance: The business value of AI-based transformation projects. Business Process Management Journal, 26(7), 1893–1924.
Wang, K., Ma, Z., & Xia, Y. (2022). General strain theory and corruption among grassroot Chinese public officials: A mixed-method study. Deviant Behavior, 43(4), 472–489.
Winwood, P. C., Bakker, A. B., & Winefield, A. H. (2007). An investigation of the role of non–work-time behavior in buffering the effects of work strain. Journal of Occupational and Environmental Medicine, 49(8), 862–871. https://doi.org/10.1097/JOM.0b013e318124a8dc
Yazdanmehr, A., Li, Y., & Wang, J. (2023). Employee responses to information security related stress: Coping and violation intention. Information Systems Journal. https://doi.org/10.1111/isj.12417
Yilmaz, K. (2013). Comparison of quantitative and qualitative research traditions: Epistemological, theoretical, and methodological differences. European Journal of Education, 48(2), 311–325. https://doi.org/10.1111/ejed.12014
Zhang, R. Z., Kyung, E. J., Longoni, C., Cian, L., & Mrkva, K. (2025). AI-induced indifference: Unfair AI reduces prosociality. Cognition, 254, 105937. https://doi.org/10.1016/j.cognition.2024.105937