Consumer Perceptions of AI Ethics in E-Commerce : Exploring Hyper-Personalization, Inferred Data, and Algorithmic Bias in Culturally Diverse Contexts
Description
Full-text thesis in PDF format.
This thesis explores how consumers from different cultural backgrounds perceive and respond to three AI-driven practices in e-commerce: hyper-personalization, inferred data use, and algorithmic bias. It focuses on how they engage with these systems in everyday digital interactions. The study adopts an interpretivist philosophy and an abductive approach, applying Cultural Dimensions Theory and Trust Theory as interpretive lenses. A qualitative method was employed, using semi-structured interviews with ten participants from six cultural backgrounds, selected through purposive sampling to ensure cultural diversity.
The findings suggest that consumers do not passively accept AI-driven practices but engage with them conditionally. Their responses are shaped by perceived usefulness, system opacity, structural dependence on digital platforms, and cultural background. Hyper-personalization is accepted when it enhances convenience but resisted when it crosses personal boundaries, and participants tend to verify AI-generated recommendations rather than accepting them at face value. Three forms of trust were identified: outcome-based, process-based, and institution-based. Each reflects a different way consumers cope with uncertainty, particularly in opaque algorithmic systems. Perceptions of fairness were also inconsistent, as participants interpreted fairness in different ways.
The study also points to two patterns that extend the current literature. The autonomy paradox refers to a gap in which participants said they make free choices while simultaneously describing being influenced by algorithms. The transparency paradox shows a similar tension: greater disclosure does not always reassure users and, in some cases, creates more discomfort as users become more aware of the data practices behind the systems. Most participants had gradually become accustomed to data being collected in the background, interpreting this as normal in everyday use. However, their responses shifted when data trade-offs were made explicit, often leading to resistance. This suggests that how data practices are presented matters as much as the practices themselves; reactions to inferred data also stood out, as participants moved from rational evaluation to emotional discomfort.
Cultural background influenced how participants interpreted these practices but did not strictly determine behavior, instead acting as a lens through which experiences were understood.
These findings carry practical implications for firms, highlighting the need to carefully calibrate personalization, design transparency more effectively, adapt strategies across cultural contexts, and address accountability in AI-driven systems.
