Organising and Working in Times of (G)AI: Ethnographic Assemblages in Modern Institutions

Submission deadline: 1 May 2024


This special issue calls for an innovative focus on generative artificial intelligence (GAI) within organisational cultures, exploring how AI impacts ethnographic methodology and offering critical analysis of how empirical research findings are represented in the times of AI. We seek to foster a more human-centric perspective on AI adoption within diverse institutions. By investigating the experiences of researchers and workers whose daily practices are disrupted by the influx of AI technologies, this issue will lay the foundation for insightful explorations that extend beyond the technological dimension.

The rise of artificial intelligence (AI) promises to be one of the most significant technological shifts of our lifetimes (von Krogh, 2018). Rapid advancements in AI have prompted widespread adoption of transformative technologies by organisations across numerous sectors. Despite vastly different epistemic approaches to achieving ‘intelligence’ (Balasubramanian et al., 2022), AI promises to complement human cognitive abilities, leading to streamlined processes, enhanced efficiency, and increased innovation (Jarrahi, 2018; Raisch & Krakowski, 2021). However, the implementation of AI also raises critical questions about its consequences for individuals, work institutions, societal structures, and human-AI interactions (e.g., Brynjolfsson & McAfee, 2014; Hitt & Brynjolfsson, 1996; Lebovitz et al., 2022; Susskind & Susskind, 2015).

The evolution of web technologies has transitioned us from Web 1.0 to Web 4.0, characterised by the integration of humans and AI within a collaborative web. Generative AI (GAI), a branch of AI able to create text, images, and other media in response to prompts, is perceived as a significant tool reshaping organisations (Bender et al., 2021; Stokel-Walker & Van Noorden, 2023). The latest large language models (LLMs), such as GPT-4.5, LaMDA, and BLOOM, possess advanced capabilities including data analysis, autonomous content creation, coding, web connectivity, and the generation of multi-modal media. GAI's rapid and versatile performance enables it to handle complex creative and analytical tasks efficiently, influencing how people work, communicate, produce, synthesise, and critically assess knowledge and the delivery of services (Haase & Hanel, 2023; Jovanovic & Campbell, 2022; Kanitz et al., 2023; Pelau et al., 2021). As this technology reshapes modern workforces, there is a pressing need to expand our understanding of how AI impacts organisational culture, and of how researchers navigate and represent this new technological era.

Aims and Objectives

First, we call for papers framing the process of studying institutions integrating AI, exploring evolving ethnographic methodologies for human and AI collaborations. How will AI impact ethnography within phenomenologically complex research 'fields', and what challenges or opportunities arise in terms of epistemic standpoints, ethics, data interpretation, and research dissemination?

Second, we invite critical analyses of empirical research or speculative essays on AI's relationship with human actors in organisations. Key considerations include: How does AI affect human workers? How do institutions change with AI, and how is this captured in ethnography? What new dynamics form between humans and technology, influencing decision-making, governance, and policy? Ultimately, we seek to question future visions for workforces and institutions.

List of topic areas

  • Ethnographic methodology and empirical research on AI adoption, integration, and collaboration within organisations, as well as ethical and practical implications.
  • The role, and ethnographic representation, of human-AI decision-making and relational processes within organisations.
  • The social and epistemological limitations of (G)AI in organisational research settings.
  • The implications and limitations of transparency and technological explainability of AI.
  • Novel concepts and empirical lenses to theorise (G)AI in organisational settings, especially speculative future qualitative studies of organisations.
  • Responsibility, accountability, ethics, and justice around (G)AI.
  • Issues and possibilities around the power and politics of (G)AI.

Submissions Information

Submissions are made using ScholarOne Manuscripts. Registration and access are available here.

Author guidelines must be strictly followed. Authors should select (from the drop-down menu) the special issue title at the appropriate step in the submission process, i.e. in response to "Please select the issue you are submitting to". 

Submitted articles must not have been previously published, nor should they be under consideration for publication anywhere else, while under review for this journal.

Click here to submit!

Key deadlines

Opening date for abstract submissions: 31 August 2023

Closing date for abstract submissions: 1 November 2023

Opening date for manuscript submissions: 1 November 2023

Closing date for manuscript submissions: 1 May 2024

Email for submissions: Landon B. Kuester, Bas W. Becker


References

Balasubramanian, N., Ye, Y., & Xu, M. (2022). Substituting human decision-making with machine learning: Implications for organizational learning. Academy of Management Review, 47(3), 448–465.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT 2021 - Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. 
Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. Norton & Company.
Haase, J., & Hanel, P. H. P. (2023). Artificial muses: Generative artificial intelligence chatbots have risen to human-level creativity. Journal of Creativity, 33(3), 100066.
Hitt, L. M., & Brynjolfsson, E. (1996). Productivity, business profitability, and consumer surplus: Three different measures of information technology value. MIS Quarterly: Management Information Systems, 20(2), 121–142. 
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. 
Jovanovic, M., & Campbell, M. (2022). Generative artificial intelligence: Trends and prospects. Computer, 55(10), 107-112.
Kanitz, R., Gonzalez, K., Briker, R., & Straatmann, T. (2023). Augmenting organizational change and strategy activities: Leveraging generative artificial intelligence. The Journal of Applied Behavioral Science, 59, 345–363.
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis. Organization Science, 33(1), 126–148.
Pelau, C., Dabija, D.-C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, Article 106855. 
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. 
Stokel-Walker, C., & Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature, 614, 214–216.
Susskind, D., & Susskind, R. (2015). The Future of the Professions. Oxford University Press.
von Krogh, G. (2018). Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries, 4(4), 404–409.