The social, ethical, economic, and political implications of misinformation
Giandomenico Di Domenico
Maria Teresa Borges-Tiago
Yang Alice Cheng
Lies are as old as humankind. However, digital technologies have given new impetus to the spread of misinformation (Allcott & Gentzkow, 2017), broadening its reach (Vosoughi, Roy & Aral, 2018) and lending mainstream popularity to conspiratorial groups that were once fringe. False information can take many different forms, reflecting whether the creation process is deliberately or inadvertently deceptive: disinformation or misinformation (Cummings & Kong, 2019). Misinformation consists of misleading information that is created and shared without the intent to manipulate people. Disinformation, by contrast, consists of misleading information deliberately designed to confuse or exploit people. Disinformation can take different forms, ranging from fake news, fake stories, clickbait, cyber propaganda, hoaxes, rumors, and spam to satire/parody and sloppy journalism (Borges-Tiago et al., 2020; Islam et al., 2020). It can also be divided into three broad categories according to its origin: (i) individuals; (ii) firms; or (iii) content farms, also known as troll factories or content mills (Novaes & de Ridder, 2021). However, the line between misinformation and disinformation is often blurred in digital environments, as disinformation becomes misinformation when users believe the content they read and pass it on.
To date, most of the scholarship has investigated the spreading patterns, detection methods, and characteristics of misinformation (e.g., Wang & Song, 2020; Zhang et al., 2020; Del Vicario et al., 2019; Pennycook & Rand, 2019; Borges-Tiago et al., 2020). However, more research is needed to understand how misinformation and disinformation evolve and what their social, ethical, economic, and political implications are.
Topics of interest
1. A different medium for a different type of misinformation
Most academic research on misinformation has focused on the effects of fake news headlines and articles on individuals (Pennycook & Rand, 2019; Kim & Dennis, 2019). However, misinformation comes in various formats, such as memes, images, and videos, leaving open interesting questions about their impact relative to textual articles.
New AI-enabled technologies allow the creation of more sophisticated forms of misinformation. Among them, deepfakes and cheapfakes have a particularly severe impact on individuals (Di Domenico & Visentin, 2020; Agarwal et al., 2019). Notably, misinformation spreads not only through social media but also through other channels such as messaging apps (de Freitas Melo et al., 2019). The massive spread of misinformation through messaging apps has forced some platforms to restrict the dissemination of misleading content (Arun, 2019).
The availability of different types of content and channels opens unprecedented possibilities for fake content creators to spread a specific type of content through a specific channel. For example, manipulated images and videos appear to be more popular on social media platforms like Facebook (Shen et al., 2019), while messaging apps are more suitable mediums for spreading fake news articles. Hence, understanding how content is customized and how a medium is chosen is essential to preventing the spread of misinformation.
Topics of interest include (but are not limited to):
- The differential impact of various types of misinformation and disinformation on individuals
- Predictors of credibility assessments of fake images and videos
- The impact of new forms of misinformation and disinformation
- The effectiveness of existing restrictions enabled by messaging apps
- How users assess the credibility of information on messaging apps
- How the type of fake content is adapted to the medium
2. The implications of misinformation
Misinformation impacts all aspects of our lives; thus, research is needed to discuss misinformation's social, ethical, economic, and political implications. From a social point of view, misinformation creates tensions among individuals, such as cyberbullying (da Fonseca & Borges-Tiago, 2021) or even violence and assaults, as in the case of mob violence in India (Banaji et al., 2019). Moreover, the algorithmic logic favoring the creation of so-called "echo chambers" - where misinformation thrives - raises lingering ethical questions.
From an economic point of view, the formats and sources of misleading information are diverse, ranging from potentially deceptive online reviews to fabricated brand-related content on social media (Chen & Cheng, 2019).
Finally, from a political standpoint, misinformation can affect important outcomes such as the US presidential elections or the Brexit referendum (Allcott & Gentzkow, 2017).
Technology enhances users' ability to create and share misleading information, since anyone can publish content online and potentially reach large audiences. However, technology can also be used to diminish the effects of such information, through reporting and flagging tools, support for fact-checking sites, or AI tools that detect deepfakes.
The implications of misinformation might be amplified or reduced depending on cultural differences (Borges-Tiago et al., 2020). A less explored culture-related aspect is language. Although research on deep bidirectional transformer language models is growing, most research efforts have targeted misinformation in English, neglecting the occurrence of this phenomenon in other languages (Song et al., 2021). Thus, understanding the influence of culture might provide valuable novel insights for designing more fine-grained public-policy interventions.
Topics of interest include (but are not limited to):
- The implications of misinformation and disinformation from a social, ethical, economic, and political point of view
- Long- vs. short-term effects of misinformation and disinformation
- The role of technology in mitigating the consequences of misinformation and disinformation
- Cross-cultural differences in responses to misinformation and disinformation
This call for papers aims to gather interdisciplinary research addressing the above thematic areas. It invites submissions from a variety of methodological, theoretical, and multidisciplinary perspectives. In bringing technical, behavioral, and managerial perspectives together, this special issue explores where and why misinformation originates and propagates in digital environments and provides an understanding of its adverse implications for individuals, organizations, and societies.
Submission and Review Schedule
Submission system open: April 1, 2022
Paper submission due: July 31, 2022
First review result: August 31, 2022
Revision due: November 30, 2022
Second review result: December 31, 2022
Final decision: February 2023
Editorial Review Board
Gil Baptista Ferreira, Universidade Beira Interior, Portugal
David Berube, NC State University, USA
Enrique Bigne, Universitat de Valencia, Spain
Dimitrios Buhalis, Bournemouth University, United Kingdom
Colin Campbell, University of San Diego, USA
Zifei Chen, University of San Francisco, USA
Kate Daunt, Cardiff University, United Kingdom
Kirk Plangger, King’s College London, United Kingdom
Bahiyah Omar, Universiti Sains Malaysia, Malaysia
Laura Grazzini, University of Florence, Italy
José Manuel Guaita Martínez, University of Valencia, Spain
Hua Jiang, Syracuse University, USA
Androniki Kavoura, University of West Attica, Greece
Raj Mahto, University of New Mexico, USA
Hanaa Osman, Bournemouth University, United Kingdom
José Miguel Pina, University of Zaragoza, Spain
Giovanni Pino, University of Salento, Italy
Annamaria Tuan, University of Bologna, Italy
Aulona Ulquinaku, University of Leeds, United Kingdom
Marco Visentin, University of Bologna, Italy
Yuan Wang, City University of Hong Kong, Hong Kong
Hui Zao, Lund University, Sweden
References
Agarwal, S., Farid, H., Gu, Y., He, M., Nagano, K., & Li, H. (2019, June). Protecting world leaders against deep fakes. In CVPR Workshops (Vol. 1).
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.
Arun, C. (2019). On WhatsApp, rumours, lynchings, and the Indian Government. Economic & Political Weekly, 54(6).
Banaji, S., Bhat, R., Agarwal, A., Passanha, N., & Sadhana Pravin, M. (2019). WhatsApp vigilantes: An exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India. London School of Economics Research Online, monograph.
Borges-Tiago, T., Tiago, F., Silva, O., Guaita Martinez, J. M., & Botella-Carrubi, D. (2020). Online users' attitudes toward fake news: Implications for brand management. Psychology & Marketing, 37(9), 1171-1184.
Chen, Z. F., & Cheng, Y. (2019). Consumer response to fake news about brands on social media: the effects of self-efficacy, media trust, and persuasion knowledge on brand trust. Journal of Product & Brand Management.
Cummings, C. L., & Kong, W. Y. (2019). Breaking Down "Fake News": Differences Between Misinformation, Disinformation, Rumors, and Propaganda. In Resilience and Hybrid Threats (pp. 188-204). IOS Press.
da Fonseca, J. M. R., & Borges-Tiago, M. T. (2021). Cyberbullying From a Research Viewpoint: A Bibliometric Approach. In Handbook of Research on Cyber Crime and Information Privacy (pp. 182-200). IGI Global.
de Freitas Melo, P., Vieira, C. C., Garimella, K., de Melo, P. O. V., & Benevenuto, F. (2019, December). Can WhatsApp counter misinformation by limiting message forwarding?. In International conference on complex networks and their applications (pp. 372-384). Springer, Cham.
Del Vicario, M., Quattrociocchi, W., Scala, A., & Zollo, F. (2019). Polarization and fake news: Early warning of potential misinformation targets. ACM Transactions on the Web (TWEB), 13(2), 1-22.
Di Domenico, G., & Visentin, M. (2020). Fake news or true lies? Reflections about problematic contents in marketing. International Journal of Market Research, 62(4), 409-417.
Islam, M. R., Liu, S., Wang, X., & Xu, G. (2020). Deep learning for misinformation detection on online social networks: a survey and new perspectives. Social Network Analysis and Mining, 10(1), 1-20.
Kim, A., & Dennis, A. R. (2019). Says who? The effects of presentation format and source rating on fake news in social media. MIS Quarterly, 43(3).
Novaes, C. D., & de Ridder, J. (2021). Is Fake News Old News?. In The Epistemology of Fake News (pp. 156-179). Oxford University Press.
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39-50.
Shen, C., Kasra, M., Pan, W., Bassett, G. A., Malloch, Y., & O'Brien, J. F. (2019). Fake images: The effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media & Society, 21(2), 438-463.
Song, C., Ning, N., Zhang, Y., & Wu, B. (2021). A multimodal fake news detection model based on crossmodal attention residual and multichannel convolutional neural networks. Information Processing & Management, 58(1), 102437.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
Wang, X., & Song, Y. (2020). Viral misinformation and echo chambers: The diffusion of rumors about genetically modified organisms on social media. Internet Research.
Zhang, W., Du, W., Bian, Y., Peng, C. H., & Jiang, Q. (2020). Seeing is not always believing: an exploratory study of clickbait in WeChat. Internet Research.