
Technology & Engagement in Personnel Selection

For a Special Issue of Journal of Managerial Psychology


Paper submission deadline: September 26, 2018

Guest Editors:

Ted B. Kinney, Select International, Inc., USA
Amie D. Lawrence, Select International, Inc., USA

Every once in a while, a new technology, an old problem, and a big idea turn into an innovation. - Dean Kamen, American inventor

Selection practitioners are operating in a dynamic time when new technologies and big ideas emerge daily. Some of these innovations are developed by industrial and organizational (I/O) psychologists; many are not. These advances in technology have introduced new techniques, tools, and challenges for psychologists measuring individual differences in personnel selection systems. To be sure, people have not changed; we still vary on a limited number of job-related knowledge, skills, abilities, and other characteristics (KSAOs), and measuring those characteristics still helps organizations understand the relative probability of success for each person they consider hiring. Until recently, however, our measurement practices and procedures relied on tried-and-true technology and established I/O practices, with only slight deviations. From paper-and-pencil testing to the early days of online proctored testing, our selection technology, while always slowly evolving, remained mostly static until the last decade or so. In the last 10 to 20 years, however, the assessment landscape has grown to include technology-enabled measurement, which has led to a rapid increase in selection innovations. New tools, new techniques, and new problems have surfaced and continue to redefine the ‘state of the art’ in applied personnel selection. It has become a full-time job for I/O practitioners to consume and consider these advances in order to stay relevant in a quickly changing selection context.

As technology evolves, organizations take advantage of efficiencies by automating everything from complex manufacturing processes to Human Resources payroll systems to call center customer interactions. One area where technology has been slow to penetrate is the organization’s selection system, but this is now shifting. Organizations are demanding selection systems that incorporate the technological advances already applied to other organizational problems, often without first (or ever) performing the rigorous scientific development and research that I/O psychologists would prefer. In the end, people and talent account for the largest share of any organization’s P&L statement, and organizations are aggressively pursuing innovations to gain a competitive advantage in the selection of talent, whether or not those innovations are endorsed by I/O research. This growing demand for technology-enabled selection processes challenges I/O psychologists to show value, establish credibility, and rethink tried-and-true approaches in order to stay relevant.

As with all change, there are challenges and opportunities. In this case, as technology advances, so does our ability to capture new and unique measures of job-related individual differences. While a standard personality inventory will always be reasonably useful in making selection decisions, what if we were able to measure similar KSAOs through the application of emerging technology? The possibility of measurement that is faster, more engaging, more accurate, more deployable, less fakeable, and generally better now exists. Forward-thinking I/O psychologists have embraced these technology-enabled techniques and tools for measuring job-related individual differences. From serious games and simulations, to complex information-processing measures, to interactive video and avatar-based item presentation, to machine learning and artificial intelligence applications, to scraping data from non-traditional sources of candidate information, our ability to capture unique data and apply them in the selection decision-making process is advancing every day. It is truly an exciting time to work as an applied selection practitioner.

For this special issue, we seek original empirical research, as well as conceptual papers that explore recent technological innovations and how they relate to selection assessment. Additionally, we seek papers that examine how these innovations have changed candidate and organizational expectations regarding engagement in the selection process.

Below is a non-exhaustive list of topics and research questions that are representative of the aims and scope of this Special Issue. We are open to a range of research questions and topics that explore unique, unusual, and different aspects of selection technologies and candidate engagement.

NOTE: The rapid pace at which technology advances and the need to quickly meet organizational needs have created a gap in the published literature. Most of the discussion and research on technology-enabled, pre-employment assessments has been presented at professional conferences, and the results have not yet reached the academic journals. One goal of this special issue is to provide a friendly outlet for the emerging research in this area.

Assessment Platforms and Delivery: One of the most obvious and visually apparent technological changes is the increase in mobile devices. Twenty years ago, paper-and-pencil assessments were commonplace, and many candidates required tutorials (e.g., on the mouse and operating system) to complete computerized assessments. Today, about two-thirds of Americans carry mobile computers (e.g., smartphones, tablets) with them everywhere they go (Pew Research Center, 2015). In addition, mobile technology companies are developing wearable technologies that provide further ways for individuals to interact with their environment (e.g., Google Glass, Fitbit, Apple Watch). Any device that can connect to the internet could theoretically be used to capture data for psychological assessment purposes.

To date, few research articles have been published on data obtained via mobile devices in selection. The articles that have been published have focused on measurement equivalence between mobile and non-mobile devices, and they show some disagreement regarding the psychometric properties of assessments delivered on mobile devices as compared to other devices. Our summary of the research concludes that traditional text-based items show equivalence across devices (e.g., Arthur, Doverspike, Munoz, Taylor & Carr, 2014; Illingworth, Morelli, Scott & Boyd, 2015). However, cognitive assessments show mixed results (Arthur et al., 2014; Brown & Grossenbacher, 2017; Impelman, 2013; King, Ryan, Kantrowitz, Grelle, & Dainis, 2015; Stephens & Wood, 2017), and interactive simulations show meaningful differences, with mobile device users at a disadvantage (Chang, Lawrence, O’Connell & Kinney, 2016; O’Connell, Chang, Lawrence & Kinney, 2016). It appears that item type matters, but what else does? Other topics worthy of exploration include the role of device features and user contexts (e.g., screen size, operating system, type of internet connection), emerging new devices (e.g., wearables), validity equivalence across devices, unique or innovative measurements only available with mobile technology, and organizational and/or candidate expectations and applicant reactions regarding the use of mobile devices.

Fidelity: With advances in devices come advances in video graphics, resolution, bandwidth, and the ability to incorporate multimedia into assessments. As a result, what used to be challenging to build and almost impossible to deliver online has become accessible. These advances increase fidelity for the candidate, making the user experience more interesting and engaging and the candidate data richer. Into this category fall gamification techniques, scalable simulations, and virtual reality.

•    Gamification is the introduction of game-like features to assessments, such as progress bars, point totals, and leaderboards (see Arthur et al., 2017, for a discussion of gamification, serious games, and simulations). While gamified assessments may feel like video games in some ways, they are not games in the true sense of the word. They are still designed to measure specific individual differences in an assessment context, but they do so by leveraging features common to video games with the goal of increasing engagement and participation. Adding gamification to non-game measures has a clear purpose in, for example, marketing survey research, where measures are aimed at a passive audience with little motivation to complete them. However, in a selection context, where job candidates are already engaged by their desire to ‘win’ a job offer, it is not clear whether gamification techniques provide ROI to organizations. Still, in an increasingly competitive labor market with a scarcity of candidates, gamified assessment content designed to engage the candidate may provide a competitive advantage. What value does gamification provide in the selection process? How does it relate to assessment psychometrics and candidate reactions?
•    Simulations have been delivered on computers for years, but advances in software and internet stability have made it easier to build and deliver online simulations. One common criticism of simulations is their context specificity: if a simulation is developed for a customer service role in a retail environment, it can be difficult to deploy that simulation in a different customer service environment. Newer technologies have made it possible to customize simulations and make them more scalable across contexts. What outcomes are related to increasing the face validity of the context? How have scalable simulations influenced utility, accuracy, and fairness?
•    Serious games are beginning to emerge in selection contexts. Serious games differ from both gamification and simulations, although these differences are often confused. Serious games are in fact games, but they are designed to measure “serious things”, such as job-related KSAOs. To date, most work on serious games has focused on measuring cognitive ability, but the future will surely bring advances in our ability to measure non-cognitive attributes through serious games. We look forward to learning more about what serious games bring to the selection table in terms of unique measurement and candidate engagement.
•    Virtual reality (VR) and augmented reality (AR) have been used for gaming and training, but so far this technology has not entered many organizations’ selection systems. It has clear advantages for building high-fidelity simulations (Aguinis, Henle, & Beaty, 2001; Kugler, 2017). Most attractively, virtual environments can simulate low-base-rate job occurrences (e.g., safety hazards) and measure candidates’ behavioral choices. While these virtual environments have the potential to measure a wide range of KSAOs, measuring behavior in a realistic environment when these low-base-rate but critically important events emerge is an especially attractive application of the technology. Further, virtual reality can track a wide range of candidate behaviors that other technologies cannot. From fine-grained motor movement (e.g., arm and head movement) to decision speed, there is a vast array of new ‘items’ for I/O psychologists to research as potential predictors.

Data Availability: In today’s world, data are everywhere. Not long ago, students and organizations were begging and pleading with individuals to complete assessments and surveys for research purposes. Now there are data collection sites (e.g., Mechanical Turk), social media platforms such as Facebook and LinkedIn, and wearable technology (e.g., watches, activity trackers). All of these technologies make gathering personal data easy, but how valuable are those data? For a discussion of Big Data, see Fink, Guzzo, and Roberts (2017) in the SIOP White Paper Series. For selection:
•    What are the pros and cons of having large amounts of data at your disposal?
•    What is the quality of the data collected on social sites? What are the data challenges?
•    What kinds of data are available now that never were before, and what are the legal implications of these data?
•    How can this information be leveraged for new and exciting opportunities?
•    How can it be used for nefarious purposes?
•    What role should this kind of data play (or not) in personnel selection?

Data Analytics/Techniques: With large data sets come new analytical techniques for mining data for meaningful relationships and trends. Recent innovations have employed techniques that “scrape” the web, pulling large amounts of data out for further analysis (Landers, Brusso, Cavanaugh, & Collmus, 2016). Other techniques take large data sets and “teach” computers a set of rules: as machines are exposed to more and more data, they learn new rules from trends in the data. This is called machine learning and falls under the broader category of artificial intelligence. These techniques can be very powerful for psychologists, but large amounts of data can also yield sample-specific results or spurious relationships, along with the tempting potential to overgeneralize. Such findings could be dangerous in the hands of a well-intentioned but uninformed user who lacks an understanding of how to interpret them. Questions worthy of discussion include:
•    What are the pros and cons of data analytics for selection?
•    What role does, or should, theory play in the analysis process?
•    What can we learn about selection from data analytics?
•    What can data analytics show I/O psychologists that is relevant to selection but invisible to our traditional techniques (e.g., non-linear relationships)?
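The spurious-relationship risk noted above is easy to demonstrate with a purely illustrative simulation (not drawn from the call itself): when an applicant outcome is correlated against enough scraped “predictors”, a handful will look statistically significant by chance alone, even when every predictor is pure noise.

```python
import numpy as np

# Hypothetical illustration: 1,000 pure-noise "scraped" features for 100 applicants,
# correlated against a pure-noise outcome. No feature has any true relationship.
rng = np.random.default_rng(42)
n_applicants, n_features = 100, 1000

X = rng.standard_normal((n_applicants, n_features))  # random candidate features
y = rng.standard_normal(n_applicants)                # random outcome (no real signal)

# Pearson correlation of each feature with the outcome
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
r = Xc.T @ yc / n_applicants                         # shape: (n_features,)

# At n = 100, |r| > ~0.197 corresponds to p < .05 (two-tailed),
# so roughly 5% of noise features will clear the bar by chance.
flagged = int(np.sum(np.abs(r) > 0.197))
print(f"{flagged} of {n_features} pure-noise features look 'significant'")
```

An analyst who selected only the flagged features and re-tested them on the same sample would “confirm” relationships that do not exist; cross-validation on a holdout sample is the standard safeguard.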

Individual Differences: Lastly, new technologies can capture personal information that has not been available to psychologists before. Some organizations have used facial recognition technology to measure facial microexpressions during video interviews (Pentland, Twyman, Burgoon, Nunamaker, & Diller, 2017). Additionally, individuals’ verbal responses and tone are being analyzed by natural language processing technology, which can assess non-cognitive traits from inflection and word choice. The same technology can be applied to written text to identify underlying constructs (Armstrong & Landers, 2018). The accessibility of this kind of information opens up new avenues for industrial psychologists in terms of new individual difference predictors, employment law, and implications for discrimination in selection. Potential research questions on this topic include:
•    What other new individual differences, previously inaccessible yet highly relevant to selection decisions, have emerged as a result of new technology?
•    What are the benefits and challenges for selection of using new technologies like these?
•    How are these measures related to traditional selection measures with regard to validity?
•    How fair are these measures with regard to protected classes?

We’ll end with another quote from the same inventor:

Innovation is so hard and so frustrating; it takes the intersections of people with courage, vision, and resources. – Dean Kamen

In this special issue, let’s assemble people with the courage and the vision, and encourage the use of resources, to research and advance our knowledge of assessment technologies.

Submission Process and Timeline

To be considered for the Special Issue, manuscripts must be submitted no later than September 26, 2018, midnight London Time. Submitted papers will undergo a double-blind peer review process and will be evaluated by at least two reviewers and a special issue editor. Acceptance decisions will be based on the review team’s judgments of the paper’s contribution on four key dimensions:

(1)    Theoretical contribution: Does the article meaningfully extend existing theory in the field of managerial psychology?
(2)    Empirical contribution: Does the article offer novel findings derived from appropriate study design and data analysis?
(3)    Practical contribution: Does the article present practical implications for improving management practice in selection?
(4)    Relevance to the special issue topic.

Authors should prepare their manuscripts for blind review according to the Journal of Managerial Psychology author guidelines, available at: . APA or AMJ style is acceptable. Remove any information that could potentially reveal the identity of the authors to reviewers. Manuscripts should be submitted electronically at: . For enquiries regarding the special issue, please contact [email protected]

Important dates
Paper submission deadline: 26 September 2018
Acceptance notification: January 2019
Publication: February 2019 (online first)


Aguinis, H., Henle, C. A., & Beaty, J. C., Jr. (2001). Virtual reality technology: A new tool for personnel selection. International Journal of Selection & Assessment, 9(1/2), 70-84.
Armstrong, M.B., & Landers, R.N. (2018). Using natural language processing to measure psychological constructs. Symposium to be presented at the 33rd annual conference of the Society for Industrial and Organizational Psychology, Chicago, IL.
Arthur, W., Jr., Doverspike, D., Kinney, T. B., & O’Connell, M. (2017). The impact of emerging technologies on selection models and research: Mobile devices and gamification as exemplars. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection (2nd ed.). New York, NY: Taylor & Francis/Psychology Press.
Brown, M. I., & Grossenbacher, M. A. (2017). Can you test me now? Equivalence of GMA tests on mobile and non-mobile devices. International Journal of Selection and Assessment, 25, 61–71.
Chang, L. C., Lawrence, A. D., O’Connell, M. S., & Kinney, T. B. (2016). Mobile vs. PC delivered simulations: Screen size matters. In T. McGlochlin (Chair), Mobile equivalence: Expanding research across assessment methods, levels, and devices. Symposium conducted at the 31st annual conference of the Society for Industrial and Organizational Psychology, Anaheim, CA.
Fink, A., Guzzo, R., & Roberts, S. (2017). Big data at work: Lessons from the field. White paper prepared by the Visibility Committee of the Society for Industrial and Organizational Psychology, Bowling Green, OH.
Illingworth, A. J., Morelli, N., Scott, J. C., & Boyd, S. (2015). Internet-based, unproctored assessments on mobile and non-mobile devices: Usage, measurement equivalence, and outcomes. Journal of Business Psychology, 30, 325-343.
Impelman, K. (2013). Mobile assessment: Exploring candidate differences and implications for selection. In N. Morelli (Chair), Mobile devices in talent assessment: Where are we now? Symposium conducted at the annual meeting of the Society for Industrial and Organizational Psychology, Houston, TX.
King, D., Ryan, A.M., Kantrowitz, T.M., Grelle, D., & Dainis, A. (2015). Mobile internet testing: An analysis of equivalence, individual differences, and reactions. International Journal of Selection and Assessment, 23, 382-394.
Kugler, L. (2017). Why virtual reality will transform a workplace near you. Communications of the ACM, 60 (8), 15-17.
Landers, R.N., Brusso, R.C., Cavanaugh, K.J., & Collmus, A.B. (2016). A primer on theory-driven web scraping: Automatic extraction of big data from the internet for use in psychological research. Psychological Methods 21(4), 475-492.
O’Connell, M. S., Chang, L., Lawrence, A. D., & Kinney, T. B. (2016, April). PC–mobile equivalence of four interactive simulations: A within-subject design. In J. Ferrell & M. Hudy (Co-chairs), Going mobile: Empirical evidence from higher-fidelity mobile simulations. Symposium conducted at the 31st annual conference of the Society for Industrial and Organizational Psychology, Anaheim, CA.
Pentland, S.J., Twyman, N.W., Burgoon, J.K., Nunamaker, J.F., & Diller, C.B.R. (2017). A video-based screening system for automated risk assessment using nuanced facial features. Journal of Management Information Systems, 34(4), 970-993.
Pew Research Center (2015, April). The smartphone difference. Available at:
Stephens, K. M., & Wood, E. C. (2017). Pinch to zoom: Effect of image-heavy mobile assessments on performance. In N. Morelli (Chair), Mobile testing “in the wild”: Apps, reactions, images, and criterion-validity. Symposium presented at the annual meeting of the Society for Industrial and Organizational Psychology, Orlando, FL.