The use of computers to collect data about us and create algorithms to predict something about us has been increasing for decades. Today, that is coalescing into a growing use of, and dependence on, artificial intelligence, or AI. Mobile phones, PCs, Google, Alexa and even our smart televisions use AI to attempt to make decisions for us. AI is designed to learn from the experiences it encounters in an attempt to mimic the human brain; from finishing our sentences as we type to choosing the right babysitter for our children, AI is quickly becoming a part of our lives.
But lately there has been a flurry of articles on the pros and cons of AI. They can be found in the Society for Industrial and Organizational Psychology’s TIP magazine (“The evolution of automation in talent assessment and selection”), WashingtonPost.com (“A face-scanning algorithm increasingly decides whether you deserve the job”; “Wanted: The perfect babysitter. Must pass AI scan for respect and attitude”; and “AI won’t eliminate bias in hiring”), Forbes.com (“Artificial intelligence poses new threats to Equal Employment Opportunity”) and ITPro.com (“What are the pros and cons of AI?”).
To be clear, I am not opposed to AI. On the contrary, it has already proven to have big benefits.
In general, AI makes us more productive; for example, I was able to write this article by hand and read it into my computer, saving significant time. An artificial intelligence system can sift through a million gigabytes of information in seconds, something the human brain is simply not designed to do.
Programmed correctly, AI makes fewer errors than humans. From lapses in concentration to simple mistakes, even the best of us are prone to error. This is critical in fields and industries where accuracy or precision is a top priority.
The medical field has quickly evolved to take advantage of AI. For example, a company called DeepMind is using AI to diagnose sight-threatening eye conditions. Partnering with Moorfields Eye Hospital and UCL’s Institute of Ophthalmology, its AI system reduces the time doctors spend studying thousands of eye scans and can diagnose patients within seconds.
I do not believe AI is a panacea that can replace humans in decision-making. While AI is able to capture and evaluate huge combinations of data, I have three major areas of concern that must be addressed for the science to move further along.
Garbage In, Garbage Out
One of the greatest challenges we face in AI’s decision-making mechanism is that AI is only as intelligent and insightful as the individuals responsible for its initial programming. Important errors in judgment can occur at this level, and conscious or unconscious biases may also affect the outcomes. Drew Harwell, a Washington Post technology reporter covering AI and the algorithms changing our lives, reports: “Some AI experts believe that systems like these have the potential to supercharge the biases of age or racial profiling, including flagging words or images from certain groups more often than others.”
Overzealous programmers may not enlist the help of experts from all relevant disciplines. Without that input, AI may create more problems than it solves. When AI makes hiring decisions, we can’t be sure how much of the algorithm’s design is dictated by IT experts and how much reflects input from behavioral experts such as social psychologists and industrial-organizational psychologists.
Science is More Than Data Mining
One of the fastest-growing professional positions is in data analytics: analyzing raw data in order to draw conclusions from it. The buzzword in organizations and in our professional field has become Big Data. It seems the bigger the data file and the greater the number of variables included, the more attractive the project. In many cases, this leads to data mining. Data mining can be valuable, finding patterns and correlations within large data sets and attempting to predict outcomes.
But data mining has its own problems. A good example of data mining going wrong is reported by Natalie Regoli, editor-in-chief for Vittana.org. “In 2014, an active shooter situation caused people to call Uber to escape the area. Instead of recognizing the dangerous situation, the algorithm Uber used saw a spike in demand, so it decided to increase prices.”
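To make that failure mode concrete, here is a minimal, hypothetical sketch (not Uber’s actual code) of a context-blind pricing rule. It sees only the ratio of ride requests to available drivers, so an emergency evacuation and a stadium letting out look identical to it:

```python
# Hypothetical sketch of a context-blind surge-pricing rule.
# The algorithm has no input for *why* demand rose -- only how much.
def surge_multiplier(ride_requests, available_drivers):
    """Set a price multiplier purely from the requests-to-drivers ratio."""
    ratio = ride_requests / max(available_drivers, 1)
    if ratio > 2.0:
        return 2.5   # heavy demand -> steep surge
    if ratio > 1.0:
        return 1.5   # moderate demand -> mild surge
    return 1.0       # normal pricing

# A crisis and a concert crowd produce the same spike, so the same price:
print(surge_multiplier(ride_requests=300, available_drivers=40))  # 2.5
```

Because the only input is demand volume, no amount of data will teach this rule the distinction that matters; a human would have to add a new signal (or an override) for it to behave differently in an emergency.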
Data mining has its limitations and, especially in the hiring process, we must go beyond correlations and understand why things have a relationship. Good science begins with a thoughtful prediction of relationships — we call this having a hypothesis. This level of science requires hypotheses based on human knowledge and intuition.
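The correlation problem is easy to demonstrate. In this illustrative Python sketch, the data are pure random noise standing in for mined variables; scanning 1,000 unrelated “predictors” against a random outcome still turns up a sizable correlation by chance alone, exactly the kind of finding a hypothesis-driven analysis would not accept at face value:

```python
# Illustrative sketch: with enough mined variables, some will correlate
# with the outcome by pure chance. Correlation alone is not a hypothesis.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
outcome = [random.random() for _ in range(30)]      # e.g. "job performance"
features = [[random.random() for _ in range(30)]    # 1,000 unrelated,
            for _ in range(1000)]                   # randomly mined "predictors"

best = max(abs(pearson(f, outcome)) for f in features)
print(f"strongest chance correlation: r = {best:.2f}")
```

The strongest correlation found is substantial even though every variable is random, which is why a relationship discovered by mining must be explained, and ideally predicted in advance, before anyone acts on it.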
I’ve been a student and practitioner of psychology — especially industrial-organizational psychology — for over 40 years. The most unmistakable lesson I’ve learned is that predicting behavior is very challenging. I have also learned that, when using assessment tools to describe a candidate’s potential, the whole is greater than the sum of its parts. I’ve yet to find knowledgeable colleagues who would agree to make final hiring decisions without humanly assimilating all available data.
A concerning application that is getting increasing press is the use of AI algorithms for hiring: basing decisions on facial expression, word choice and tone of voice. The largest user of this technology is a recruiting-technology firm named HireVue, whose system is now used by over 100 employers. Candidates use PC cameras to complete a video interview, during which HireVue’s proprietary technology attempts to differentiate a productive worker from a worker who isn’t “fit” for the job, based on facial movements, tone of voice and mannerisms.
Meredith Whittaker, co-founder of the AI Now Institute, sees this as pseudoscience, and she is one of many skeptics. HireVue doesn’t explain its AI decisions, and according to WashingtonPost.com’s technology reporter, Drew Harwell, “…the company doesn’t always know how the system decides on who gets labeled ‘future top performer’.”
Is it Legal? Is Diversity and Inclusion Affected?
The mystery behind AI’s decisions in hiring practices is destined to bring out the skeptics. To avoid this, hiring companies need to truly understand the decision criteria. But the legal critiques have already begun.
In August 2019, Illinois Governor J.B. Pritzker signed into law first-of-its-kind legislation regulating the use of artificial intelligence in video interviewing during the hiring process. The act goes into effect in January 2020.
In addition, there are already questions of whether AI may perpetuate biases and lead to discriminatory hiring recommendations. An example provided by employment and labor attorney Paul Starkman is Amazon’s recent abandonment of its multiyear project using AI to screen resumes, because it could not teach the system to avoid illegal discrimination in hiring.
The list goes on. The Electronic Privacy Information Center (EPIC), a public-interest research center based in Washington, D.C., recently asked the Federal Trade Commission to investigate HireVue. The EPIC complaint focuses on potential bias against women and minorities. EPIC also raises concerns over how AI algorithms assess overweight candidates, candidates who suffer from depression and non-native English speakers.
There are many advantages to AI, particularly when it comes to data handling, decision making in technical diagnoses, and precision or accuracy challenges. But AI must be used cautiously in arenas where the goal is predicting behavior, especially future work performance, where careers are determined.
As stated earlier, predicting behavior is very challenging. I repeat: in this area, “the whole is greater than the sum of its parts.” I believe human input gets us closer to the “whole.” Therefore, when I hear that AI is making your hiring decisions, I cringe.
As the past chief assessment psychologist for a Fortune 100 company and now CEO of E.A.S.I-Consult, I always tell our customers to combine their own thoughts with the objective test data they’ve collected. If we can combine AI with human input, we get closer to treating everyone fairly. Otherwise, we and our organizations are taking a very big risk.
For More Information
For related articles on this topic, go to https://easiconsult.com/articles-archive/.
About the Author
David Smith, PhD, is the founder and CEO of E.A.S.I-Consult®. E.A.S.I-Consult works with Fortune 500 companies, government agencies, and mid-sized corporations to provide customized Talent Management solutions. E.A.S.I-Consult’s specialties include leadership assessment, online pre-employment testing, survey research, competency modeling, leadership development, executive coaching, 360-degree feedback, online structured interviews, and EEO hiring advisement. The company is a leader in the field of providing accurate information about people through professional assessment. To learn more about E.A.S.I-Consult, visit www.easiconsult.com, email ContactUs@easiconsult.com or call 800.922.EASI.