Few technologies hold more promise than AI. Indeed, rightly used, AI has the potential to bring many benefits. However, in the middle of this hype, it is good to remember that wrongly used AI can also bring disaster. Every industry has its question marks regarding benefits versus potential risk factors, and neither side is as well understood as we would like it to be.
I narrow this article down to the recruiting industry, and more exactly to AI versus the Executive Search industry, because this is where I have made my career. I want to stress that I am talking here exclusively about C-level Searches. Some firms also recruit for entry-level and lower-level positions in general. Searching for a CEO for a billion-euro turnover company is different from searching for, e.g. a coder. The process, professional experience and expertise required in C-level Searches are different. Therefore, the question marks regarding the use of AI are also different in C-level Searches.
At the pro-AI end of the spectrum are, e.g. all the companies that develop and sell AI solutions to the Executive Search industry. Much of what you hear about "how fantastic AI is" comes from the marketing departments of these companies. It feels like they can only see good things coming out of using AI. When asked about potential AI risk factors, they often brush away the question marks people raise as theoretical ones - "Don't worry, we've got it under control" is the message. Well, I can hardly blame anyone for trying to sell their product. The responsibility for doing due diligence when buying a product lies with the buyer.
There are also pro-AI people within the Executive Search industry. Many are ardent AI supporters, praising AI and so excited about the potential benefits that they are sometimes "blind" to the risks. There are also people who always want to be first in line when new technology is implemented and are therefore pro-AI. That said, I know from experience that many Search professionals are not exactly IT or technology wizards, so their trust in AI is sometimes based more on strong belief than on fact-based knowledge. Maybe some also believe the AI-company salespeople more than I do. Nothing wrong with that.
At the other end of the spectrum are the Search professionals who, while not against AI, feel that there are real risks and want to move more slowly. We also have Search professionals with a different philosophy, who feel that AI should never "replace" an experienced top-level Search professional, particularly when talking about C-level Executive Search. There are also numerous thought leaders, specialists and scientists who have the knowledge, experience and perspective to understand what might happen if we get it wrong, and who therefore advise extreme care. Many consulting firms that have researched the subject, e.g. McKinsey, KPMG and PwC, say the same.
Algorithms have long been able to predict, with an impressive degree of accuracy, personal things about you using a very small number of your Facebook Likes. "Personal" here means gender, political affiliation, sexual orientation, religion and favourite intoxicants. (Microsoft and the University of Cambridge published research on this in 2013.) It is not news to anyone that advertising platforms like Google and Facebook collect personal information, but we have become so used to this that most people don't care. Maybe we should. It is 2020, and these surveillance and predictive abilities have developed tremendously since 2013.
During the Corona crisis, Google has released county-level reports in the USA about exactly how travel patterns have changed over the past few weeks. Google says that the data collected cannot be identified or connected to an individual. Yet I have noticed that I can barely walk out of a random shop before I get suggestions on my mobile phone about visiting similar shops. So they likely also know when you are at home and when you leave home. Theoretically, you can "turn this feature off", but for practical reasons few do.
This example alone should send shudders down anyone's spine. These algorithms are today made by humans and can also be used for purposes not in our best interest. This is likely already happening today. What if, at some point in the future, AI takes control and starts developing its own algorithms that are not in our best interest? Is there any risk that this could happen in algorithms related to behavioural analysis and predictive abilities in recruiting too?
One key argument in favour of AI is its ability to screen resumes without a hint of bias and to identify the best candidates in record time.
AI may well have these kinds of abilities. But the algorithms are human-made. Coders are also biased, so who decides the rules? What if all the coders creating the algorithm are Americans? Is it 100% certain that their American culture, life values and individual biases regarding, e.g. diversity, ethical values, race, gender and politics are not affecting their algorithm? Does this algorithm have the same reliability in all other cultures and countries where these parameters are very different? What if all the coders are Chinese, French, British or Italian? Does it matter that the AI application is in, e.g. English instead of a person's native tongue? Is there a country-based parameter in the algorithm correcting for cultural differences? I don't think so.
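To make the point concrete, here is a purely hypothetical sketch (the scoring rules, field names and candidates are my own invention, not any real product's logic) of how a resume-scoring function can look "objective" while quietly encoding one culture's assumptions:

```python
# Hypothetical illustration only: a toy resume-scoring rule whose
# "neutral" criteria reflect the assumptions of whoever wrote them.

def score_resume(resume: dict) -> int:
    """Score a candidate. Every rule below looks objective, but each
    one encodes a cultural assumption made by the coder."""
    score = 0
    # Assumes elite universities look the same everywhere.
    if resume.get("university") in {"Harvard", "Stanford", "MIT"}:
        score += 3
    # Penalises career gaps, ignoring e.g. the long parental leaves
    # that are the norm in some countries.
    if resume.get("career_gap_years", 0) > 1:
        score -= 2
    # Rewards self-promotional language, a style that varies by culture.
    if "award" in resume.get("summary", "").lower():
        score += 1
    return score

# Two comparable candidates from different cultures score very
# differently under these "objective" rules.
us_candidate = {"university": "Stanford", "career_gap_years": 0,
                "summary": "Award-winning sales leader"}
nordic_candidate = {"university": "Aalto", "career_gap_years": 2,
                    "summary": "Led sales growth of 40%"}
print(score_resume(us_candidate))      # 4
print(score_resume(nordic_candidate))  # -2
```

The toy rules are deliberately crude, but real systems differ only in degree: any threshold, keyword list or weighting is a choice someone made, and that choice travels across cultures unchanged unless someone explicitly corrects for it.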
As a human being, the recruiter is always biased. For example, recruiter A could hire a candidate that recruiter B would not even invite to a job interview. A lot of recruitment is done based on feelings, and the criteria are often quite arbitrary. (These are arguments often used by AI recruiting-software salespeople.)
The first sentence in the paragraph above is true by definition. The second sentence may well be true, but is this always wrong? Should we not have the right to choose whom we like working with? In my mind, chemistry and fit of professional expertise should be allowed to have some impact when recruiting someone we will work very closely with. The third sentence is a generalisation, insinuating that most people who recruit don't know or care what they are doing. "A lot" and "often" are words that do not fit every company or person. Many people do not hire purely based on feelings or on arbitrary criteria.
AI can undoubtedly bring benefits in screening, analysing and evaluating people in recruiting situations, but blind trust in AI is bound to create problems. The higher up in the organisation we go when recruiting, e.g. to the C-level, the more complex and demanding everything becomes. This applies to AI as well. I can spontaneously think of at least four question marks, and there are certainly more:
Despite all my doomsday warnings, I too believe that AI can help in screening, assessing and predicting human behaviour. AI can also bring significant benefits in Executive Search by processing hard facts, e.g. proactively sourcing and analysing the recruiting market potential, locating and processing information efficiently, and reducing time to hire. What I am saying is: don't just have blind faith in AI. Understand what you are doing, pay attention to the risks involved and tread cautiously.
If I were a senior C-level executive, who would I rather have as my business partner and trusted adviser?
Would I choose an experienced Search consultant with excellent business acumen - with whom I have developed a good, trustworthy personal relationship over the years - with whom I can talk about anything, including personal matters and business-world gossip - with whom I can go golfing, or even share a pint of beer? Or would I choose a robot that may know a lot of things but really could not care less about me - and can't even golf? I would choose the Search consultant any time of the day. Who would you choose?
If you found this article interesting, please also read my other article about AI on my blog, Artificial Intelligence (AI) in Top Level Executive Search, where I approach AI from a different point of view.