Originally published on 5 April, 2022.
AI carries a promise of reducing human biases in HR processes while also making them faster, more consistent, and higher quality. However, a growing number of recruiters are worried that AI could amplify past human biases and limit people’s job opportunities.
The examples of HR AI introduced by Amazon, Hirevue, and Uber remind us that, when applied without proper consideration of its impacts, AI-driven automation of HR can also create serious risks for job seekers, employees, and the brands using AI.
By reading this article, you’ll get a better understanding of:
- What the HR AI landscape looks like
- What regulators and companies are doing to make the use of HR AI more ethical
- How companies source HR AI software
- What recruiters and HR AI providers can do to use AI more responsibly
The possibilities of AI tools in HR are endless.
In the past two years, the human resource technology market has seen growing investment volumes and investor interest. In fact, several startups have reportedly hit unicorn status using AI-based technologies to drive next-generation recruiting.
There are already over 250 different commercial AI-based HR tools on the market, according to CB Insights. AI is routinely used in HR processes ranging from advertising job openings and collecting candidate data to screening, profiling, and predicting worker performance.
Also, as ESG keeps gaining attention in boardrooms, more recruiters are also turning to AI to drive enterprise diversity initiatives.
How are regulators and the industry responding to the increasing use of AI in HR?
The EU’s upcoming AI Act introduces new quality management and transparency requirements for providers and users of HR AI. The requirements aim to protect people from potential adverse effects on their fundamental rights.
In New York, the new automated hiring law requires a bias audit of any automated employment decision tool before recruiters can use it. It also requires that candidates and employees be notified when such tools are used to assess or evaluate them for hire or promotion.
The US Equal Employment Opportunity Commission (EEOC) recently launched its AI initiative to ensure that AI used in hiring and other employment decisions complies with existing federal civil rights laws.
Unsurprisingly, the EU AI Act classifies recruitment and worker management as high-risk applications of AI, alongside AI applied in health, education, law enforcement, and various other public services.
Not only are regulators awake; the risks are also well recognised by companies across industries. In the US, a group of America’s largest recruiters including CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta, Nike and Walmart formed the Data & Trust Alliance to adopt responsible data and AI practices.
Positioning itself as a “do tank”, the group promises to develop and share practices that can be adopted across industries to facilitate faster learning. The group’s first focus, Algorithmic Bias Safeguards for Workforce, underlines the importance of employment-related AI risks for companies and the urgency to act.
How do enterprises source HR AI systems?
According to a European enterprise survey, procurement is the most common way European enterprises source AI-based systems: only 20% of enterprises develop AI fully in-house, while 60% report buying third-party software instead.
For HR, the share of third-party sourcing is arguably much higher. The US alliance of major recruiters goes further, stating that most of the algorithmic systems used to support workforce decisions are introduced and maintained by third-party vendors, from software providers, professional networking sites, and consultants to recruiting firms.
Considering these sourcing strategies, in HR, responsible AI is first and foremost about responsible deployment and use of third-party technologies.
So, HR professionals must start asking AI vendors questions that allow the vendors to be held accountable for the technology they supply. Informed by such transparency, recruiters can use AI-based products responsibly or, when needed, decide not to adopt products that conflict with the enterprise’s ethical principles.
The EU’s new AI regulation obliges AI technology providers to give users transparent instructions for use and to keep them updated throughout the system lifecycle. Ultimately, this is about delivering the information needed to establish accountability, which is exactly what HR AI requires.
Recruiters should demand greater transparency from their HR AI technology providers.
Building on the upcoming compliance requirements and emerging industry practices, we compiled a template for systematic transparency exchange between technology providers and recruiters. Here’s a summary of the key questions:
- What are the intended purposes and use contexts of your product, and what is the business value created?
- What is the accuracy of your product, and how do you ensure its consistent performance over time?
- What data have you used to train, test and operate your product? How have the datasets been examined in view of potential biases?
- What potential risks might your product cause, and how do you mitigate them?
- What measures have you put in place to test, detect, mitigate and monitor potential biases across the product lifecycle from design to data selection, to model training, to deployment and monitoring?
- Which tools and training do you provide to ensure that the people overseeing your product can understand its capabilities and limitations, monitor its operation, interpret its output, and, when needed, intervene or interrupt the system?
- What changes have you made, or plan to make, to your product, and how do those changes affect its behaviour?
- What is the expected lifetime of your product and the required maintenance measures to ensure it works properly?
- How are responsibilities divided for ensuring your product performs as intended?
- Has the product been audited? If so, by whom and how, and which actions have been taken to address the resulting recommendations?
HR AI technology providers must be prepared to provide transparent information about their AI products to their customers throughout the product lifecycle. By doing so, AI technology providers not only demonstrate proactivity in addressing ethical risks, such as AI bias, and regulatory requirements, but also establish a new feedback channel for monitoring their system and its use cases on the market.
Understanding the use of AI and the potential ethical problems observed by customers is vital in monitoring AI risks and driving product quality in an increasingly competitive AI marketplace.
As third-party AI becomes a new source of risk for brands, procurement is the most critical process where the alignment of AI with ethical principles and regulations is operationalised in HR.
Saidot’s AI governance platform connects HR AI technology providers and recruiters around transparency. With our platform, HR AI technology providers can effortlessly maintain transparency about their AI products and reach enterprises with systematic transparency data from one place, in the comparable format preferred by major recruiters.