The employer is liable for AI’s discriminatory actions
The principle of equal treatment must be applied to recruitment, which means that any discrimination based on nationality or ethnicity, race, colour, religion or belief, age, disability or sexual orientation must be avoided.
It is clearly prohibited to program an algorithm so that discrimination is deliberately built in. At the same time, care must be taken that the program does not come to discriminate against candidates on its own. Because artificial intelligence is self-learning and relies on the data provided or otherwise available to it, its criteria and outcomes may shift over time. It is therefore always necessary to consider whether the data the program learns from actually helps it avoid incorrect or even unlawful decisions.
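To illustrate how such drift in outcomes could be monitored in practice, the sketch below compares selection rates between candidate groups and flags any group whose rate falls below the commonly cited four-fifths threshold of the best-performing group. The group labels, sample data, and threshold are hypothetical illustrations, not a prescribed legal test, and such a check complements rather than replaces human review.

```python
from collections import defaultdict

# Hypothetical recruitment decisions: (candidate_group, was_selected)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def selection_rates(records):
    """Selection rate (selected / total) for each group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest group's rate (the commonly cited four-fifths rule)."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

rates = selection_rates(decisions)
print(rates)                         # per-group selection rates
print(adverse_impact_flags(rates))   # groups flagged for human review
```

Running such a check periodically on the tool's decisions would show whether its self-learned criteria have begun to disadvantage a particular group, even though no discrimination was programmed in deliberately.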
Article 22 of the General Data Protection Regulation gives the data subject the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. This does not apply if, inter alia, the automated processing is necessary for entering into or performing a contract between the data subject and the controller, or is based on the data subject's explicit consent. However, neither the conclusion or performance of a contract nor the data subject's consent relieves the employer, as controller, of the obligation to take suitable measures to safeguard the rights of workers and candidates to equal treatment and to contest the decision.
The fact that technology has been used in recruitment does not relieve the prospective employer or the recruiter of liability for a breach of the principle of equal treatment.
Liisi Jürgen
Attorney at Law, Partner
Mobile: +372 525 2646
liisi.jurgen@njordlaw.ee