"ARTIFICIAL INTELLIGENCE IN THE WORKPLACE: ADDRESSING LEGAL GAPS AND SAFEGUARDING EMPLOYEE RIGHTS IN THE AGE OF AUTOMATION – A COMPARATIVE ANALYSIS WITH GLOBAL PERSPECTIVES"
AUTHORED BY - S. HARIINI SHRI & MADDIPATI SRI SESHAMAMBA
B.COM. LLB
Abstract:
Artificial Intelligence (AI) has rapidly transformed various sectors,
including the working environment. Initially introduced for basic
administrative tasks in the 2010s, AI has grown more sophisticated, playing
roles in recruitment, human resources, and automating job functions. By 2017,
AI was actively used for resume screening, chatbot-driven tasks, and employee
monitoring. While AI tools have streamlined processes, concerns arise around
privacy, discrimination, and bias, particularly in hiring and employee
surveillance. Legal frameworks globally are
evolving to address the challenges
posed by AI’s integration into the employment sector. This research
focuses on the insufficiency of current legal protections against AI-related
issues in the workplace, particularly within India. The available research
material so far addresses the legal and ethical challenges of using AI for employee
surveillance and performance evaluation, emphasising the need for
updated regulatory frameworks to ensure fairness and transparency. The integration of AI into employment law requires balancing innovation with the protection of workers' rights and privacy, making comprehensive legal frameworks that reconcile the benefits of AI with the safeguarding of workers' rights essential. This paper proposes regulatory measures to mitigate bias, ensure equitable treatment, and protect personal data in an increasingly automated workplace, assessing how AI impacts labour rights and privacy and drawing parallels with legal systems in countries such as the USA, the UK, Canada, and China.
Keywords: Artificial Intelligence (AI), Workplace, Employee Surveillance, Privacy, Discrimination, AI-driven Hiring, Legal Frameworks, Labor Rights, Data Protection, Employment Law, Legal Liability, Accountability,
Worker Protections
Introduction:
The rapid integration of artificial intelligence (AI) in India's
employment sector is transforming
traditional work practices but simultaneously giving rise to a multitude of
legal challenges. This paper investigates the current legal
framework governing AI in the workplace, identifying significant deficiencies in
existing laws that fail to adequately address issues such as algorithmic discrimination, data privacy, and transparency in AI decision-making. It argues for the urgent need for comprehensive
legislation that specifically addresses the unique complexities of AI
technology, including standards for
accountability, ethical usage, and employee protection.
Moreover, this research highlights the necessity for establishing formal
mechanisms for employees to report
grievances related to AI-driven processes, advocating for legal
recognition of these issues in labour disputes. A comparative analysis
of regulatory approaches in
jurisdictions such as the European
Union, the United States, and the United Kingdom provides valuable insights
into best practices, emphasizing the importance of proactive measures in
protecting workers' rights.
By synthesizing these findings, the paper aims to propose
a framework for developing a robust legal infrastructure that not
only mitigates risks associated with AI in employment but also fosters an environment conducive
to innovation and fairness. Ultimately, this research seeks to
inform policymakers and stakeholders on the imperative of crafting laws that
balance technological advancement with
the ethical and legal obligations to safeguard employees in an increasingly
automated workforce.
Literature Review:
1.
ANALOGY BETWEEN THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE AND EMPLOYMENT SECTOR - NATIONAL AND INTERNATIONAL ASPECTS by Bensha C Shaji (Assistant Professor of Law, Hindustan University, Chennai) and Angel Shaji (PhD Research Scholar, Christ Deemed to be University, Bangalore). The authors address the critical issues faced by individuals in India as they confront job losses and on-site disruptions, highlighting the negative impact on the economy of a developing nation. The paper emphasizes the absence of comprehensive laws governing the use of artificial intelligence (AI) in the employment sector. As automation and robotics increasingly threaten to displace human labour across various industries, uncertainty surrounding job security grows. The authors conduct a detailed examination of current employment laws, identifying significant deficiencies that fail to protect workers in the face of technological advancements.
2.
THE IMPACT OF ARTIFICIAL INTELLIGENCE
ON EMPLOYMENT LAW AND WORKER PROTECTIONS IN INDIA by Utkarsh
Upadhyay (Jamia Millia Islamia, New Delhi). This paper examines the impact of
artificial intelligence (AI) on employment law and workers’ protection. AI is
increasingly being used to automate routine tasks, which could lead to job
displacement in certain industries. The paper explores how AI may affect
employment law areas such as discrimination, wage and hour laws, and workplace safety. Additionally, the paper considers
the potential impact of
AI on worker protections,
including workers’ compensation and employee
benefits. The paper concludes that AI’s impact on employment law and workers’
protection is still evolving, and employers need to ensure that their AI
systems are designed and tested to avoid unintended consequences that may
negatively affect workers.
3.
THE IMPACT OF ARTIFICIAL INTELLIGENCE ON THE LABOUR
MARKET AND THE WORKPLACE: WHAT ROLE FOR SOCIAL DIALOGUE
by The Global Deal. This
article delves into the concerns
surrounding autonomous decision-making in the workplace, particularly in HR and
management processes. The use of AI
for monitoring and evaluating employees—whether in tracking or during
recruitment and performance assessments—poses significant ethical dilemmas.
These practices raise critical issues related
to excessive surveillance and the erosion
of fundamental workers’ rights. AI-driven decisions can
fundamentally alter the employer-employee relationship, introducing risks of
biased outcomes, discrimination, and potential violations
of data protection and human rights.
The paper emphasises the urgent need for legal frameworks that govern
these AI applications, ensuring that ethical considerations and workers' rights
are prioritised in an increasingly automated workplace.
4. THE LEGAL IMPLICATIONS OF ARTIFICIAL INTELLIGENCE IN THE WORKFORCE by Mritunjay Kumar (National University of Study and Research in Law, Ranchi). The paper discusses how, in employment law, the use of AI in hiring, monitoring, and performance evaluations necessitates an examination of privacy, discrimination, and fairness concerns. Intellectual property issues arise when protecting AI-generated inventions and creations, requiring exploration of patentability, copyright, and ownership rights. Labour and employment regulations must address the impact of AI on the workforce, including job displacement, retraining, and the need for new regulations to address AI-specific challenges. Liability and accountability considerations involve determining legal responsibility for harm or errors caused by AI systems, necessitating an understanding of their intersection with data protection laws.
5.
AI
IN LABOUR
RELATIONS: LEGAL IMPLICATIONS
AND ETHICAL CONCERNS by Priyanshu Sahu (RUAS School of Law). This discussion delves into key dimensions including Job Displacement, Worker Rights and Collective Bargaining, Discrimination and Privacy, Regulation and Protection, and New Job Opportunities and Education. Employers need to assess how AI will affect employee well-being and consider the societal ramifications of automation and job
displacement. Since AI algorithms can be complex and opaque, it can be
challenging to understand how decisions are made and who is accountable. In summary, there are opportunities and difficulties associated with incorporating AI into
labour relations. Artificial intelligence presents significant ethical and legal
issues that need to be carefully considered
and resolved, even though technology has the potential to increase productivity and
efficiency. In order to ensure that AI is applied in a
responsible and moral manner in the workplace, regulatory frameworks must be
modified. This will allow innovation to be combined with the defence of
societal values and employee rights.
Statement Of Research Problem:
There is insufficient legal recognition of issues caused by AI-driven systems used in the employment sector for recruitment and performance evaluation; gaps in existing legislation leave workers without AI-specific legal remedies.
There is a lack of legal cases related to artificial intelligence (AI) in
the employment sector, highlighting a critical gap in both recognition and
redress for affected workers. This absence of case law leaves employees without
clear avenues for challenging discriminatory practices or unfair treatment
arising from AI-driven decisions. Recognising and encouraging the filing of such cases would help create a body of case law, ensuring accountability and protection for workers in an increasingly automated environment and fostering a fairer, more just workplace.
Research Objective:
This research examines the inadequacy of existing legal protections against AI-related issues in
the workplace, with a particular focus on India. Current literature primarily
addresses the legal and ethical challenges associated with using AI for
employee surveillance and performance evaluation, highlighting the urgent need for
updated regulatory frameworks to promote fairness and transparency.
Integrating AI into employment law necessitates a careful
balance between fostering
innovation and safeguarding workers' rights and privacy.
There is a distinct requirement for comprehensive legal frameworks that reconcile the advantages of AI with
the necessity of protecting
employees. This study proposes
regulatory measures aimed at mitigating bias,
ensuring equitable treatment, and safeguarding personal
data in an increasingly
automated work environment by drawing
comparisons to legal
systems in countries
such as the USA, UK, Canada, and China.
Research Hypothesis:
The current legal statutes in place to safeguard individuals' rights against injury caused by the involvement of AI in the employment sector are insufficient.
Research Questions
Whether Involvement of AI in
the Employment Sector Causes Discrimination in Hiring and Evaluation?
Whether the Digital Personal Data Protection (DPDP) Act Sufficiently Safeguards the Rights of Individuals Against Potential Biases and Discrimination Arising from AI-Driven Decision-Making Processes in Employment?
Whether Involvement of AI in the Employment Sector Causes Job Erosion?
Research Method - Analysis
And Interpretation:
This study is doctrinal in nature, focusing on qualitative analysis of legal texts, statutes, relevant literature, and case law to understand the legal and ethical concerns arising from the use of AI in the employment sector, and to demonstrate the need for a legal framework specifically governing AI.
This methodology involves a systematic examination of literature and their interpretations
to assess issues related to biases, discrimination, and job security. Primary
sources of examination are relevant statutes such as the Industrial Disputes
Act, the IT Act, the Equal Remuneration Act, and others, along with a review of case
laws. Secondary sources are research papers,
research articles, commentaries,
textbooks, scholarly articles etc.
The integration of artificial intelligence (AI) in the employment sector presents
various legal concerns that impact employees. The following is a gist of the same:
Legal Issues
Discrimination and Bias: AI systems can perpetuate biases
present in training data,
leading to discriminatory
hiring, promotions, or evaluations based on race, gender, or age.
Data Privacy Violations: Extensive data collection for AI applications
can infringe on employee privacy rights, especially if data is used without
informed consent.
Lack of Accountability: Difficulty in tracing
responsibility for decisions made by AI can lead
to challenges in holding organisations accountable for unfair practices.
Job Displacement: Current
labour laws may not adequately protect employees from job losses due to automation, leading to
disputes over retrenchment and compensation.
Insufficient Grievance Mechanisms: Existing frameworks may lack effective
channels for employees to contest AI-driven decisions or seek redress for
grievances related to AI use.
Transparency Deficiencies: Lack of legal requirements for organisations to disclose AI decision-making processes can lead to
opacity and mistrust among employees.
Intellectual Property Issues: Questions about ownership of AI-generated
work can arise, complicating employment agreements and potential disputes over
proprietary information.
Ethical Issues
Algorithmic Transparency: The use of opaque AI systems raises
ethical concerns regarding the fairness and understandability of decisions affecting
employees.
Informed Consent: Employees may not fully understand the implications of
consent agreements regarding their data, leading to ethical breaches in data
usage or might feel pressurized to give consent.
Monitoring and Surveillance: AI-driven monitoring systems can create a
culture of surveillance, infringing on employees' rights to privacy and
autonomy.
Workplace Inequality: The risk of exacerbating existing
inequalities through biased
AI systems raises ethical
questions about fairness and equity in employment practices.
Mental Health Impact: Constant surveillance and performance evaluations
by AI can contribute to stress and anxiety among
employees, raising ethical concerns
about well-being.
Lack of Employee Agency: The
reliance on AI for decision-making can diminish employees' sense of control and
agency in their roles, leading to ethical implications regarding worker dignity.
Social Responsibility:
Organisations have an ethical
obligation to ensure that AI
technologies are used responsibly and do not harm employees or create
unsafe work environments.
In India, the legal statutes and regulations listed below exist to safeguard employees against negative impacts in the employment sector. In the absence of laws specifically governing AI in employment, these statutes can be invoked to protect employees from potential risks posed by AI until a formal legal framework for the same is laid down.
1.
The Constitution of India, Article 14 Right to Equality
Violation:
If AI in employment decisions discriminates arbitrarily, it violates the right
to equality and non-discrimination enshrined in Article 14.
2.
The Constitution of India, Article 16 Equality of Opportunity in Public
Employment
Violation:
If AI discriminates based
on caste, religion, gender, etc., it could
violate Article 16 which guarantees equal opportunity in
public employment.
3.
The Information Technology Act, 2000: Section 43A
Violation:
AI systems handling sensitive personal data or information (SPDI) must follow
reasonable security practices. A violation can occur if AI is used improperly and personal data is compromised.
4.
The Equal Remuneration Act, 1976: Section
4
Violation:
If AI makes biased decisions that lead to unequal pay for men and women for the same work, it violates the Equal
Remuneration Act.
5.
The Rights of Persons
with Disabilities Act, 2016: Section
3
Violation:
If AI discriminates against persons with disabilities in employment, it
violates Section 3, which provides for equality and non-discrimination.
6.
The Industrial Disputes Act 1947
Section 25T (Prohibition of unfair labour
practices): If AI systems lead to unfair
labour practices such as wrongful termination or arbitrary decisions,
Section 25T may be invoked.
Section 25F: Requires that an employer
provide notice and compensation to employees before
retrenching them, ensuring job security in the face of automation.
7.
The
Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal)
Act, 2013: Section 3
Violation:
If AI algorithms or tools fail to protect women from sexual harassment or are used to manipulate reporting, it could violate Section 3 which
prohibits sexual harassment in the workplace.
8.
The Contract
Labour (Regulation and Abolition) Act, 1970: Section
10
Violation:
If AI tools make decisions about employing or terminating contract labour in
violation of labour regulations, companies could be held accountable under this
Act.
9.
The Payment
of Wages Act, 1936: Section
7
Violation: If AI systems
handling payroll make unauthorised deductions or delays in payments,
Section 7 of this Act, which regulates deductions from wages, might be
violated.
10.
The Maternity
Benefit Act, 1961: Section
12
Violation:
If AI systems discriminate against pregnant employees or deny them maternity
benefits, it would violate Section
12 of this Act, which guarantees protection during pregnancy and
maternity leave.
CASE LAWS:
K.K. Gautam v. State of U.P. and Ors: This case involved a challenge to the use of AI-powered facial recognition technology for attendance monitoring in government schools. The petitioner argued that using such technology violated students’ right to privacy
and autonomy. The court directed the state government to ensure that the
use of the technology was in
compliance with the Personal Data
Protection Bill, 2019, and other relevant laws.
State of Maharashtra v. Vijay
Tukaram Gomate: This case
involved a challenge to the use of AI in the
police department for predictive policing. The petitioner argued that the use
of such technology violated
privacy rights and could result in false arrests. The court held that the use of
predictive policing technology should be transparent and that the police should
have clear guidelines for its use and relied on the Maharashtra Industrial
Relations Act, 1946 (section 9: equal representation and fair treatment). Additionally, principles of natural justice
and relevant sections from the Constitution of India, particularly Articles 14 (right to equality) and 21 (right to
life and personal
liberty), were also referenced to underscore the need for fair administrative processes in
employment-related matters.
Anivar A Aravind v. Ministry of Home Affairs:
In this case, the petitioner challenged the use of
an AI-powered surveillance system by the Indian government,
arguing that it violated privacy rights. The court directed
the government to ensure that the use of the system was in compliance with the Personal Data Protection Bill,
2019, and other
relevant laws and that the data collected was only used for the purpose
for which it was collected.
The current legal framework in India is often inadequate to address the
complexities and challenges posed by the integration of artificial intelligence
(AI) in the employment sector. This insufficiency manifests in several key
areas:
1.
Generalisation of Existing
Labour Laws
Broad Provisions: Many labour laws, such as the Industrial Disputes Act, 1947, are formulated with traditional employment
practices in mind. They lack specificity concerning AI-driven processes, which
can result in outdated interpretations when applied to modern work environments.
For instance, terms like "retrenchment" do not account for automated
layoffs that might occur without traditional notification or compensation
mechanisms.
Limited Applicability: Provisions regarding unfair
labour practices, job security, and wages do not explicitly address scenarios where
AI systems might make decisions about hiring, promotions, or terminations,
leading to potential legal ambiguities.
2.
Data Protection and Privacy Gaps
Inadequate Data Protections: Although
the Digital Personal
Data Protection (DPDP)
Act seeks to safeguard personal
data, it does not provide
robust protections against
biases that may arise from the use of AI in employment.
The Act lacks specific guidelines for handling data that informs AI algorithms,
potentially allowing discriminatory practices to go unchecked.
Consent
Issues: The requirement for informed consent under the DPDP Act is crucial, but
in employment scenarios, power dynamics may pressure employees into consenting without
fully understanding the implications. This can result in ethical
breaches and legal challenges.
3.
Insufficient Mechanisms for Accountability
Transparency
Deficiencies: There are no explicit legal requirements for organizations to
disclose how AI systems function or how decisions are made. This lack of
transparency can prevent employees from understanding the basis for adverse decisions impacting their careers, leading to feelings of injustice and potential legal disputes.
No Requirement for Algorithmic Audits: Current laws do not mandate regular audits of AI systems to assess their fairness or identify biases. Without these audits, organizations may unintentionally perpetuate discrimination, leaving employees without recourse.
4.
Limited Employee
Rights and Protections
Weak
Grievance Redressal Mechanisms: The existing legal framework lacks robust
mechanisms for employees to challenge AI-driven decisions. While the DPDP Act
provides rights to access and correction, these rights may not be practical
when applied to automated systems that lack transparency.
Inadequate
Legal Recourse: Employees may find it challenging to seek justice in cases of AI- related discrimination. Existing laws do not
provide clear pathways for addressing
grievances related to biases in AI decision-making processes, leaving individuals with limited options.
5.
Challenges with Job Security
and Automation
Job
Displacement Provisions: The existing framework offers limited protections
against job displacement due to AI. While the Industrial Disputes Act requires
notice and compensation for layoffs, these provisions may not adequately
address the nuances of job loss due to automation, which can occur without
traditional layoffs.
No Provisions for Reskilling: Current labour laws do not mandate employers to provide training or reskilling opportunities for employees whose jobs may be affected
by AI. This oversight can lead to a workforce that is unprepared
for the evolving job market.
6.
Absence of Ethical Guidelines
Lack of Ethical Standards: There are no comprehensive legal
guidelines that address
the ethical implications of AI
in employment, such as fairness and accountability. This absence allows
organizations to deploy AI systems without considering their potential impact
on employees' rights and well-being.
Neglect
of Social Responsibility: The
current legal landscape does not sufficiently encourage organizations to adopt
responsible AI practices. Without legal incentives, companies may prioritize
efficiency over ethical considerations, exacerbating risks to employee rights.
7.
Dynamic Nature
of Technology
Lagging
Regulations: As AI technology evolves rapidly, existing statutes are often
outdated, failing to keep pace with innovations in the workplace. This disconnect can lead to regulatory
gaps that fail to protect employees effectively.
Reactive Rather
Than Proactive: The current legal framework tends to be reactive, responding to issues only after they arise,
rather than anticipating challenges posed by AI
integration. This approach can leave
employees vulnerable to exploitation and unfair practices.
The
insufficiencies of current legal statutes in India to govern AI in the
employment sector present significant risks to employees. Existing laws lack
specificity, accountability, and the ability
to adapt to rapidly changing
technologies. They are merely reactive and not proactive. The need for
a formal legal framework that addresses these gaps is urgent. Such a framework should incorporate tailored
regulations, robust accountability mechanisms, enhanced employee rights,
and ethical standards to ensure that AI technologies are deployed responsibly and justly in the workplace.
Whether Involvement of AI in the Employment Sector Causes
Discrimination in Hiring and Evaluation?
The integration of artificial intelligence (AI) into the employment
sector has transformed traditional hiring and evaluation processes. While AI
has the potential to enhance efficiency and objectivity, concerns about
discrimination and bias are increasingly prevalent.
Understanding AI in Hiring and Evaluation
· Functionality of AI Systems: AI tools used in recruitment often analyze vast datasets to
identify patterns that may indicate a candidate's suitability for a position.
These systems may include resume screening software, interview analysis tools,
and performance evaluation algorithms.
· Data-Driven Decision-Making: AI systems rely on historical data to inform their decisions.
This data-driven approach can streamline processes, but it also raises concerns
about the quality and biases of the input data.
Mechanisms Leading to Discrimination
· Historical Bias in Training Data: AI algorithms are typically trained
on historical hiring data, which may reflect existing
biases in the recruitment process. If past hiring decisions favoured certain
demographic groups (e.g., based on race, gender, or education), the AI may
inadvertently learn and perpetuate these biases in its recommendations.
· Feature Selection and Algorithm Design: The way AI systems are designed can introduce
bias. For example, if certain features (like educational background or work
experience) are prioritized in an algorithm, it may disadvantage candidates from diverse backgrounds who may not have had the same opportunities.
· Natural Language Processing (NLP) Bias: AI tools that utilize NLP to analyze
language in resumes or interviews can inadvertently favor candidates who communicate in ways
that align with prevailing cultural norms. This can disadvantage individuals from different linguistic or
cultural backgrounds.
· Feedback Loops: AI systems that learn from their outputs can create feedback loops. If an AI consistently selects candidates
from similar backgrounds, it may reinforce existing biases over time, making it
more difficult for diverse candidates to be considered in the future.
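The bias-inheritance mechanism described above can be sketched in a few lines of Python. The data and the naive "model" below are hypothetical illustrations constructed for this paper, not any real hiring system: two equally qualified groups differ only in their historical hire rates, and a model that learns from that history reproduces the disparity.

```python
# Hypothetical historical hiring records: (group, qualified, hired).
# Groups A and B are equally qualified, but past decisions favoured A.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 20 + [("B", True, False)] * 80
)

def train(records):
    """Learn the historical hire rate per group -- the 'pattern' the AI finds."""
    rates = {}
    for group in {g for g, _, _ in records}:
        hires = [hired for g, _, hired in records if g == group]
        rates[group] = sum(hires) / len(hires)
    return rates

def predict(rates, group, threshold=0.5):
    """Recommend a candidate only if their group's historical rate clears the bar."""
    return rates[group] >= threshold

model = train(history)
print(sorted(model.items()))   # [('A', 0.8), ('B', 0.2)]
print(predict(model, "A"))     # True  -> equally qualified candidate recommended
print(predict(model, "B"))     # False -> equally qualified candidate rejected
```

Although deliberately simplistic, the sketch shows why "the algorithm decided" is not a neutral defence: the discriminatory pattern enters through the training data, before any decision rule is applied.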
Implications of Discrimination
· Inequality in Opportunities: Discriminatory AI systems can perpetuate and
exacerbate existing inequalities in the labour market. Groups that are already
underrepresented may find it even
more challenging to secure employment or advancement, leading
to a lack of diversity in the workplace.
· Legal and Ethical Consequences: Organizations using biased AI tools may face legal
repercussions under anti-discrimination laws. Ethical concerns arise as well,
particularly if companies prioritize efficiency over equitable hiring practices.
· Impact on Company Culture: A lack of diversity stemming from biased hiring
practices can negatively impact company culture, innovation, and
employee morale. Diverse teams are often linked to improved problem-solving and
creativity.
The involvement of AI in the employment sector clearly poses risks of
discrimination and bias that must be addressed proactively. Organizations must
implement strategies to mitigate bias, ensuring that AI systems promote
fairness and equity in hiring and evaluation.
Whether the
Digital Personal Data Protection
(DPDP) Act Sufficiently Safeguards the Rights
of Individuals Against Potential Biases and Discrimination Arising from AI-Driven Decision-Making Processes in Employment?
The DPDP Act is designed to protect individuals' personal data, regulate
data processing by organizations, and ensure
data subject rights.
It aims to create a balance between data protection and the growth of the digital economy.
Mechanisms Addressing Bias and Discrimination:
· Consent Mechanisms: Section 6: Processing of Personal Data
This
section stipulates that personal data can
only be processed if the individual has
provided explicit consent. It outlines the requirements for obtaining consent,
ensuring that individuals are informed about the purpose of data processing and
that consent is freely given. This is crucial in employment contexts, where
candidates should be informed about how their data will be used, especially in
AI systems that might produce biased outcomes.
· Data Minimization: Section 5: Purpose Limitation and Data Minimization
This
section emphasizes that data processing must be limited to what is necessary
for the purposes for which it
is processed. It mandates that organizations should
only collect personal data that is relevant and necessary for their stated purpose, thereby
minimizing the amount of data collected. This can help mitigate the risk of
discrimination by limiting the data points that
could lead to biased AI outcomes.
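The purpose-limitation principle above can be illustrated with a short Python sketch. The field names and the applicant record are illustrative assumptions, not drawn from the Act itself: the idea is simply that fields unnecessary for the stated purpose (and likely to drive biased outcomes) are stripped before any AI processing.

```python
# Hypothetical sketch of Section 5-style data minimisation: retain only the
# fields necessary for the stated processing purpose, so attributes such as
# age or gender never reach the screening model.
NECESSARY_FIELDS = {"skills", "years_experience", "certifications"}

def minimise(record, purpose_fields=NECESSARY_FIELDS):
    """Keep only the fields relevant to the stated processing purpose."""
    return {k: v for k, v in record.items() if k in purpose_fields}

applicant = {
    "name": "A. Candidate",
    "age": 42,
    "gender": "F",
    "skills": ["accounting", "audit"],
    "years_experience": 7,
    "certifications": ["CA"],
}

screened = minimise(applicant)
print(screened)
# {'skills': ['accounting', 'audit'], 'years_experience': 7, 'certifications': ['CA']}
```

In practice, deciding which fields are "necessary" is itself a legal judgment the Act leaves to the data fiduciary; the sketch only shows the mechanical effect of such a decision once made.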
· Rights to Data Access and Correction: Section 12: Rights of Data Principals
This section outlines the rights of individuals concerning their personal data: individuals have the right to request access to their personal data held by data fiduciaries and to request corrections if that data is inaccurate or incomplete. This provision can empower employees and job applicants to identify and contest any biased data or incorrect information used in AI-driven decisions.
Potential Shortcomings in Safeguarding Rights
· Lack of Specific Provisions for AI Bias: The DPDP Act does not explicitly address
biases in AI systems or mandate organizations to conduct bias assessments. This
gap can leave individuals vulnerable to discrimination, as AI systems
may operate without accountability for their outputs.
· Insufficient Clarity on “Automated Decisions”: While the Act covers data processing, it lacks detailed
provisions regarding automated decision-making processes. The absence of a
clear framework for transparency in AI-driven decisions can hinder individuals'
ability to understand how such decisions are made.
· Limited Enforcement Mechanisms: Under Section 3(c)(ii), the DPDP Act does not apply to data made publicly available by the “data principal” or by any other person legally obligated to make it public. Such data can be used to train AI models for performance evaluation and background checks, inviting invasions of privacy and biased assessments within the scope of employment.
· Over-reliance on Consent: While consent is vital,
the power dynamics in employment may pressure individuals into agreeing to data
processing practices without fully understanding the implications. This can
lead to situations where individuals unknowingly consent to biased
decision-making.
While
the Digital Personal Data Protection Act provides foundational protections for
individuals regarding their personal data, it requires enhancements to adequately
safeguard against biases and discrimination arising from AI-driven
decision-making processes in employment. The Act does not provide any specific remedy for AI-related injuries, leaving it vague with respect to the involvement of AI.
Whether Involvement of AI in the Employment Sector Causes Job Erosion?
The rise of artificial intelligence (AI) in the employment sector
has prompted significant debate regarding its impact on job security. While AI promises
increased efficiency and innovation, there are growing
concerns about job erosion, particularly for certain roles and
industries. This analysis explores
the mechanisms through
which AI might
contribute to job erosion, the sectors
most affected, implications for the workforce, and potential strategies for
adaptation.
Automation of Tasks: AI technologies automate various tasks, from routine
data entry to complex decision-making processes. This automation can lead to
job displacement, particularly for roles that involve repetitive tasks.
AI in Decision-Making: AI systems are increasingly used to assist or replace human decision-
making in areas such as hiring, performance evaluation, and operational
management, potentially reducing the need for human oversight.
Job Replacement: As AI technologies become capable of performing tasks traditionally done by humans, certain job roles may
become redundant. For example, chatbots can handle customer service inquiries,
diminishing the need for human customer
service representatives.
Skill Displacement: AI can change the skill requirements of various jobs.
Workers may find their existing skills obsolete,
leading to job erosion in sectors
that do not adapt or retrain
their workforce.
Efficiency Gains:
Organizations adopting AI often aim for greater efficiency, which can lead to
reduced staffing needs. As companies streamline operations, they may eliminate roles that are no longer deemed
essential.
Shifts in Job Functions: AI may transform job functions rather than
eliminate them entirely. Employees may find themselves shifted into roles that
require different skill sets, leading to job erosion in traditional areas.
Sectors Most Affected by Job Erosion
· Manufacturing: Historically, manufacturing has been a primary sector impacted by automation. Robots and AI systems can perform tasks like assembly and quality control, significantly reducing the need for human labour.
· Retail: The retail sector has seen a rise in AI-driven technologies, such as automated checkouts and inventory management systems, leading to a decrease in cashier and stockroom positions.
· Customer Service: The proliferation of chatbots and virtual assistants has transformed customer service. Many inquiries that once required human interaction can now be handled by AI, leading to a reduction in customer service roles.
· Administrative Roles: AI systems are increasingly capable of performing administrative tasks such as scheduling, data entry, and basic bookkeeping, potentially leading to job losses in these areas.
Implications of Job Erosion
· Economic Impact: Job erosion can lead to higher unemployment rates and economic instability, particularly in communities reliant on affected industries. This can exacerbate income inequality and reduce overall consumer spending.
· Worker Displacement: Displaced workers may struggle to find new employment opportunities, especially if they lack the skills needed for emerging roles. This displacement can lead to increased stress and reduced job satisfaction.
· Social Discontent: Widespread job erosion can contribute to social unrest and dissatisfaction with economic policies, as individuals feel threatened by technological advancements.
The involvement of AI in the employment sector thus raises significant concerns regarding job erosion. While AI offers the potential for increased efficiency and innovation, it also poses risks of job displacement and skill obsolescence, carrying a high likelihood of unemployment and a negative impact on the economy.
Suggestions:
China
The New Generation Artificial Intelligence Development Plan (2017): This outlines China's strategic approach to AI,
emphasizing ethics and governance.
The AI Ethical Guidelines (2021): Issued by the Ministry of Science and Technology, these guidelines set principles for responsible AI development, stressing fairness, transparency, and accountability.
The Personal Information Protection Law (2021): While not exclusively about AI, this law emphasizes data protection and privacy,
impacting how AI systems handle personal data.
The Cybersecurity Law (2017): This law includes provisions relevant
to AI, focusing on data security and user rights.
1. Zhang v. Beijing Daxing District Labor Bureau (2014)
This case involved a dispute over wrongful termination linked to performance evaluations influenced by AI metrics, and it highlighted the need for human oversight in decisions influenced by AI. The court ruled in favour of the employee, stating that performance evaluations must adhere to fairness standards and not rely solely on automated assessments.
2. Wang v. Chongqing Huayi Group (2016)
The case dealt with discrimination in hiring practices based on automated screening processes. The court found that reliance on biased algorithms in hiring led to discrimination against specific demographic groups. This decision emphasised the importance of auditing AI systems to prevent discriminatory outcomes.
3. Liu v. Shenzhen Longgang District Labor Bureau (2018)
This case focused on the treatment of gig workers monitored by AI systems. The court ruled that AI systems used for monitoring must not infringe on workers' rights and privacy. The ruling highlighted the importance of balancing technological advancement with worker protections.
United States
Equal Employment Opportunity Commission (EEOC) Guidance: While there is no federal AI-specific employment law yet, the EEOC has issued guidance on AI in employment, warning that if AI systems result in discrimination (such as through biased algorithms), they would violate laws like Title VII of the Civil Rights Act, the ADA, and the ADEA.
Algorithmic Fairness: The EEOC focuses
on ensuring AI systems don’t create disparate impact (unintentional discrimination) when used in hiring or
evaluating employees.
AI Bill of Rights (Blueprint): Released by the White House, this is more
of a guideline than law, but it focuses on protecting workers from algorithmic
bias and ensuring fair treatment in
AI-related decision-making processes.
1. EEOC v. Amazon (AI in Hiring) (2021):
Amazon
faced scrutiny from the Equal Employment Opportunity Commission (EEOC) after it
was revealed that its AI-driven hiring systems disproportionately rejected candidates based on gender or racial bias. Amazon revised
its AI systems and made adjustments to ensure they were not discriminatory,
though specific case details were largely settled out of court, with no
formal ruling. This marked the first major government intervention into AI
hiring practices.
2. Lowe v. Axxiom (AI in Background Checks) (2018):
This
case involved a job applicant whose
employment offer was rescinded based on a
flawed background check conducted by an AI-driven system. The applicant alleged that the AI made incorrect
associations with criminal
records, violating the Fair Credit Reporting Act (FCRA). The court sided
with the applicant, highlighting the
risk of using AI without sufficient
human oversight, and Axxiom was required to pay damages.
3. Lopez v. Uber Technologies (2020) - Gig Worker Classification:
This
case centered on Uber’s use of AI-driven systems to control and monitor
drivers, with drivers claiming they should
be classified as employees due to how the system managed their work, including performance reviews and availability. Uber settled with plaintiffs, agreeing to pay millions
in compensation and to make changes in their AI management systems.
This case, along with others,
prompted re-examination of gig workers’ rights in the U.S., especially in California under Assembly Bill 5 (AB5).
European Union (EU)
AI Act (proposed): The EU is working
on the Artificial Intelligence Act, a comprehensive legal framework aimed at regulating AI across different
sectors, including employment. It categorizes
AI systems into four risk categories: unacceptable, high, limited, and minimal risk. High-risk systems (like AI used for
hiring, firing, or promotions) will be subject to strict regulations like transparency,
accountability, and the need for human oversight.
General Data Protection Regulation (GDPR): Under
Article 22 of the GDPR, people have
the right not to be subject to decisions based solely on automated processing,
including AI, if it has a significant impact on them (e.g., in employment).
United Kingdom
Equality Act 2010: Though not AI-specific, this act prevents
discrimination in employment, and if AI causes
direct or indirect discrimination based on protected
characteristics (e.g., race, gender, disability), it’s a
violation.
ICO Guidelines: The Information Commissioner’s Office in the UK has
released guidelines for organizations using AI in employment decisions,
stressing transparency, fairness, and the right to human intervention if an AI system makes significant decisions
about an employee’s job. Employers must ensure that AI systems
are designed to avoid bias,
and they must regularly
monitor these systems for discriminatory effects.
1. AI Surveillance in the Workplace (British Airways Case):
British
Airways faced criticism and complaints over its use of AI-powered surveillance
to monitor employee activity. While this case didn’t go to court, regulatory
bodies like the Information Commissioner’s Office (ICO)
were involved in ensuring compliance with GDPR and privacy
laws. The ICO reminded employers about their obligations under GDPR,
specifically around transparency and proportionality of employee monitoring. Employers were warned
to ensure AI surveillance respects privacy rights.
2. Deliveroo Rider Classification Case (2018):
Misclassification
of workers due to AI management systems, affecting employment rights. Deliveroo
riders, whose schedules and jobs were managed via an algorithm, sought to be
recognized as "workers"
with corresponding rights like minimum wage and holiday pay.
This involved AI-driven systems in the gig economy and how they classify
workers. The Central Arbitration Committee (CAC) ruled that Deliveroo riders
were not classified as "workers" due to their ability to reject jobs and
engage others in their place, though this has since been appealed and
re-evaluated.
3. AI Facial Recognition - Ed Bridges v South Wales Police (2020):
This case concerned AI facial recognition technology violating privacy rights and lacking proper oversight, with implications for the use of AI in workplace security. The landmark case involved the use of AI-powered facial recognition technology by South Wales Police, which scanned crowds for criminal matches. Ed Bridges claimed this was an unlawful breach of his privacy and human rights. The UK Court of Appeal ruled in Bridges' favour, stating that the police's use of facial recognition was unlawful, as it violated privacy and data protection regulations.
Canada
Artificial Intelligence and Data Act (AIDA) (proposed): Canada is drafting the AIDA as part of Bill C-27, which would regulate high-impact AI systems. This includes employment-related AI systems, ensuring they do not cause discrimination or violate worker rights.
PIPEDA (Personal Information Protection and Electronic Documents Act): Under
PIPEDA, AI systems that use personal
data in employment decisions need to respect
privacy and inform employees of the algorithms being
used.
How Foreign Laws and Their Insight Can Help India Implement New
Laws:
· Emphasis on Ethical AI Use: Their focus on fairness, transparency, and accountability in AI provides a framework for India to consider similar ethical principles in its legislation.
· Incorporation of Labour Rights: The alignment of AI usage with existing labour laws can guide India in ensuring that new AI regulations uphold and reinforce worker protections.
· Legal Precedents for Fair Practices: The landmark judgments in the above countries offer insights into how to address potential disputes arising from AI use in employment, advocating for fairness and oversight.
· Monitoring and Auditing Requirements: The necessity for regular audits of AI systems in China highlights the importance of accountability, which India can adopt to ensure compliance with anti-discrimination laws.
· Balancing Innovation with Protections: Their approach to ensuring that AI does not lead to worker exploitation can inspire India to create regulations that foster innovation while protecting employee rights.
· Emphasis on Risk Assessment: Canada's requirement for risk assessments of AI systems can guide India in establishing protocols to identify and mitigate potential biases and discrimination in AI applications.
· Framework for Transparency: The foreign focus on transparency and on informing employees about algorithmic processes can inspire Indian regulations to ensure that organisations disclose the criteria and data used by AI systems.
· Protection of Employee Rights: The USA's regulatory approach reinforces the importance of protecting worker rights in the face of AI-driven decisions, which can be a guiding principle for India in developing its own laws.
· Judicial Precedents: The landmark judgments across all these countries can serve as a reference for Indian courts when adjudicating disputes related to AI in employment, providing a legal foundation for addressing discrimination and bias.
· Promoting Fairness: Canada's initiatives to promote fairness in AI can encourage India to adopt similar principles, ensuring that its laws are aligned with international best practices.
We humbly recommend ensuring the effective regulation of AI-driven processes within the framework of the IT Act and the DPDP Act by including clear definitions of AI and its applications, and by establishing accountability and liability standards for AI-generated outcomes. Additionally, provisions for transparency and explainability should be mandated, allowing users to understand how AI systems make decisions. It is also necessary to make provisions against AI-driven recruitment bias, since Section 5 of the Equal Remuneration Act addresses recruitment bias based only on gender, whereas other grounds need to be addressed too. By integrating these elements, the revised legislation can create a balanced environment that fosters technological advancement while safeguarding user rights and public trust.
Scope and Limitation of Study:
This doctrinal research methodology facilitates a thorough examination of existing legal and ethical issues, focusing principally on discriminatory practices and job erosion caused by AI, along with its other drawbacks. It also analyses the protections available to employees under existing labour laws in the context of AI in the employment sector. By focusing on statutory interpretation, case law analysis, and interpretation of existing research material, this paper aims to contribute suggestive insights and actionable recommendations for improvement, in comparison with foreign laws. The limitations of the paper lie in its narrowed approach, owing to the lack of abundant literature, of a settled legal framework, and of case law specifically targeting AI, since the field has only recently begun to evolve. The rapidly changing nature of AI may also render certain findings time-sensitive, continuously shifting, or inapplicable in the near future.
Conclusion:
Undoubtedly, unregulated and unsupervised use of AI in the workplace raises grave ethical and legal concerns. Currently, India has no standalone employment legislation that addresses concerns regarding the use of AI at the workplace. In the absence of appropriate regulation, the entire ecosystem depends, unwittingly, on individual employers having adequate internal policies to address the relevant concerns arising out of the use of AI, which is a very unstable approach. Hence the need of the hour is to establish and implement a specific and accurate legal framework targeting the use of AI in the employment sector, to govern the issues it causes and to provide the needed remedy to victims.
References:
1. Cheng, Z. (2023, September 13). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Nature.com. https://www.nature.com/articles/s41599-023-02079-x
2. Centre for Internet and Society (CIS). (2019). Artificial intelligence and the Indian legal system. https://cis-india.org/internet-governance/blog/ai-and-the-indian-legal-system
3. Upadhyay, U. (n.d.). The Impact of Artificial Intelligence on Employment Law and Worker Protections in India. The Amikus Qriae. https://theamikusqriae.com/the-impact-of-artificial-intelligence-on-employment-law-and-worker-protections-in-india/
4. Rajat Sethi, D. B. (2023, July 5). Regulating Artificial Intelligence in India: Challenges and Considerations. Chambers.com. https://chambers.com/articles/regulating-artificial-intelligence-in-india-challenges-and-considerations
5. Sengupta, S. (2019). AI and the future of work in India. Economic and Political Weekly, 54(26-27), 56-62.
7. https://www.lawctopus.com/academike/ai-in-labour-relations-legal-implications-and-ethical-concerns/
8. https://theamikusqriae.com/the-legal-implication-of-artificial-intelligence-in-the-workforce/
9. https://theamikusqriae.com/the-impact-of-artificial-intelligence-on-employment-law-and-worker-protections-in-india/
10. https://www.theglobaldeal.com/news/The-impact-of-artificial-intelligence-on-the-labour-market-and-the-workplace.pdf
12. Binns, Reuben. "Fairness in Machine Learning: Lessons from Political Philosophy." In Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (FAT* 2018). This paper discusses the ethical implications of fairness in AI, relevant to employment.
13. Dastin, Jeffrey. "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters, 2018.
14. Raji, Inioluwa Deborah, and Joy Buolamwini. "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products."
15. "Artificial Intelligence in Employment: Legal and Ethical Issues." International Labour Organization (ILO), 2021.
16. López, Manuel, and Manuel Martínez. "AI and Employment Law: The Future of Work and the Legal Framework." Journal of Labor and Employment Law, 2021.
17. "Algorithmic Bias Detectable in AI Hiring Tools." Harvard Law Review, 2020. This article discusses legal issues arising from algorithmic bias in hiring practices.