New Jersey is taking bold steps to protect residents from algorithmic discrimination with a groundbreaking Civil Rights and Technology Initiative. As artificial intelligence (AI) becomes more common in hiring, housing, and healthcare, the state is making sure that civil rights laws keep up.
As a leader in New Jersey employment law, I want to help clients navigate this new digital frontier by shedding some light on the initiative and what it means for workers and employers.
Ty Hyderally is a highly regarded employment law attorney and the owner of Hyderally & Associates, P.C., a firm with offices in Montclair, New Jersey, and New York City.
Recognized among the Top Ten Leaders of Employment Law in Northern New Jersey, Hyderally brings decades of experience litigating employment discrimination, whistleblower, retaliation, wage and hour, and contract-related claims.
His firm aggressively represents employees and provides proactive counsel to businesses on compliance, policy drafting, and workplace training.
A former president of the National Employment Lawyers Association (NJ), Hyderally is also a frequent lecturer and has served in national leadership roles within the American Bar Association’s Labor and Employment Litigation Section.
The Civil Rights and Technology Initiative is a statewide effort led by the New Jersey Division on Civil Rights (DCR) to address discrimination risks created by advanced technologies.
This includes AI systems, machine learning models, and other automated decision-making tools now used in employment, education, housing, credit, and healthcare. Launched under Attorney General Matthew Platkin, the initiative rests on two central pillars: enforcing existing civil rights law against discriminatory uses of technology, and educating the public and covered entities on how those laws apply to automated tools.
These programs aim to ensure that innovation benefits everyone—especially those protected under New Jersey’s Law Against Discrimination (LAD).
Algorithmic discrimination refers to bias that occurs when an automated decision-making tool unfairly treats someone based on race, gender, disability, or another protected characteristic. These tools are often used to make high-stakes decisions, such as who gets hired or promoted, who is approved for housing or credit, and whose medical care is prioritized.
While these systems may seem neutral, they often reflect the biases of the data they’re trained on. For example, an AI system might rank resumes lower for women or minorities simply because past hiring data favored white male applicants. These outcomes can be unintentional, but they still violate the LAD.
New Jersey’s LAD applies to both human and automated acts of discrimination. That means if an AI system leads to unequal treatment in areas like employment or housing, the company using that tool can be held liable, even if it didn’t create the technology. According to the state’s 2025 guidance, a business can violate the LAD if an automated decision-making tool it deploys treats people differently, or disproportionately harms them, because of a protected characteristic.
The law doesn’t require intent to discriminate. If the effect of using the tool is discriminatory, that may be enough to trigger legal consequences.
New Jersey is one of the few states taking proactive steps to regulate AI through existing civil rights laws. While only a handful of states have passed AI-specific legislation, New Jersey has issued official guidance clarifying how the LAD applies to technology.
In January 2025, the Attorney General’s office released a detailed guide explaining how entities using AI tools in employment, housing, and healthcare must test those tools for fairness and avoid practices that result in disparate impact.
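The guidance does not prescribe a single statistical test, but one common heuristic auditors use to screen for disparate impact is the EEOC’s “four-fifths rule”: compare each group’s selection rate to the most-favored group’s rate and flag ratios below 0.8. A minimal sketch, using hypothetical numbers (this illustrates the general technique, not New Jersey’s legal standard):

```python
# Sketch of a disparate-impact screen using the EEOC's "four-fifths"
# heuristic. Numbers are hypothetical; a real audit would use actual
# applicant data and more rigorous statistical analysis.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants the tool selected."""
    return selected / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return group_rate / reference_rate

# Hypothetical screening results for two applicant groups.
group_a = selection_rate(selected=45, applicants=100)  # 0.45
group_b = selection_rate(selected=30, applicants=100)  # 0.30

ratio = impact_ratio(group_b, group_a)  # 0.30 / 0.45 ≈ 0.67
if ratio < 0.8:  # below four-fifths of the reference rate: flag for review
    print(f"Possible disparate impact: ratio = {ratio:.2f}")
```

A ratio below 0.8 does not by itself establish a violation, but it is the kind of red flag the guidance expects entities to investigate before continuing to rely on a tool.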
This forward-looking approach places New Jersey at the forefront of civil rights enforcement in the digital era, holding tech-enabled systems to the same standards as human actors.
Employment law attorneys in New Jersey fight discrimination, and now that includes bias driven by artificial intelligence. As automated tools become more common in hiring and workplace management, employment attorneys help employees identify, document, and challenge AI-driven bias.
With strategic legal guidance, employers and employees can stay ahead of technological shifts that affect civil rights in the workplace.
New Jersey’s guidance document highlights two recent studies that show how automated tools can reinforce bias:
A 2019 study found that an AI tool used to prioritize patient care under-recommended treatment for Black patients because it used past healthcare spending, rather than actual health need, as its measure of risk. Because Black patients historically had less access to care, they were assigned lower risk scores despite having the same medical needs.
A 2024 study showed that resume screening algorithms favored candidates based on racially associated names. Hispanic and Asian women were favored for certain roles over equally qualified Black men or white men, purely because of statistical correlations in the training data. These tools weren’t designed to discriminate, but because they were trained on biased historical data, the outcomes were still unequal and potentially illegal under the LAD.
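The proxy problem behind the first study can be boiled down to a toy example. All names and numbers below are hypothetical, not data from the study; the point is only that ranking on a proxy (spending) can invert a ranking on the thing that actually matters (need):

```python
# Toy illustration of the proxy problem: a tool that ranks patients by
# past healthcare spending instead of actual health need. All values
# here are hypothetical.

patients = [
    # (label, health_need_score, past_spending_usd)
    ("Patient A", 7, 12_000),  # historically good access to care
    ("Patient B", 9, 4_000),   # greater need, but less past access
]

# Ranking by the spending proxy puts Patient A first...
by_spending = sorted(patients, key=lambda p: p[2], reverse=True)
# ...even though ranking by actual need would put Patient B first.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print("Proxy ranking:", [p[0] for p in by_spending])
print("Need ranking: ", [p[0] for p in by_need])
```

Because historical access to care differed by race, the proxy quietly imports that disparity into every downstream decision, even though race never appears as an input.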
If you’re a business or organization in New Jersey using AI tools, take proactive steps now: audit the tools you rely on for disparate impact, document that testing, train the staff who act on the tools’ outputs, and consult counsel before deploying automated decision-making in high-stakes settings.
Failure to take these steps could result in lawsuits, fines, or public enforcement actions.
If you believe you’ve been subject to algorithmic discrimination, act promptly: document the decision and any communications about it, request information about how the decision was made, and consult an employment attorney.
You don’t need to prove that the discrimination was intentional, only that it occurred and affected you based on a protected trait.
What is algorithmic discrimination? It’s discrimination that occurs when automated tools produce biased outcomes, especially in hiring, housing, and healthcare.
Can a company be liable for AI tools it didn’t build? Yes. Under the LAD, covered entities are liable for the tools they use, even if developed by another company.
Does New Jersey have an AI-specific statute? No, but the state applies existing civil rights laws, especially the LAD, to regulate the use of AI technologies.
New Jersey’s Civil Rights and Technology Initiative sets a new standard for how governments can protect their residents in a tech-driven world. By extending longstanding civil rights protections to include AI and automated tools, the state is ensuring fairness isn’t left behind in the rush to innovate.
If you think AI has been used unfairly against you in employment or housing, don’t wait. Reach out to a trusted voice in New Jersey employment law for guidance and support.
Sources:
https://www.reuters.com/legal/legalindustry/state-ags-fill-ai-regulatory-void-2025-05-19/
https://www.njoag.gov/wp-content/uploads/2024/07/WebScrapingRFI-792024.pdf