New Jersey’s Civil Rights and Technology Initiative

May 15, 2025
The Impact of Racist Algorithms and the Push for Equity in AI

New Jersey is taking bold steps to protect residents from algorithmic discrimination with a groundbreaking Civil Rights and Technology Initiative. As artificial intelligence (AI) becomes more common in hiring, housing, and healthcare, the state is making sure that civil rights laws keep up.

As a leader in New Jersey employment law, I want to help clients navigate this new digital frontier by shedding some light on this groundbreaking initiative.

About the Author

Ty Hyderally is a highly regarded employment law attorney and the owner of Hyderally & Associates, P.C., a firm with offices in Montclair, New Jersey, and New York City.

Recognized among the Top Ten Leaders of Employment Law in Northern New Jersey, Hyderally brings decades of experience litigating employment discrimination, whistleblower, retaliation, wage and hour, and contract-related claims.

His firm aggressively represents employees and provides proactive counsel to businesses on compliance, policy drafting, and workplace training.

A former president of the National Employment Lawyers Association (NJ), Hyderally is also a frequent lecturer and has served in national leadership roles within the American Bar Association’s Labor and Employment Litigation Section.

What Is the Civil Rights and Technology Initiative?

The Civil Rights and Technology Initiative is a statewide effort led by the New Jersey Division on Civil Rights (DCR) to address discrimination risks created by advanced technologies.

This includes AI systems, machine learning models, and other automated decision-making tools now used in employment, education, housing, credit, and healthcare. Launched under Attorney General Matthew Platkin, the initiative has two central pillars:

  • The Civil Rights Innovation Lab, which develops tools to support public outreach, complaint handling, and investigations
  • The Algorithmic Fairness Project, which works to prevent bias in how organizations use technology

These programs aim to ensure that innovation benefits everyone—especially those protected under New Jersey’s Law Against Discrimination (LAD).

What Is Algorithmic Discrimination?

Algorithmic discrimination refers to bias that occurs when an automated decision-making tool unfairly treats someone based on race, gender, disability, or another protected characteristic. These tools are often used to make high-stakes decisions, such as:

Employment:

  • Who sees job ads or gets selected for interviews
  • Employee performance evaluations and promotion decisions
  • Work scheduling and shift assignments
  • Workplace monitoring and productivity tracking

Housing and Real Estate:

  • Who gets approved for a mortgage or apartment
  • Tenant screening and rental application processing
  • Property valuation and pricing decisions
  • Insurance coverage and premium calculations

Healthcare:

  • What care a patient receives in a hospital
  • Treatment prioritization and resource allocation
  • Insurance approval and coverage decisions
  • Risk assessment and diagnostic recommendations

Education:

  • How students are monitored and disciplined
  • Academic performance evaluation and grading
  • College admission and scholarship decisions
  • Resource allocation and program placement

Financial Services:

  • Credit approval and loan decisions
  • Interest rate calculations and payment terms
  • Fraud detection and account monitoring
  • Investment recommendations and portfolio management

Retail and Customer Service:

  • Employee scheduling and task assignment
  • Customer service routing and response prioritization
  • Inventory management and product availability
  • Pricing algorithms and promotional targeting

While these systems may seem neutral, they often reflect the biases of the data they’re trained on. For example, an AI system might rank resumes lower for women or minorities simply because past hiring data favored white male applicants. These outcomes can be unintentional, but they still violate the LAD.

How Does the Law Against Discrimination Apply to AI?

New Jersey’s LAD applies to both human and automated acts of discrimination. That means if an AI system leads to unequal treatment in areas like employment or housing, the company using that tool can be held liable, even if it didn’t create the technology. According to the state’s 2025 guidance, a business can violate the LAD if it:

  • Uses a hiring algorithm that disproportionately screens out candidates based on race or gender
  • Relies on automated tenant screening tools that exclude applicants with disabilities
  • Deploys facial recognition tools that misidentify people of color

The law doesn’t require intent to discriminate. If the effect of using the tool is discriminatory, that may be enough to trigger legal consequences.

Why Is New Jersey Leading the Nation in AI Oversight?

New Jersey is one of the few states taking proactive steps to regulate AI through existing civil rights laws. While only a handful of states have passed AI-specific legislation, New Jersey has issued official guidance clarifying how the LAD applies to technology.

In January 2025, the Attorney General’s office released a detailed guide explaining how entities using AI tools in employment, housing, and healthcare must test those tools for fairness and avoid practices that result in disparate impact.

This forward-looking approach places New Jersey at the forefront of civil rights enforcement in the digital era, holding tech-enabled systems to the same standards as human actors.

How Can Employment Attorneys Help?

Employment law attorneys in New Jersey fight discrimination, and now that includes bias driven by artificial intelligence. As automated tools become more common in hiring and workplace management, employment attorneys help employees:

  • Challenge unfair terminations or hiring practices tied to algorithmic decisions
  • Understand how the LAD protects them from emerging forms of discrimination
  • Take action against employers using biased or opaque decision-making systems

With strategic legal guidance, employers and employees can stay ahead of technological shifts that affect civil rights in the workplace.

What Are Real-World Examples of Algorithmic Discrimination?

New Jersey’s guidance document highlights two recent studies that show how automated tools can reinforce bias:

Healthcare Bias

A 2019 study found that an AI tool used to prioritize patient care under-recommended treatment for Black patients because it relied on healthcare spending history, rather than actual health need, as a proxy for risk. Because Black patients historically had less access to care, they were assigned lower risk scores despite having the same medical needs.

Hiring Disparities

A 2024 study showed that resume screening algorithms favored candidates based on racially associated names. Hispanic and Asian women were favored for certain roles over equally qualified Black men or white men, purely because of statistical correlations in the training data. These tools weren’t designed to discriminate, but because they were trained on biased historical data, the outcomes were still unequal, and potentially illegal under the LAD.

What Should Employers and Tech Users Do?

If you’re a business or organization in New Jersey using AI tools, follow these steps:

Step-by-Step Employer Compliance Process:

  1. Audit your tools – Regularly test for disparate outcomes and document your findings.
  2. Ask vendors hard questions – Don’t assume a third-party product is fair just because it’s popular.
  3. Train staff – Make sure HR, compliance, and tech teams understand civil rights requirements.
  4. Consult with legal counsel – Especially before implementing new AI systems in hiring or customer service.
  5. Establish ongoing monitoring – Create regular review processes for all automated decision-making tools.
  6. Document everything – Maintain records of testing, validation procedures, and bias mitigation efforts.

Failure to take these steps could result in lawsuits, fines, or public enforcement actions.
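For technical teams, the audit step above can be sketched in code. One common screening heuristic is the EEOC’s "four-fifths" rule, which flags a tool for review when any group’s selection rate falls below 80% of the highest group’s rate. The data and group names below are hypothetical, and a real audit would pair this check with statistical testing and legal review, but the sketch shows the basic calculation:

```python
# Minimal sketch of an adverse-impact audit using the "four-fifths" rule.
# All numbers below are hypothetical; a real audit would use actual outcomes
# and statistical significance testing, not just this screening heuristic.

def selection_rate(selected, applicants):
    """Fraction of applicants from one group that the tool selected."""
    return selected / applicants

# Hypothetical outcomes from an automated resume screener.
outcomes = {
    "Group A": {"applicants": 200, "selected": 60},   # rate 0.30
    "Group B": {"applicants": 180, "selected": 27},   # rate 0.15
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
reference = max(rates.values())  # highest group's selection rate

for group, rate in rates.items():
    impact_ratio = rate / reference
    # Below 0.8 (four-fifths of the reference rate) warrants review.
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={impact_ratio:.2f} -> {flag}")
```

Here Group B’s impact ratio is 0.50, well under the 0.8 threshold, so this hypothetical tool would be flagged for further investigation and documentation.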

What Should You Do If You’ve Been Affected?

If you believe you’ve been subject to algorithmic discrimination, here’s your step-by-step action plan:

Steps to Take If You Experience AI Discrimination:

  1. Collect records of your interaction with the company or decision-making system
  2. Request an explanation for the decision, including whether an automated tool was used
  3. Document everything – Save emails, applications, and any communications about the decision
  4. Gather evidence of disparate treatment compared to similarly situated individuals
  5. File a complaint with the New Jersey Division on Civil Rights
  6. Contact an experienced attorney like Ty Hyderally to evaluate your legal options

You don’t need to prove that the discrimination was intentional, only that it occurred and affected you based on a protected trait.

Civil Rights Technology Initiative FAQs

What is algorithmic discrimination?

It’s discrimination that occurs when automated tools produce biased outcomes, especially in hiring, housing, and healthcare.

Are employers responsible for AI bias from third-party vendors?

Yes. Under the LAD, covered entities are liable for the tools they use, even if developed by another company.

Does New Jersey have specific AI laws?

No, but the state applies existing civil rights laws, especially the LAD, to regulate the use of AI technologies.

Why This Initiative Matters

New Jersey’s Civil Rights and Technology Initiative sets a new standard for how governments can protect their residents in a tech-driven world. By extending longstanding civil rights protections to include AI and automated tools, the state is ensuring fairness isn’t left behind in the rush to innovate.

If you think AI has been used unfairly against you in employment or housing, don’t wait. Reach out to a trusted voice in New Jersey employment law for guidance and support.

Resources:

https://www.reuters.com/legal/legalindustry/state-ags-fill-ai-regulatory-void-2025-05-19/

https://www.njoag.gov/wp-content/uploads/2024/07/WebScrapingRFI-792024.pdf

https://www.njoag.gov/about/divisions-and-offices/division-on-civil-rights-home/priorities/civil-rights-and-technology/

https://www.njoag.gov/wp-content/uploads/2025/01/2025-1-8-DCR-Guidance-on-Algorithimic-Discrimination-.pdf
