Examining the Ambitions, Compliance Challenges, and Long-Term Implications of the New York AI Act
The New York AI Act (Local Law 144) took effect in January 2023, with enforcement beginning in July of that year, and it represents a landmark attempt to regulate AI-driven hiring tools. Targeting Automated Employment Decision Tools (AEDTs), the legislation aims to promote transparency, fairness, and accountability in recruitment practices that use AI. Nearly a year later, however, the law’s impact has been minimal: compliance rates remain low, and many companies sidestep its demands, aided by its narrow scope and weak enforcement. This disconnect between the Act’s goals and its practical outcomes underscores broader problems in AI regulation.
This article explores the intentions of the New York AI Act, the compliance obstacles businesses face, and what these challenges signal for the future of AI-based hiring.
Limitations of the New York AI Act
The New York AI Act was designed with ambitious goals: it requires companies that use AEDTs to conduct annual bias audits, publish the results, and notify candidates when AI plays a part in hiring decisions. Yet a recent Cornell University study found that only 18 of 391 surveyed NYC employers had published their bias audit results, a serious compliance gap.
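For context on what such an audit actually computes, the DCWP’s rules center on selection rates and impact ratios across demographic categories. The sketch below illustrates that calculation on invented data; the category names and outcomes are hypothetical, and a real audit must cover race/ethnicity and sex categories, including intersectional ones, over historical data.

```python
from collections import defaultdict

# Each record pairs a demographic category with whether the AEDT
# advanced the candidate. Categories and outcomes are hypothetical.
outcomes = [
    ("Group A", True), ("Group A", True), ("Group A", False),
    ("Group B", True), ("Group B", False), ("Group B", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for category, was_selected in outcomes:
    total[category] += 1
    selected[category] += was_selected  # bool counts as 0 or 1

# Selection rate per category, then the impact ratio: each category's
# rate divided by the rate of the most-selected category.
rates = {c: selected[c] / total[c] for c in total}
best = max(rates.values())
for category, rate in sorted(rates.items()):
    print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

The red flag an audit is meant to surface is an impact ratio well below 1.0, meaning one group is selected at a fraction of the rate of the most-selected group.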
AI expert Hilke Schellmann describes the Act as “lacking enforcement power,” noting that without effective oversight many companies simply do not meet its standards. The low compliance rate shows that while the Act aims to promote transparency, it has yet to compel businesses to make meaningful changes.
Narrow Scope of AEDTs
The New York AI Act was initially intended to cover a broad range of hiring tools but was revised during rulemaking to apply only to fully automated tools with no meaningful human involvement in decision-making. This narrow definition lets companies add some level of human review and thereby exempt themselves from the Act’s requirements.
According to Amanda Blair, an attorney at Fisher Phillips, this limited scope was a point of contention: a broader definition was considered overly burdensome for businesses. The New York City Department of Consumer and Worker Protection (DCWP), which oversees enforcement, opted for the narrow scope to ease compliance, a choice that has ultimately weakened the Act’s effectiveness.
Limited Awareness Among Job Candidates
Although the law requires companies to inform candidates when AI is used in hiring, experts note that many applicants remain unaware. This lack of awareness means candidates rarely file complaints, which in turn hampers enforcement. A city spokesperson said the DCWP has received few complaints, suggesting that candidates may not fully understand their rights under the Act.
Schellmann states, “If candidates don’t know AI is being used, they can’t raise concerns. Without companies posting audit results, job seekers are left in the dark.”
Compliance Challenges for Businesses
While some companies are working to meet the Act’s requirements, many find compliance challenging due to technical complexities and reputational concerns.
Ambiguity in AI Auditing Standards
One significant challenge is the lack of standardized methods for conducting AI bias audits. Sethupathy, an AI regulatory expert, notes that “meaningful audits require more than basic metrics. Without a clear standard, it’s hard for businesses to ensure compliance while producing results that genuinely reflect system fairness.”
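One way to make Sethupathy’s point concrete: the impact ratio is a point estimate, and on the small applicant pools common in hiring it says little by itself. The sketch below, a minimal illustration using a Wilson score interval (my choice of method, not anything Local Law 144 prescribes), shows how the same headline rate can carry very different statistical weight; the counts are hypothetical.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a selection rate."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Two AEDTs with identical 0.50 selection rates but very different
# sample sizes: the headline metric is the same, the evidence is not.
for successes, n in [(2, 4), (200, 400)]:
    lo, hi = wilson_interval(successes, n)
    print(f"{successes}/{n} selected: rate 0.50, 95% CI ({lo:.2f}, {hi:.2f})")
```

At n = 4 the 95% interval spans roughly (0.15, 0.85), while at n = 400 it tightens to about (0.45, 0.55), so a “passing” ratio from a small sample may mean very little, which is precisely the kind of gap a standardized auditing methodology would need to address.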
Public Disclosure: Balancing Transparency and Risk
The Act’s requirement to publish audit results has also sparked debate, with many companies worried about potential reputational damage. Fines are modest, ranging from $500 to $1,500 per violation, with each day of continued non-compliance counted separately, so some businesses prefer to risk non-compliance over exposing themselves to public scrutiny.
Beyond New York: A Growing Trend in AI Regulation
Despite its limitations, the New York AI Act is influencing regulatory trends beyond the city. States such as New Jersey and California are considering similar measures, and the European Union’s forthcoming AI Act may set a global standard, with a stronger focus on governance, continuous monitoring, and internal controls than on public disclosure.
Sethupathy predicts future regulations will adopt a more proactive approach: “We’ll likely see a shift toward internal oversight and comprehensive accountability.” This trend mirrors the EU’s structured compliance framework, which emphasizes governance and monitoring without requiring businesses to disclose audit results publicly.
What’s Next for AI Compliance in Hiring?
While the New York AI Act is a pioneering step, it underscores the need for clearer definitions, stronger enforcement, and standardized auditing processes. Schellmann argues, “Transparency alone won’t address AI bias; it’s only one part of the solution.” As AI regulations evolve, companies committed to ethical hiring practices must adapt to these standards to remain competitive and compliant.
Conclusion
The New York AI Act set out to establish ethical standards in AI-driven hiring practices, but its limited enforcement power and narrow scope have hindered its impact. As the first legislation of its kind in the United States, it is still a crucial development, highlighting the need for robust frameworks that ensure AI accountability in hiring. Businesses that take proactive steps to align with these standards will be positioned as leaders in responsible AI use, supporting a broader industry shift toward transparency and fairness in recruitment practices.