IRS turns to AI after gutting staff, and taxpayers are not impressed

After massive layoffs, the IRS is deploying AI to handle tax disputes and case work. The community is skeptical.

The Internal Revenue Service is taking an unusual step to cope with its depleted workforce: it's bringing in artificial intelligence. According to recent reporting, the agency has partnered with Salesforce to deploy AI agents across multiple divisions, including the Office of Chief Counsel, the Taxpayer Advocate Service, and the Office of Appeals. These are precisely the kinds of roles where taxpayers might expect to interact with experienced human professionals who understand the nuances of tax law and individual circumstances.

The timing is striking. The IRS has lost roughly one-third of its tax auditors compared to 2024 levels, following aggressive budget cuts earlier this year. The agency's workforce has been decimated by layoffs and furloughs, leaving it struggling to handle the volume of work that keeps the tax system functioning. Salesforce says the AI agents will handle tasks like summarizing cases and searching documents, work intended to supplement human employees rather than replace them. But observers are asking a reasonable question: if the agency is already short-staffed, how much actual human review will these AI recommendations receive?

Commenters across the internet have expressed deep skepticism about the arrangement. Many worry that AI systems, prone to errors and hallucinations, could bungle tax cases or favor certain taxpayers over others. There's particular concern about whether the technology might inadvertently protect wealthy individuals from scrutiny while ordinary filers face algorithmic decisions.

The sarcasm has been sharp, with some suggesting the AI will be programmed to go easy on the rich. Others point out the irony: the government cut human auditors who could recover significant tax revenue, and now it's betting on machines to do work that requires judgment and accountability.

Salesforce's executives have been careful to note they don't advocate for AI to process tax returns without human oversight. But they've also acknowledged that how the IRS deploys the technology is ultimately the agency's call. That leaves an uncomfortable question hanging in the air. With fewer people on staff and more work piling up, will the human review actually happen, or will AI recommendations increasingly become the final word? This kind of workforce reshuffling is already visible in the private sector, where companies such as IBM are pursuing an AI-driven workforce transformation that blends layoffs with hiring in new roles, a trend explored in more detail in analyses of IBM's AI-led staffing changes.

The math here is worth considering. Studies suggest that every $1 spent auditing top earners returns roughly $26 in recovered taxes. The IRS estimated it could collect an additional $561 billion over the next decade with full funding. Instead, the agency is being asked to do more with less, relying on technology that's still unproven in this context. The move also fits into a broader pattern of ambitious federal data and automation initiatives, similar to the expansion of biometric programs described in reports on US government biometric data collection. Whether this latest gamble pays off remains to be seen.
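For readers who want the arithmetic spelled out, here is a minimal back-of-envelope sketch. The 26:1 return and the $561 billion estimate come from the figures above; the size of the audit budget cut is a hypothetical input chosen purely for illustration, not a number from the reporting.

```python
# Back-of-envelope estimate of revenue forgone when audit funding is cut.
# ROI and the decade estimate are from the figures cited in the article;
# the budget-cut amount below is an assumed, illustrative value.

ROI_PER_AUDIT_DOLLAR = 26          # ~$26 recovered per $1 spent auditing top earners
FULL_FUNDING_DECADE_GAIN = 561e9   # IRS estimate: extra $561B over 10 years

hypothetical_audit_cut = 2e9       # assumed $2B/year cut to audit spending (illustrative)

forgone_per_year = hypothetical_audit_cut * ROI_PER_AUDIT_DOLLAR
forgone_per_decade = forgone_per_year * 10

print(f"Forgone revenue per year:   ${forgone_per_year / 1e9:,.0f}B")
print(f"Forgone revenue per decade: ${forgone_per_decade / 1e9:,.0f}B")
print(f"Share of full-funding gain: {forgone_per_decade / FULL_FUNDING_DECADE_GAIN:.0%}")
```

Under that assumed $2 billion annual cut, the sketch yields roughly $52 billion in forgone revenue per year, which over a decade approaches the agency's own full-funding estimate.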

What This Means for People Learning AI

This IRS deployment offers a cautionary case study for AI learners and practitioners. The scenario illustrates critical challenges in real-world AI implementation: the gap between AI capabilities and organizational readiness, the risks of deploying systems without adequate human oversight infrastructure, and the ethical complexities of AI decision-making in high-stakes domains. It sits alongside broader introductions to different types of AI systems and real-world AI applications, but highlights a darker side of what happens when those systems are dropped into under-resourced institutions.

For those studying AI, this case demonstrates why understanding not just model performance but also organizational context, accountability frameworks, and failure modes is essential. Legal battles over training data, like the landmark court decision on training models as fair use covered in discussions of AI and copyright law, show that technical work never exists in a vacuum. Learners who want to build practical skills can explore structured AI project ideas or even guided events such as AI-focused learning weeks, but they should also pay attention to governance and public trust.

The same tensions are emerging in consumer tech as well, where Microsoft's AI-heavy Windows roadmap has raised similar questions about control and consent, issues examined in coverage of how Windows turns 40 while its AI future grows less certain.

By Brian Dantonio

Brian Dantonio (he/him) is a news reporter covering tech, accounting, and finance. His work has appeared on hackr.io, Spreadsheet Point, and elsewhere.

