EOIR PM 25-40: What Immigration Attorneys Need to Know About AI in Court Filings

EOIR's August 2025 policy memo warns immigration attorneys about AI hallucinations in court filings. Here's what it says, what it doesn't say, and what it means for your practice.

By Asylo

In August 2025, the Executive Office for Immigration Review issued Policy Memo PM 25-40 addressing AI-generated content in immigration court filings. The memo was prompted by reports of fabricated legal citations appearing in proceedings.

The bottom line: EOIR PM 25-40 warns that immigration attorneys who submit AI-generated hallucinations or errors face discipline, but it does not ban AI use. The standard is straightforward: verify everything, don't file boilerplate, and build verification into your workflow.

Here's what the memo says, what it doesn't say, and what it means for immigration attorneys using or considering AI tools.

What PM 25-40 says

The memo warns that practitioners who submit "hallucinated or erroneous AI-generated content" face discipline under multiple grounds:

8 C.F.R. § 1003.102(c): knowingly or with reckless disregard offering false evidence. This is the primary enforcement mechanism. If an AI tool fabricates a case citation and the attorney submits it without verification, the attorney is on the hook, not the AI vendor.

Ineffective assistance of counsel: if AI-generated errors in a filing harm the client's case, the attorney's competence is in question.

Conduct prejudicial to the administration of justice: a catch-all that gives immigration judges broad discretion.

Boilerplate filings: the memo specifically flags filings showing "little or no attention to the specific factual or legal issues applicable to a client's case" as grounds for discipline under § 1003.102(u). This is aimed directly at attorneys who use AI to generate generic filings without tailoring them to the client's situation.

The memo also empowers individual immigration judges to regulate AI use through standing orders and directs adjudicators to report suspected AI misuse to EOIR's Attorney Discipline Program and Anti-Fraud Program.

What it doesn't say

PM 25-40 is not a ban on AI. It doesn't prohibit AI use, doesn't require disclosure of AI assistance, and doesn't impose specific technical requirements on AI tools. It references ABA Formal Opinion 512 (July 2024) on attorney obligations when using AI, but doesn't go beyond the existing ethical framework.

The memo is a warning, not a regulation. It puts attorneys on notice that AI errors in filings will be treated the same as any other error. The attorney bears responsibility.

Why this matters now

The risk PM 25-40 addresses is real and documented. CLINIC's 2025 testing found ChatGPT fabricating case names as asylum authority. A UK tribunal referred a barrister to the Bar Standards Board for citing a nonexistent Court of Appeal judgment based on "ChatGPT research." A global database now tracks over 1,200 cases of AI-generated hallucinations in legal proceedings, with the rate accelerating.

The Stanford study on legal AI tools found that even purpose-built legal research platforms from LexisNexis and Thomson Reuters hallucinate 17-33% of the time. General-purpose tools like ChatGPT and Copilot, which most attorneys are actually using, perform worse.

For immigration attorneys specifically, the stakes are high. Asylum cases involve persecution claims where the evidence determines whether someone is returned to danger. An AI-fabricated BIA citation in a legal brief doesn't just fail the filing. It can fail the client.

What this means for your practice

PM 25-40 doesn't change what attorneys should already be doing; it reinforces it. If you're using AI tools in your practice, the memo points to three practical requirements:

Verify everything. Every AI-generated citation, case reference, and legal argument needs to be checked against the actual source. If the AI cites "Matter of [Name], [Volume] I&N Dec. [Page]," the attorney needs to confirm that decision exists and says what the AI claims it says.
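For firms that want to systematize this, even a simple script can pull every BIA-style citation out of a draft into a checklist for manual verification. The sketch below is illustrative only: it assumes citations follow the "Matter of Name, Volume I&N Dec. Page" pattern, and the sample draft text is a made-up fragment. Real filings also contain federal reporter and regulatory citations that would need their own patterns, and extraction is only the first step; a human still has to pull and read each decision.

```python
import re

# Illustrative pattern for BIA/AG precedent citations of the form
# "Matter of A-B-, 27 I&N Dec. 316". This is a simplified sketch;
# real drafts contain other citation formats this will not catch.
BIA_CITATION = re.compile(
    r"Matter of [A-Z][\w'.-]*(?:\s[A-Z][\w'.-]*)*,\s*"
    r"\d{1,2}\s*I&N Dec\.\s*\d{1,4}"
)

def citation_checklist(draft_text: str) -> list[str]:
    """Return each distinct BIA-style citation found in a draft,
    in order of first appearance, for manual verification against
    the actual published decision."""
    seen: list[str] = []
    for match in BIA_CITATION.finditer(draft_text):
        cite = match.group(0)
        if cite not in seen:
            seen.append(cite)
    return seen

# Hypothetical draft fragment for demonstration.
draft = (
    "Under Matter of A-B-, 27 I&N Dec. 316, the respondent argues... "
    "see also Matter of L-E-A-, 27 I&N Dec. 581."
)
for cite in citation_checklist(draft):
    # Each printed line is a citation to locate and read before filing.
    print(cite)
```

A script like this can tell you what to verify; it cannot tell you whether a citation is real or whether the decision says what the AI claims. That part stays with the attorney.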

Don't file boilerplate. The memo specifically targets generic filings. If an AI generates a brief that could apply to any asylum case from any country, it's exactly the kind of output PM 25-40 flags. Every filing needs to address the specific facts, claims, and legal issues of the individual client.

Build verification into your workflow. The issue isn't whether to use AI. It's whether the systems around AI catch errors before they reach the court. Human review checkpoints, citation verification, and source-grounded research tools that cite actual documents rather than generating plausible-sounding references are the practical answer.

The attorneys who will have problems aren't the ones using AI. They're the ones using AI without verification infrastructure, copying ChatGPT output into filings without checking whether the citations are real or the legal analysis is accurate.

How to use AI safely in immigration practice

The safest AI tools for immigration practice share two characteristics: they're grounded in primary sources, and they provide verifiable citations.

For country conditions research, that means tools built on actual State Department reports, UNHCR publications, and human rights documentation, not tools that search the internet and summarize what they find. For legal research, it means tools that search actual USCIS policy manuals, BIA precedent decisions, and federal register notices, with citations to specific documents and page numbers.

The standard PM 25-40 sets isn't complicated: don't submit AI-generated content you haven't verified, and don't file generic work product that ignores the client's specific situation. The attorneys who build verification into their AI workflows will use AI safely and effectively. The ones who don't will eventually appear in EOIR's discipline reports.


We built our immigration AI systems with PM 25-40 compliance as the foundation: human review checkpoints, citation verification, and audit trails on every workflow. See how we build AI departments for immigration firms →

Tell us what's eating your time.

No pitch. No demo. Just a conversation about whether AI makes sense for your operation.

Get in touch