Forbes contributors publish independent expert analyses and insights.
In today's column, I examine a newly filed lawsuit against OpenAI over the noteworthy contention that the AI maker is allowing ChatGPT to provide legal advice.
Here's the deal. Presumably, contemporary generative AI and large language models (LLMs) are not supposed to be handing out legal advice. The act of doing so would seem to violate the various UPL (Unauthorized Practice of Law) provisions throughout the United States (other countries tend to have similar provisions, but not all do). Only lawyers are supposed to give out legal advice. Non-lawyers generally cannot do so, though you can act as your own "lawyer" if you wish to take that chance.
In this recently filed case, the plaintiff is a company that was sued by an individual for various claims, and the individual allegedly tapped into ChatGPT to devise legal filings. The legal filings were apparently all over the map, causing the company to expend inordinate money and resources to fight the case. The company seeks to be compensated by OpenAI for the presumed use of ChatGPT to legally aid this individual in suing them and wasting their time and effort.
Does this landmark lawsuit have a leg to stand on?
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And The Law
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the intersection of AI and the law for many years. You can find my writings not only in my Forbes column but also as posted in Bloomberg Law, ABA Law Journal, The National Jurist, The Global Legal Post, Lawyer Monthly, The Legal Technologist, MIT Computational Law Journal, and so on.
There are two major perspectives on the mixture of AI and law:
* (1) AI & Law. The application of AI to perform legal reasoning, and
* (2) Law & AI. The application of laws to the governance and regulation of AI.
Thus, you can apply AI to the law, and you can conversely apply the law to AI. For my big picture overview of both of these exciting and rapidly evolving realms, see my discussion at the link here and the link here.
I will be focusing here on the application of AI to perform legal reasoning. It is an intriguing problem that remains vexing and unsolved. There is a specialized field of research that concentrates on making advances in AILR (AI Legal Reasoning) -- see my books on the steady but uneasy progress toward attaining AILR, including my introductory book at the link here and my advanced book at the link here.
The overarching goal of AILR is to devise AI that can work on the same level as human lawyers and practice law as they do. You would not be able to differentiate the legal efforts of human lawyers from the AILR. They would be on par with each other. Think of this in the widest and deepest way possible, such that the AI is equal to whatever legal efforts and shenanigans human lawyers undertake.
Some researchers hope to go even further. The aim is to craft AI that is superior to human lawyers and can be a kind of superhuman lawyer. This highly advanced AI would run circles around human lawyers in all legal matters. It would no longer make sense to hire a human attorney since they could be completely outmaneuvered by the superhuman AILR.
We aren't there yet, so please don't drop out of law school. Savvy budding lawyers are realizing that attorneys armed with AI are going to outdo and outshine lawyers who shun AI. Make sure to get as much AI under your belt as you can and protect yourself from getting career-sidelined.
The Practice Of Law Is Sacred
To presumably protect the public at large, the United States has landed on precepts that allow only lawyers to practice law. There is a sensible basis for this. If a person claimed they could save you from legal woes by legally representing you, but they weren't an actual lawyer, you might engage them and end up getting improper and imprudent legal advice. Imagine the number of scams and scammers that would emerge. It would be a legal doomsday scenario for the general public.
Anyone who holds themselves out as a lawyer, but isn't a lawyer, can get themselves into rather dangerous legal troubles. This overall notion is commonly referred to as the Unauthorized Practice of Law (UPL), varying depending upon the legal jurisdiction, but in the United States, there is a relatively consistent set of state-by-state rules barring people from pretending to be attorneys. For my extensive analysis of the use of AI in the legal field and the resultant implications for UPL, see the link here and the link here.
Consider the rules in California that pertain to the unlawful practice of law. The California Business and Professions Code (BPC) contains Article 7, covering the unlawful practice of law, in which Section 6126 clearly declares this:
* "Any person advertising or holding himself or herself out as practicing or entitled to practice law or otherwise practicing law who is not an active licensee of the State Bar, or otherwise authorized pursuant to statute or court rule to practice law in this state at the time of doing so, is guilty of a misdemeanor punishable by up to one year in a county jail or by a fine of up to one thousand dollars ($1,000), or by both that fine and imprisonment."
Mindfully examine that legal passage. I emphasize this because the act of holding oneself out as a lawyer can be prosecuted as a crime that lands the person in jail. Do the crime, do the time, as they say.
Some Say UPL Is Unfair
Not everyone buys into the idea that only lawyers should be legally allowed to practice law.
One belief is that this is essentially a monopolistic contrivance. It is a means of preventing open competition. It is a racket, as it were. The main purpose would seem to be to keep the supply of available legal advisors low and artificially keep the cost high. Only those of the secret society can make big bucks. Plainly a devious scheme.
How could the practice of law be democratized?
If somehow everyone could get legal advice without having to pay an arm and a leg, access to justice across the board would be more likely. It wouldn't be that only the wealthy get the most out of the law. Whether rich or poor, all would have the same access to bona fide and top-notch legal guidance.
The hoped-for magical way to achieve this would be to lean into AI.
If we can get AI legal reasoning to be on par with human lawyers, you would seemingly be able to choose which path to take. You could use a human lawyer or use AILR. We don't know what the pricing would be for AILR, but the assumption is that it would likely be less expensive than human lawyers. Plus, AILR could be available anytime and anywhere, and on a massive scale, so the entire population of the United States could presumably have their "own AI lawyer" at the ready.
Some say that this would be nirvana. People could tap on their smartphone and instantly get AILR to advise them on how to handle that speeding ticket or what to do about that lawsuit against them for putting up a fence on their neighbor's property. The other side of the coin is that this would be an utter nightmare. If people are already prone to sue each other, this would take litigation to stratospheric levels. There would be a tremendous number of cases going before the courts that would completely overwhelm the justice system.
Allow me to note that the counterargument to the courts getting overwhelmed is that we would simply place AILR into the role of human judges. The beauty is that the AILR could handle any volume of cases and proceed to do so expeditiously. That makes some shudder to think that judges and possibly even juries would be composed of AI, rather than our fellow humans.
OpenAI ChatGPT Usage Policies
All the major AI makers have established their own set of usage policies regarding how you can make use of their AI wares. This includes the popular LLMs such as OpenAI ChatGPT and GPT-5, Google Gemini, Microsoft Copilot, Meta Llama, xAI Grok, Anthropic Claude, and others.
OpenAI says this on their usage policies webpage:
* "Everyone has a right to safety and security. So, you cannot use our services for the provision of... tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional" (posted as of October 29, 2025).
It is interesting to observe the changes in the wording of the OpenAI usage policies over time. For example, I previously discussed the OpenAI prohibition on using their wares for legal advice, and at that time (see the link here), the policy was stated this way:
* "OpenAI's models are not fine-tuned to provide legal advice. You should not rely on our models as a sole source of legal advice."
I conjecture that this prior wording was a bit ambiguous by saying that their models were not "fine-tuned to provide legal advice" and that the models should not be "a sole source of legal advice". One interpretation would be that you could use their models for legal advice, but that you are merely cautioned that it isn't fine-tuned for that purpose and that you would be astute to seek additional sources for your legal advice.
The implications are that if you are willing to accept the idea that their AI isn't fine-tuned, and only just generally capable of legal advice, you are pretty much good to go. In terms of consulting other sources, the sources weren't named, and thus, you could seemingly read a book on the law or ask a friend for legal advice. Period, end of story.
The Bounds Are Unclear
The current version of the warning or cautionary indication seems quite a bit sterner and more direct. I would say it seeks to plug the loopholes of the prior verbiage. You aren't to use their AI services for tailored legal advice unless you have the appropriate involvement by a licensed legal professional. That seems a tad more conclusive.
But I might add that this still allows room to maneuver. The emphasis is on "tailored" legal advice. The presumption is that you can ask the AI for any non-tailored legal advice. It might be a wink-wink suggestion that, hey, go ahead and ask any broad questions about legal matters, just do not get specific. A person could get tricky on this provision. Suppose you ask the AI to give legal advice for a "fictitious" situation, which just so happens to precisely match your specific situation. You could insist that it wasn't tailored advice being sought.
Is that enough of an escape hatch for OpenAI to be free and clear of UPL?
Until now, there haven't been any realistic attempts at testing this provision.
Lawsuit Filed Against OpenAI
A lawsuit launched by Nippon Life Insurance against OpenAI was filed in the Northern District of Illinois on March 4, 2026. Nippon Life Insurance is the plaintiff, and OpenAI is the named defendant.
The legal claim is laid out this way (capitalization shown as is):
* "NIPPON brings this lawsuit against OPENAI FOUNDATION and OPENAI GROUP PBC under the common and statutory laws of the State of Illinois for: a) tortious interference with a contract; b) the unlicensed practice of law; and c) abuse of process."
* "This action arises from OPENAI's collective conduct, through its artificial intelligence ('AI') chatbot program, ChatGPT, in providing legal assistance to a user, Graciela Dela Torre (hereinafter referred to as 'Dela Torre'), without licensure."
* "As Dela Torre's legal assistant and advisor, OPENAI intentionally induced and facilitated Dela Torre's breach of a valid and enforceable settlement agreement with NIPPON by encouraging and assisting her in filing a motion to reopen a lawsuit that had been dismissed with prejudice. It also aided and abetted her abuse of the judicial process."
I am going to stay at a 30,000-foot level for this analysis of the case. There are a slew of nuances and details that are juicy and valuable for an in-depth examination. If the readers' interest warrants, I'll do a series of postings to cover the numerous particulars. Stay tuned.
Also, because this was just recently filed, we don't yet have a detailed response by OpenAI to the lawsuit. I will speculate about what the response is likely to contain. Undoubtedly, OpenAI will categorically reject the claims and seek to have the lawsuit dismissed. That's a nearly ironclad sure bet.
The Alleged Misconduct
I shall unpack topline facets of the filed lawsuit. By ChatGPT allegedly assisting the individual who was legally wrangling with Nippon Life Insurance, the lawsuit claims these three key issues or legal problems exist:
* (1) ChatGPT, under the auspices of OpenAI, encouraged breach of the settlement contract.
* (2) ChatGPT, under the auspices of OpenAI, engaged in the unlicensed practice of law (UPL).
* (3) ChatGPT, under the auspices of OpenAI, facilitated abuse of the judicial process.
The lawsuit essentially asserts that ChatGPT was acting as a legal assistant or advisor on behalf of or at the behest of OpenAI. Note that the lawsuit isn't saying that OpenAI did these actions directly. In other words, there were no employees at OpenAI who took these legally questionable actions of helping the individual who was involved in the settlement.
Instead, it was done entirely via ChatGPT. And, logically, since ChatGPT is made by and controlled by OpenAI, the company OpenAI ought to be held responsible for what their AI did. The AI maker and its AI developers are to be held accountable for the actions of their AI.
I mention and emphasize the point that this lawsuit targets OpenAI as a company. There has been abundant baloney in the news media and social media that the lawsuit is targeting ChatGPT, as though ChatGPT has sentience and represents a form of personhood. That's hogwash. For my coverage of the AI and personhood question, concerning whether we will someday recognize AI as being on par with legal personhood, see my discussion at the link here. We don't at this time.
Anyway, the nutty stuff out there is merely the ongoing tsunami of people making up stuff and clamoring to accumulate views, doing so without any genuine attempt to figure out what they are spouting. It's sad how much disinformation and misinformation arise. The bottom line is that this has to do with OpenAI and the making, fielding, and upkeep of their AI wares, specifically ChatGPT.
I will next briefly explore each of the three key claims of the lawsuit. I am not offering legal advice. Nothing I say here has any legal significance and is entirely the musings of a layman. Anyone encountering any kind of legal circumstance that parallels the lawsuit being discussed should seek advice from their representative legal counsel. That's my clear-cut fine print for this discussion.
Claim #1: Tortious Interference With A Contract
The first claim is that ChatGPT encouraged the filing of motion(s) that violated the settlement agreement that had been established between Nippon Life Insurance and the individual named in the case. If the individual had entered the settlement agreement into ChatGPT, or otherwise explained it to ChatGPT, the argument is that ChatGPT "knew" about the settlement and was overtly and flagrantly telling the individual to circumvent it. This could be likened to a third party encouraging contractual non-compliance. You can get into legal trouble doing that.
One path of escape for OpenAI is that if ChatGPT had not been informed about the settlement, the AI then would not have "known" about it. In that circumstance, it is much harder to blame ChatGPT since it "unknowingly" provided such advice (well, if it did provide that kind of advice to begin with). Did the individual provide the settlement to ChatGPT, and/or did the individual explain the settlement, and if so, to what degree did ChatGPT therefore have access to the settlement agreement?
But, even if all of that did occur, the courts have not seemed to cross the line of holding generative AI on par with the actions of a human actor that has actual knowledge. You are in a weak posture to contend that ChatGPT or any LLM has a semblance of "knowing" about something such as a settlement agreement. Sure, it might have the words in hand, but that's a far cry from having a human-level understanding of the contents.
More To Strictly Question
There's more to this first claim that merits unpacking.
The plaintiff would seemingly have to show that ChatGPT caused the breach of the settlement (assuming that there was a breach). Remember that ChatGPT is not sentient. It was the sentient individual who opted to make the filings with the court. The decision to reopen the litigation was ultimately made by that individual, a human being. Though ChatGPT might have suggested doing so, the buck stops with the human. I realize that a third-party argument can be made. It seems like quite a reach to insist that ChatGPT was the proximate cause.
If you are going to hold ChatGPT accountable (via OpenAI), the same argument could be made that if the individual had read a book, done a search on the Internet, or consulted any other outside source, those sources would similarly be held accountable for the breach. That would not likely hold water.
Finally, a claim of tortious interference typically revolves around intentional conduct. Did a third party intend to direct actions to cause a breach? The problem with this aspect is the matter of legal intent. We might all agree, hopefully, that ChatGPT is not sentient and, ergo, cannot form human intent. Perhaps it can form some kind of computational or mathematical "intent," but that's not the same as human intent. Proving to a court or a jury that generative AI has intent is going to be quite a tough row to hoe.
Claim #2: Unlicensed Practice Of Law
The plaintiff seems to assert that ChatGPT carried out lawyer-like efforts, including interpreting a legal settlement agreement, giving legal advice to the individual on legal options, and drafting or assisting in drafting a legal filing. All in all, that seems like AI practicing law, doing so without a license and not being authorized as a human lawyer would be.
First, let's get on the table that there has been a long history of arguments made about the use of automation as potentially violating UPL. Famous examples include LegalZoom and Rocket Lawyer. For my detailed analysis, see the link here. The gist is that this is not a new argument. We've been around this bend before.
Second, UPL is typically enforced by regulators, not by private litigants. It is a serious topic that state bar associations handle. Attorneys general sometimes take on UPL situations.
Does Nippon Life Insurance have suitable standing to pursue a UPL claim?
I wouldn't hold my breath on it. Also, OpenAI would almost certainly highlight that the provisions of their usage policies preclude the use of ChatGPT for said purposes. The individual seemingly violated the usage policies, assuming they didn't get corresponding appropriate legal advice from a licensed professional. Therefore, OpenAI would contend that you cannot hold OpenAI to blame for what the user did. OpenAI would attempt to hold its head high, as ChatGPT only provides educational content about the law. The user failed to comply with how to properly use ChatGPT. Shame and blame go to the user.
As if that's not already enough, the other arrow in the quiver would be for OpenAI to invoke the First Amendment to the U.S. Constitution. Courts have repeatedly held that legal information is a form of protected speech. This differs from the elements of a law practice that represents clients and formally advises on legal issues. Regulating AI responses of this nature would raise free speech concerns.
That's a humongous can of worms, and doubtful that this lawsuit will get to pry it open.
Claim #3: Abuse of Judicial Process
This third claim is by far the weakest of the three major claims. The plaintiff is apparently claiming that ChatGPT aided in misusing the court system. It aided and abetted in abusing the process of justice.
The conventional path of this type of argument is that the abuse consisted of malicious litigation tactics. Or there was coercion that made exploitive use of legal procedures. Courts usually seek to ascertain that there was an ulterior motive at play.
I suppose you could try to get the internal computational calculations of ChatGPT that occurred when generating the responses, and then try to find something to hang your hat on in that morass. The thing is, as I've identified numerous times, trying to ferret out what is happening deep inside a large-scale artificial neural network (ANN) on a human-logic basis is still beyond our prevailing skill set, see my analysis at the link here and the link here. This also takes us back to the question of intent and whether generative AI forms human intent.
You are facing a nearly insurmountable mountain climb to get generative AI into the box of having intent or human agency. Good luck with that far-fetched legal possibility.
Final Musings For Now
If the lawsuit happens to gain traction, it will be well worth keeping tabs on, since the outcome could have earth-shattering ramifications. Beyond OpenAI, all LLM makers would need to radically revise their AI, or else face similar legal exposures. They would have to curtail the generation of any content that had a resemblance to legal advice.
Trying to excise or delete the legal advisory capacity of an LLM is not a particularly viable option. Too much of that content is patterned on and intersecting with zillions of other facets of the internal structures. Cutting out the legal stuff would almost certainly gut the essence of the AI. The AI would likely no longer function across the board. Without my unduly anthropomorphizing AI, which I prefer to avoid doing, it would be akin to lobotomizing the LLM.
The likelier route would be to adopt AI safeguards that seek to suppress the generation of such content, or that stop the generation before it gets underway. That is a more technologically feasible possibility. The challenge will be finding a balance between what is construed as legal advice versus content that merely serves an educational purpose about the law. The courts would need to lay this out, or regulators would need to be specific about the boundaries. Without a clear playing field, the AI makers would be stabbing in the dark to figure out what is permitted versus what gets them in legal trouble.
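To make the safeguard idea a bit more concrete, here is a minimal, hypothetical sketch of such a pre-generation guardrail in Python. It screens a user prompt for indicators of a request for tailored legal advice (specific parties, documents, or filings) and deflects it to an educational-only response. Every name, pattern, and threshold here is an illustrative assumption of mine, not OpenAI's actual safeguard logic; a production system would rely on a trained classifier rather than keyword heuristics.

```python
# Hypothetical guardrail sketch -- NOT OpenAI's actual safeguard logic.
# Flags prompts that look like requests for tailored legal advice while
# letting general educational questions about the law pass through.

import re

# Illustrative indicator patterns (assumptions for this sketch only).
TAILORED_INDICATORS = [
    r"\bmy (case|lawsuit|settlement|contract|landlord|employer)\b",
    r"\bdraft (a|my) (motion|complaint|brief|filing)\b",
    r"\bshould i (sue|sign|file|appeal)\b",
]

def screen_prompt(prompt: str) -> dict:
    """Classify a prompt as tailored (deflect) or educational (allow)."""
    lowered = prompt.lower()
    hits = [p for p in TAILORED_INDICATORS if re.search(p, lowered)]
    if hits:
        return {
            "allow": False,
            "reason": "possible request for tailored legal advice",
            "matched": hits,
            "deflection": (
                "I can explain legal concepts generally, but for advice on "
                "your specific situation please consult a licensed attorney."
            ),
        }
    return {"allow": True, "reason": "appears to be a general educational question"}

# Usage: a tailored request is deflected, a general question is allowed.
print(screen_prompt("Should I sue my neighbor over the fence?")["allow"])    # False
print(screen_prompt("What does 'dismissed with prejudice' mean?")["allow"])  # True
```

The design tension the column describes shows up directly in the pattern list: draw the net too wide and the AI refuses harmless educational questions, too narrow and "fictitious scenario" workarounds slip through.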
In my humble opinion, I don't see this case going very far. It seems that the plaintiff doesn't have proper standing for the UPL claim. The intent elements are missing. The causation is exceedingly weak. My hunch is that this is headed to an early dismissal by the judge.
The Bigger Picture
I have a twist that I'd like to share.
I anticipate that this lawsuit might trigger an altogether "unexpected consequence" response overall. If the case manages to capture the headlines for a while, it could stir up a hornet's nest, although the case itself might fail in the end and disappear into the abyss of legal history.
The hornet's nest is that the question of AI providing legal advice is going to finally get prominence. So far, it has been only a matter known and discussed among those versed in the AI and law domain. Few others think about it, talk about it, or worry about it. Maybe regulators will step up to overtly address the issue. State bars might get energized. AI makers might see the writing on the wall and proactively institute new AI safeguards around the generation of anything resembling legal advice.
A final comment for now.
The comedian Steven Wright proffered one of the funniest lines about attorneys (which even attorneys tend to relish too): "I busted a mirror and got seven years bad luck, but my lawyer thinks they can get me five." Is that lawyering advice from a human attorney or generative AI?