TechFides — May 2026
A managing partner I spoke with last month told me her associates were using ChatGPT to summarize deposition prep. She said it like it was a productivity win. Then she paused. "Where does that go, exactly?"
That's the question every founding attorney needs to ask before the bar association asks it for them.
This is what private AI looks like for a real law firm — and why I think the next two years will separate firms that own their AI infrastructure from firms that rent it and find out the hard way what they signed.
The privilege problem nobody is talking about clearly
Attorney-client privilege is a duty, not a setting. When your associate pastes a deposition outline into ChatGPT to "make it tighter," that text leaves your network. It travels to OpenAI's servers, sits in their logs, and depending on the account tier, may train future model versions.
Your firm's position will be that the disclosure didn't waive privilege. Opposing counsel's motion will argue otherwise. A judge somewhere is going to write the first published ruling, and the firm whose data was on the wrong side of it is going to spend a year explaining the line item on its malpractice premium.
Most firms are not waiting for that ruling. They are running the same risk every day, just hoping their case isn't the test case.
The state bars have already started to weigh in. Florida, California, New York, and Texas have all issued ethics opinions on generative AI in the past 18 months. Every single one names the same thing: confidentiality of client data is the lawyer's responsibility, regardless of what the vendor's terms of service say.
You cannot outsource a duty.
What private AI actually means
When I say "private AI," I mean three specific things:
The model runs on hardware you control. Not a server farm in Ohio. Not "encrypted in transit and at rest" with a third party. A box in your office, or a colocation cage your IT person can physically point to.
The data never leaves your network. Every prompt, every document, every drafted memo stays inside your firewall. There is no API call. There is no cloud round-trip. There is nothing for opposing counsel to subpoena from a vendor.
You own what runs. Open-source models like Llama 3 and Mistral have caught up to GPT-4 for the workflows that matter to law firms — document summarization, contract redlining, case-law research, deposition prep. They are licensed so that you can run them perpetually, even if the original publisher disappears.
This is not theoretical. The hardware to run a 70-billion-parameter model on-premise costs between $5,000 and $15,000 today. Three years ago it was $40,000. The economics have shifted.
What it actually looks like in a 25-attorney firm
Picture a mid-size litigation firm in Dallas. Twenty-five attorneys, eight paralegals, three offices. Today their AI footprint looks like this:
- Twelve Microsoft Copilot Pro seats at $30/seat/month = $360/month
- Six personal ChatGPT Plus subscriptions on the firm card = $120/month
- Two associates who quietly signed up for Claude Pro = $40/month
- An associate who is "experimenting" with Cursor for contract drafting = $20/month
- Maybe $200/month in "research tool" subscriptions that touch AI
That is roughly $740 a month in AI subscriptions. None of it is governed. None of it is auditable. None of it is private. Each tool's privacy policy is a different agreement with a different vendor on a different renewal cycle.
Now imagine the same firm with private AI installed. One server in their server closet. A Llama 3 70B model running on it, configured for legal workflows. Attorneys access it through a web interface that looks and feels like ChatGPT. Same speed. Same quality on the workflows that matter. The bill is one line item: a managed retainer covering hardware, deployment, monitoring, and updates.
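To make "nothing leaves the firewall" concrete: the web interface is just a front end on a local inference endpoint. Here is a minimal sketch in Python, assuming an Ollama-style runtime on the firm's own server; the hostname, port, and model tag are illustrative, not taken from any specific deployment:

```python
import json
import urllib.request

# Illustrative values: an Ollama-style runtime on a server inside the firm's LAN.
LOCAL_ENDPOINT = "http://ai.firm.internal:11434/api/generate"
MODEL = "llama3:70b"

def build_request(prompt: str) -> urllib.request.Request:
    """Package a prompt for the on-premise model.

    No API key, no cloud round-trip: the request resolves to a box
    in the firm's own server closet.
    """
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Summarize this deposition outline in five bullets.")
    # urllib.request.urlopen(req)  # uncomment on a network where the server runs
```

The point of the sketch is the address line: it resolves inside the firm's network, so there is no vendor log for a subpoena to reach.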
Cost: in our Private AI tier structure, a firm this size lands on the Growth plan at $2,299/month. That is roughly three times their current subscription stack, but it eliminates the privacy exposure, kills the seat-based price creep, and gives them infrastructure they control, not a stack of services they rent.
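The comparison is simple enough to check. A quick sketch of the math, using the line items from the hypothetical Dallas firm above (all prices as listed; the Growth plan figure is the one quoted):

```python
# Monthly AI spend for the hypothetical 25-attorney Dallas firm.
subscription_stack = {
    "Copilot Pro (12 seats x $30)": 12 * 30,   # 360
    "ChatGPT Plus (6 x $20)":       6 * 20,    # 120
    "Claude Pro (2 x $20)":         2 * 20,    # 40
    "Cursor (1 seat)":              20,
    "AI-touching research tools":   200,
}
current = sum(subscription_stack.values())

private_ai = 2299  # Growth plan: one line item

print(f"Current stack: ${current}/month")             # $740/month
print(f"Private AI:    ${private_ai}/month")          # $2,299/month
print(f"Multiple:      {private_ai / current:.1f}x")  # 3.1x
```

The multiple is what the sticker comparison shows; what it does not show is that the $740 is spread across five vendor agreements, none of them private.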
The firms actually running the math are weighing what the larger number eliminates, not the multiple itself.
The objections, in order of how often I hear them
"We don't have IT."
Most SMB firms do not. That is exactly why we exist. We loan the hardware, we install it on-site, we monitor it 24/7, and your team uses it like any other application. Your office manager does not become a sysadmin. Your associates do not learn Linux. They open a browser and type a prompt.
"What if the model isn't as good as ChatGPT?"
For 80 percent of legal workflows — summarization, drafting, research — open-source models are now indistinguishable from GPT-4 in blind tests. For the remaining 20 percent (long-context reasoning, novel legal arguments), there is still a quality gap. We are honest about that. For those workflows, your team still has the option to escalate to a public model — but only when they consciously choose to, with the data they consciously decide is shareable.
The default is private. The exception is public.
"What if the technology changes?"
It will. Open-source AI is moving faster than enterprise software has moved in twenty years. Our managed retainer includes monthly updates to the latest stable model release. When Llama 4 ships, your firm gets it. When a vertical-specific model fine-tuned for legal research drops, we evaluate and deploy it. The hardware stays in your building; the software stack is ours to keep current.
"What does the bar say?"
The bar says you are responsible for client confidentiality regardless of what tool you use. The bar does not say you cannot use AI. The bar says you have to use it in a way that maintains your duty.
Private AI is the cleanest possible answer to "how did you maintain confidentiality?" Your data did not leave the building. There is no third-party processor agreement to argue about. There is no breach notification matrix to fill out.
Where to start
If you are a founding or managing attorney reading this, the next move is not to buy hardware. The next move is to take honest inventory of what your firm is doing today.
For the next week, ask three questions in your morning huddle:
- What client matters touched ChatGPT, Claude, or Copilot last week?
- Which associates are using AI tools the firm is not paying for?
- What is your written policy on AI use, and when was it last reviewed?
If you cannot answer all three with confidence, you have shadow AI in your firm. That is the actual exposure — not the AI, but the lack of visibility into where your client data is going.
We built TechFides to solve this for firms exactly your size. The hardware is loaned. The deployment is included. The price is one predictable monthly line item. The data stays in your building.
If you want to see what this looks like for your firm specifically, take our 8-minute AI readiness assessment — it produces a tailored roadmap with real numbers, not a sales call.
The firms that move on this in the next twelve months will look back in three years and treat it the way they treat case management software now: a normal cost of doing business, not a brave technology decision.
The firms that wait are betting that no judge anywhere writes the ruling first.
Like this? Get the next one Wednesday.
One email per week. No marketing filler. Unsubscribe anytime.