TechFides — May 2026
A regional accounting firm I worked with last quarter ran a quiet inventory of their team's AI use. They found 14 different AI tools touching client financials. None had a signed agreement with the firm. All had at least one user in the last 30 days.
The managing partner read the list, set it on the desk, and said: "If a client asked us today where their P&L data lives, I have no idea what we would tell them."
That sentence captures what is happening across professional services in 2026. The work is confidential. The AI tools are not. The gap between the two is widening every month.
Here is what private AI for accounting firms, consulting practices, architecture firms, engineering firms, and other professional services looks like — and why I think most managing partners are about a year away from being asked the question they cannot answer.
What "professional services" means in this context
When I say professional services in this article, I mean firms whose product is expert judgment applied to confidential client information.
That includes:
- Accounting and tax firms (CPAs, EAs, bookkeeping)
- Management consulting (strategy, operations, change management)
- Architecture and engineering firms
- Specialty consulting (cybersecurity, environmental, regulatory)
- Boutique investment banks and M&A advisors
- Marketing and brand strategy firms with confidential client work
- Recruiting and executive search
These firms share a common pattern: clients hire them because the firm knows things, sees things, and synthesizes things the client cannot do internally. The firm's job is to apply expert judgment to information the client trusts it with. Confidentiality is not a feature of the engagement. Confidentiality is the engagement.
When AI enters this picture, the rules change. And most firms have not updated the rules.
The four ways AI is leaking out of professional services firms
I have done enough AI inventories now to recognize the pattern. Confidential client information is leaving professional services firms through four predictable channels:
1. Drafting and summarization. A senior associate pastes a long client email thread or a 40-page report into ChatGPT and asks it to summarize. The summary comes back in 30 seconds, saving an hour. The original document — including client names, financial figures, strategic plans — now sits in OpenAI's logs.
2. Analysis assistance. A consultant pastes a client P&L into Claude and asks for "three things that look unusual." The AI surfaces useful patterns. The client's revenue mix and cost structure now sit on Anthropic's servers, subject to whatever retention and training settings apply to that account.
3. Content production. A marketing director uses Jasper or Copy.ai to draft a press release announcing a client's acquisition before the announcement is public. The deal terms enter the AI vendor's data flow before they enter the public record.
4. Internal collaboration tools. Notion AI, Slack AI, and similar tools quietly summarize internal channels and documents. The summaries pull in client names, project codenames, and confidential discussion. All of it crosses to the AI feature provider.
Each individual instance is small. The aggregate exposure is substantial — and growing every quarter as more tools add AI features by default.
Why the standard answers don't work
Three responses I hear from sophisticated managing partners when I describe the problem:
"We have an AI policy."
I have read a lot of these policies. They tend to be one of two things: aspirational language about "responsible AI use" with no enforcement mechanism, or a list of approved tools that does not match what staff actually use. Either way, the policy lives in a Word doc that nobody reads after orientation.
A policy is the right idea. A policy without an alternative tool is a policy that gets ignored.
"We use the enterprise version, which doesn't train on our data."
True for some products, but less reassuring than it sounds.
ChatGPT Enterprise's terms exclude training on your data. So do Claude Team and Microsoft 365 Copilot. But the data still leaves your network. It crosses the public internet. It sits in someone else's data center. It is processed by someone else's systems. And the vendor typically retains prompt logs for a period — often 30 days, sometimes longer, depending on the plan and its retention settings.
For most professional services work that is acceptable. For some clients — board-level engagements, M&A transactions, investigations, regulated industries — it is not. The standard you signed up to in your engagement letter may be higher than "the vendor promises not to train on it."
"Our IT department blocked the consumer tools."
Two problems. First, AI features are increasingly embedded in tools your IT department did not block — Microsoft 365, Google Workspace, your CRM, your project management software, your video conferencing tool. Second, blocked tools just push staff to personal accounts, which is worse because IT no longer has visibility.
The cleanest answer is not to block. The cleanest answer is to provide an alternative your staff prefers, one that runs on your terms.
What private AI actually looks like in a 35-person consulting firm
Picture a strategy consulting firm in Frisco. Four partners, eight engagement managers, twenty consultants, three operations staff. Their AI footprint today probably looks like:
| Tool | Users | Monthly cost | Notes |
|---|---|---|---|
| Microsoft 365 Copilot | 35 | $1,050 | Firm-wide |
| ChatGPT Plus | 12 | $300 | Mix of firm + personal |
| Claude Pro (firm card) | 4 partners | $80 | "For when GPT misses" |
| Claude Pro (personal) | 6 consultants | $120 | Expensed reimbursement |
| Otter.ai Business | (shared) | $40 | Meeting transcription |
| Notion AI | 35 | $280 | Firm-wide |
| Mid-tier marketing AI tools | 2 | $80 | Marketing team |
| Total | — | $1,950/mo | $23,400/year |
Now picture the same firm with private AI installed. One server in the data closet. A medium-large open-source model running on it (Llama 3 70B or similar). Configured for consulting workflows — engagement summaries, slide drafting, market research, contract review. Staff access through a chat interface that feels like ChatGPT.
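What "staff access through a chat interface" looks like under the hood can be sketched in a few lines. Self-hosted serving stacks such as vLLM, Ollama, and llama.cpp all expose an OpenAI-compatible chat endpoint, so internal tooling can talk to the in-office model with plain HTTP. The hostname, port, and model name below are placeholders for whatever the firm actually deploys:

```python
# Minimal sketch: querying a self-hosted model through an
# OpenAI-compatible chat-completions endpoint on the firm's LAN.
# LOCAL_ENDPOINT and MODEL are hypothetical values, not TechFides specifics.
import json
from urllib import request

LOCAL_ENDPOINT = "http://ai.internal:8000/v1/chat/completions"  # in-office server
MODEL = "llama-3-70b-instruct"  # whichever open-weights model the firm runs

def build_payload(prompt: str) -> dict:
    """Assemble a chat-completions request. Nothing here leaves the network."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are the firm's drafting assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # keep drafting output conservative
    }

def ask(prompt: str) -> str:
    """Send the prompt to the local endpoint and return the model's reply."""
    req = request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint speaks the same protocol as the commercial APIs, the chat interface staff already know can be pointed at it with a one-line configuration change, which is what makes the "alternative your staff prefers" approach practical.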
For our Private AI Scale tier, this size firm fits at $3,999/month — hardware loaned, monitoring included.
The raw spend is higher — roughly $48,000 a year for the private tier against $23,400 in subscriptions — but the risk profile is dramatically different. The data does not leave the building. The vendor relationship goes from 6+ contracts to one.
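The comparison is simple arithmetic; here is a quick sketch using only the figures from the table and the tier price above:

```python
# Monthly subscription spend from the table above (USD).
subscriptions = {
    "Microsoft 365 Copilot": 1050,
    "ChatGPT Plus": 300,
    "Claude Pro (firm card)": 80,
    "Claude Pro (personal, expensed)": 120,
    "Otter.ai Business": 40,
    "Notion AI": 280,
    "Marketing AI tools": 80,
}

monthly_stack = sum(subscriptions.values())
annual_stack = monthly_stack * 12

private_tier_monthly = 3999  # Private AI Scale tier
annual_private = private_tier_monthly * 12

print(f"Subscription stack: ${monthly_stack:,}/mo (${annual_stack:,}/yr)")
print(f"Private AI Scale:   ${private_tier_monthly:,}/mo (${annual_private:,}/yr)")
# Subscription stack: $1,950/mo ($23,400/yr)
# Private AI Scale:   $3,999/mo ($47,988/yr)
```

The delta buys a different risk posture, not more chat completions — which is the trade the rest of this piece argues for.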
The conversation with the client that changes
Three years from now, the most sophisticated clients are going to ask the same question in the kickoff meeting:
"How does your firm use AI on our engagement, and what happens to our data?"
Some firms will answer "we don't" — and lose the engagement to a competitor that uses AI to deliver faster.
Some firms will answer "we use the same tools you use, with enterprise terms" — which is true, but it gives the client no defensible answer when they have to explain it to their board.
The firm that answers "AI work happens on our private infrastructure, in our office, on hardware we own — your data does not leave the engagement" — that firm wins the work, and gets to charge for the higher trust position.
Confidentiality has always been the product in professional services. AI changes which firms can credibly claim it.
Where to start
If you are a managing partner reading this, the first move is not procurement. It is an honest 30-minute AI inventory.
This week, ask three questions:
1. What AI tools have client data flowed through in the last 90 days? Not what your policy says. What actually happened. Ask the engagement managers, not the IT lead.
2. Which of those tools has a signed agreement with the firm covering client data? For each one, get the agreement and read the data-handling clause.
3. What would you tell your top three clients if they asked, today, where their data lives? Write the answer in three sentences.
If those answers do not match what you are comfortable with, you have shadow AI in your firm. Private AI is the way to bring it back into the light without taking the productivity gains away from your team.
We built TechFides Private AI for professional services firms specifically because this is the vertical where the trust contract is most explicit and the AI exposure is hardest to govern through policy alone.
If you want a tailored view of what owning your AI looks like for your firm — including a comparison to your current subscription stack and a defensible answer for the next client kickoff meeting — start with our 8-minute readiness assessment.
The firms that move on this in the next twelve months will be the ones whose answer to the AI question becomes a competitive advantage. The firms that wait will find out which clients were quietly waiting for a better answer.
Like this? Get the next one Wednesday.
One email per week. No marketing filler. Unsubscribe anytime.