AI adoption in M&A is often discussed as if the key question were simply how many firms use it. That is too shallow. The harder and more useful question is which firms are turning AI into real operating capability and which firms are still treating it as an experiment that sits outside the core deal workflow.
That distinction matters because the competitive gap does not open when one firm buys software and another does not. It opens when one firm changes how it screens targets, reviews documents, escalates issues, and supports decision-making, while another firm continues to rely on slower and more fragmented workflows.
Deloitte's 2025 M&A generative AI study makes clear how broad the experimentation and integration base has already become: 86% of surveyed organizations reported having incorporated gen AI into M&A workflows in some form [Deloitte, "2025 GenAI in M&A Study," 2025]. McKinsey's January 2026 work pushes the conversation one step further, reporting that respondents using gen AI in M&A saw roughly 20% lower costs on average, while 40% reported deal cycles that were 30% to 50% faster [McKinsey & Company, "Gen AI in M&A: From theory to practice to high performance," January 2026]. Bain's 2024 and 2025 M&A reports reinforce the same broad direction: the market is moving beyond casual interest toward practical evaluation of where AI helps most inside dealmaking [Bain & Company, "2024 M&A Report," 2024]; [Bain & Company, "2025 Global M&A Report," 2025].
What Adoption Actually Means
The word "adoption" is misleading because it covers very different states of maturity.
At the lowest level, a team may simply test a general-purpose AI tool to summarize notes or compare a few documents. That counts as exposure, but it does not change how deals get done.
At a higher level, a team may run pilots inside specific workflows such as target screening, document review, or issue triage. This is more meaningful, but still does not guarantee durable change.
Real adoption begins when AI becomes part of the repeatable operating record. That means the tool is used in live workflows, the outputs are tied to evidence, ownership is clear, the team measures results, and the process keeps working across multiple deals rather than only in a demo or a one-off pilot.
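In concrete terms, "part of the repeatable operating record" can mean that the output, its evidence, its owner, and its review status travel together as one artifact. A minimal sketch of what such a record might hold; the field names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIAssistedOutput:
    """One AI-assisted work product, kept as part of the deal record.

    Field names are illustrative; the point is that the output never
    circulates without its evidence, its owner, and its review status.
    """
    deal_id: str
    workflow: str                      # e.g. "document_review", "target_screening"
    summary: str                       # the AI-assisted output itself
    evidence: list[str] = field(default_factory=list)  # source docs behind it
    owner: str = ""                    # who is accountable for the workflow
    reviewed_by: Optional[str] = None  # who validated the output
    review_date: Optional[date] = None

    def is_operating_record(self) -> bool:
        # "Real adoption" in the sense above: evidence attached,
        # ownership clear, and a human review actually recorded.
        return bool(self.evidence) and bool(self.owner) and self.reviewed_by is not None
```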
This is why the leaders-versus-laggards distinction matters more than the raw adoption rate. Plenty of firms can say they use AI. Fewer can show that it has changed throughput, cost, and decision quality in a defensible way.
What Leaders Tend to Do Differently
The firms pulling ahead are usually not the ones making the boldest claims. They are the ones making the most disciplined workflow choices.
They start with a narrow, high-friction use case
Leaders usually do not begin with an abstract mandate to "use AI across M&A." They start with one place where the current process is slow, repetitive, and evidence-heavy. That often means target screening, market mapping, document review, issue synthesis, or integration planning.
These are good first use cases because the workflow is measurable and the pain is already clear. If a team can reduce document review drag or improve the quality of first-pass screening, the value is easier to observe than in more speculative use cases.
They keep the evidence layer intact
This is the most important operational distinction. Leaders use AI to speed research and synthesis, but they do not let the evidence disappear into unsupported summaries. Outputs remain tied to the contracts, filings, notes, and workpapers that produced them.
That is what makes AI usable in real deal settings. Without that evidence traceability, the tool may look fast, but it creates review risk instead of removing it.
They make ownership explicit
Someone owns the workflow, someone owns the validation standard, and someone owns the measurement. Laggards often fail here. A tool is bought, a few people experiment, and no one owns the process long enough to turn experimentation into operating practice.
They measure workflow outcomes, not generic excitement
Leaders do not ask whether people liked the demo. They ask whether the team screened better targets, escalated issues earlier, reduced time spent on repetitive review, or improved the clarity of investment committee materials.
This is where McKinsey's 2026 findings are most useful. The reported cost and cycle-time gains matter not because they make a good headline, but because they imply that some firms have already moved beyond experimentation into measurable operating change [McKinsey & Company, "Gen AI in M&A: From theory to practice to high performance," January 2026].
Where Laggards Usually Get Stuck
Most lagging firms are not blocked by lack of awareness. They are blocked by weak operating choices.
Tool-first thinking
The firm starts by asking which tool to buy instead of which workflow needs redesign. That often leads to shallow experimentation with no durable process change.
Too many use cases at once
When teams try to cover sourcing, diligence, modeling, integration, and reporting at the same time, they usually spread attention too thin and fail to operationalize any one workflow well.
No validation standard
If the team does not know how an AI-assisted output should be checked, by whom, and against what evidence, adoption stalls because reviewers lose trust in the results.
Weak governance
Deloitte's 2025 study is especially relevant here because it highlights how central data security and governance remain in M&A AI adoption [Deloitte, "2025 GenAI in M&A Study," 2025]. Firms that treat security, confidentiality, and reviewability as afterthoughts tend to slow themselves down later when legal, compliance, or deal leaders refuse to rely on the tool in real situations.
No continuity between workflow stages
A common laggard pattern is to use AI in one isolated stage without preserving the context for the next. The team might get a faster summary at the top of the funnel, but then has to rebuild everything manually when the opportunity moves into deeper review.
Which Use Cases Tend to Win First
The most durable first use cases in M&A are usually the ones with four characteristics: high information volume, high repetition, high comparison burden, and clear human review points.
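One way to make that screen explicit is to rate each candidate workflow on those four characteristics before committing. The candidates, ratings, and equal weighting below are illustrative assumptions, not benchmarks:

```python
# Rate candidate first use cases on the four characteristics above
# (1 = low, 5 = high). All ratings here are invented for illustration.
CRITERIA = ("info_volume", "repetition", "comparison_burden", "review_points")

candidates = {
    "target_screening":    {"info_volume": 5, "repetition": 4, "comparison_burden": 5, "review_points": 4},
    "document_review":     {"info_volume": 5, "repetition": 5, "comparison_burden": 4, "review_points": 5},
    "board_memo_drafting": {"info_volume": 2, "repetition": 2, "comparison_burden": 1, "review_points": 3},
}

def score(ratings: dict[str, int]) -> int:
    # Unweighted sum; a team could weight review_points higher if
    # validation capacity is its binding constraint.
    return sum(ratings[c] for c in CRITERIA)

for name, ratings in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(ratings)}/20")
```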
Target screening and market mapping
McKinsey's M&A work highlights target identification and sourcing as strong opportunities for gen AI because these stages involve large amounts of research, categorization, and synthesis [McKinsey & Company, "Gen AI in M&A: From theory to practice to high performance," January 2026]. Firms benefit when they can define the target universe more intelligently and keep it current without overwhelming the deal team with noise.
Due diligence document review
This is a natural fit because the work is evidence-heavy and repetitive, and the payoff from faster organization and issue escalation is tangible.
Cross-workstream issue synthesis
One of the biggest sources of M&A friction is the translation layer between financial, legal, tax, and commercial findings. AI can help when it reduces that translation burden without obscuring the underlying source material.
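A minimal sketch of what reducing that translation burden can look like in practice: synthesize findings by issue rather than by workstream, carrying each discipline's source reference forward instead of flattening everything into one narrative. The workstreams, issue tags, and file names below are invented for illustration:

```python
from collections import defaultdict

# Findings from separate workstreams, each keeping its own source reference.
findings = [
    {"workstream": "financial", "issue": "customer_concentration", "source": "revenue_workpaper_q3.xlsx"},
    {"workstream": "legal",     "issue": "customer_concentration", "source": "msa_top_customer.pdf"},
    {"workstream": "tax",       "issue": "transfer_pricing",       "source": "tp_study_2024.pdf"},
]

# Group by issue so one escalation item shows every discipline's
# evidence side by side, with nothing detached from its source.
by_issue: dict[str, list[dict]] = defaultdict(list)
for f in findings:
    by_issue[f["issue"]].append(f)

for issue, items in by_issue.items():
    sources = "; ".join(f"{i['workstream']}: {i['source']}" for i in items)
    print(f"{issue} -> {sources}")
```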
Integration planning and operating handoff
Bain's broader 2024 and 2025 framing around selectivity and execution discipline matters here because adoption only becomes strategic when it improves follow-through, not just analysis [Bain & Company, "2024 M&A Report," 2024]; [Bain & Company, "2025 Global M&A Report," 2025].
Why the Gap Widens Over Time
The leaders-versus-laggards gap tends to widen because adoption creates compounding process improvements.
Leaders build institutional memory faster
Once AI is tied to actual workflows, the firm begins to accumulate structured knowledge about what worked, what did not, and where delays or false positives appeared.
Leaders create better review loops
Because the workflow is measurable, they can refine prompts, data inputs, issue taxonomies, and validation steps over time. That makes the process better on the next deal, not only the current one.
Laggards keep paying the same process tax
When a firm does not operationalize AI, it continues to absorb the full burden of manual screening, fragmented review, and repeated synthesis. That does not always create immediate failure. It does create a steadily widening execution disadvantage.
Expectations shift internally
Once some teams demonstrate that a workflow can be faster and more evidence-linked, the rest of the organization recalibrates what "normal" looks like. Firms that do not create that internal benchmark often underestimate how much they are still losing to process friction.
What a Smart Adoption Path Looks Like
The most pragmatic path is usually narrow, measured, and cumulative.
1. Pick one workflow with obvious friction
Choose a use case where the team already knows the pain. That could be target screening, document review, or issue synthesis.
2. Define the validation rule before rollout
Who checks the output? What evidence must be visible? What counts as a usable result? These rules should exist before adoption scales.
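Written down, that rule can be as literal as a checklist the workflow enforces. A minimal sketch, assuming output records shaped like the one earlier in this piece; the reviewer names and the "usable result" threshold are arbitrary placeholders:

```python
def passes_validation(output: dict, named_reviewers: set[str]) -> tuple[bool, list[str]]:
    """Check an AI-assisted output against a pre-agreed validation rule.

    The rule encoded here is one illustrative possibility: a named
    reviewer, visible evidence, and a non-trivial result.
    """
    failures = []
    if output.get("reviewed_by") not in named_reviewers:
        failures.append("no named reviewer has signed off")
    if not output.get("evidence"):
        failures.append("no source evidence attached")
    if len(output.get("summary", "")) < 50:  # "usable result" bar is a team choice
        failures.append("output too thin to be usable")
    return (not failures, failures)

ok, reasons = passes_validation(
    {"reviewed_by": "diligence_lead", "evidence": ["spa_draft_v4.docx"], "summary": "x" * 80},
    named_reviewers={"diligence_lead", "deal_counsel"},
)
print(ok, reasons)  # True, []
```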
3. Measure workflow results directly
Track review time, issue quality, false positives, cycle-time reduction, or other process metrics that matter to the team.
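Even simple arithmetic over a few deals is enough to see whether the workflow actually changed. Every figure below is a placeholder, not a reported result:

```python
# Per-deal workflow measurements; all numbers are placeholders.
deals = [
    {"deal": "A", "review_hours_before": 120, "review_hours_after": 80,
     "flags_raised": 40, "flags_confirmed": 30},
    {"deal": "B", "review_hours_before": 150, "review_hours_after": 95,
     "flags_raised": 55, "flags_confirmed": 38},
]

for d in deals:
    time_saved = 1 - d["review_hours_after"] / d["review_hours_before"]
    false_positive_rate = 1 - d["flags_confirmed"] / d["flags_raised"]  # issue-quality proxy
    print(f"Deal {d['deal']}: review time down {time_saved:.0%}, "
          f"false-positive rate {false_positive_rate:.0%}")
```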
4. Expand only after the first workflow is real
This is where many programs fail. They expand after a promising demo instead of after a stable operating change.
5. Keep governance attached to speed
Speed gains are only durable if security, confidentiality, and auditability stay in the design. Otherwise the firm eventually slows itself down again through mistrust and rework.
What the Data Supports and What It Does Not
The current evidence does support the conclusion that AI use in M&A is broadening, that workflow integration is increasing, and that firms using gen AI in M&A are seeing measurable benefits in cost and cycle time [Deloitte, "2025 GenAI in M&A Study," 2025]; [McKinsey & Company, "Gen AI in M&A: From theory to practice to high performance," January 2026].
What the evidence does not support is lazy overstatement. Not every tool works. Not every firm is mature. Not every use case is proven equally well. The market is moving quickly, but the advantage belongs to firms that treat adoption as an operating discipline rather than a branding exercise.
Where Sorai Fits
Sorai is built for the workflows where adoption becomes meaningful: evidence-heavy, multi-workstream deal processes that need more continuity between raw data, issue ownership, and senior review. In that context, AI is most valuable when it reduces translation friction and preserves context rather than simply generating faster summaries.
The Bottom Line
AI adoption in M&A is no longer mainly a story about who has access to the tools. It is a story about who has redesigned enough of the workflow to benefit from them. Leaders operationalize narrow high-friction use cases, preserve the evidence trail, and measure real process outcomes. Laggards experiment without turning those experiments into dependable execution capability.