Benchmark Report
The Sorai Diligence Index 2026 is a benchmark report for investors, advisors, and legal teams that need a cleaner view of what is slowing diligence down, where AI is actually being adopted, and which operating disciplines matter most when deals are under time pressure.
Mar 28, 2026 · 9 min read · Sorai Research · Diligence Workflow & Benchmarking · Updated Mar 28, 2026
Average sign-to-close time
191 days
BCG reported an average of 191 days from announcement to close for deals above $2 billion in its 2024 M&A report.
Deals missing announced close timing
~40%
BCG said about 40% of transactions took longer to close than announced at signing.
Organizations using GenAI in M&A workflows
86%
Deloitte reported that 86% of surveyed organizations have integrated GenAI into their M&A workflows.
Security as the leading GenAI concern
67%
Deloitte identified data security as the top concern for GenAI adoption in M&A at 67% of surveyed organizations.
Quick answer
The Sorai Diligence Index 2026 is a benchmark framework that combines named public market research with Sorai's five-part diligence operating model. It is designed to give teams citation-ready language on timelines, bottlenecks, and AI adoption without pretending that generic automation alone fixes the process.
The Sorai Diligence Index 2026 is built to answer a simple operating question: where does diligence lose time and confidence before it reaches the investment committee, lender, or final negotiating table?
Rather than serving as a vanity trend report, the index turns market signals into a practical benchmark. It combines public research with a process scorecard so buyers and advisors can compare their current workflow against a more decision-grade operating model.
The headline timeline problem has not gone away. Boston Consulting Group reported that deals above $2 billion averaged 191 days from announcement to close in 2024, and roughly 40% of announced deals missed their original close timing.
That matters because long diligence cycles are rarely just scheduling problems. They usually reflect friction across document intake, specialist review, open-issue convergence, and late-stage synthesis for decision-makers.
The index treats diligence slowdown as an operating-model issue first. In practice, teams lose time because the work lives in too many places and converges too late.
That is why the same bottlenecks show up repeatedly across mid-market and large-cap processes, regardless of whether the team has strong technical talent.
AI adoption in M&A is no longer hypothetical. Deloitte reported that 86% of surveyed organizations already use GenAI somewhere in their M&A workflow, which means the market has moved from experimentation to operating-model choices.
The harder question is not whether firms use AI. It is whether they use it inside a process that preserves evidence, reviewer judgment, and defensibility. Deloitte's security signal and McKinsey's repeated governance warnings both point to the same conclusion: speed without control does not create decision-grade diligence.
To make the report usable, Sorai scores diligence maturity across five equal dimensions. Teams do not need a perfect score to improve, but they do need clarity on where delay and weak confidence actually originate.
The most common pattern is uneven maturity: one workstream becomes highly efficient while the shared evidence chain and final synthesis remain fragile. The index is designed to surface that mismatch.
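The scorecard logic described above can be sketched as a simple equal-weight calculation that also flags the weakest dimension, which is where the "uneven maturity" mismatch would surface. The dimension names and the 0-5 scale below are illustrative assumptions, not Sorai's published rubric.

```python
# Illustrative sketch of an equal-weight, five-dimension diligence
# maturity score. Dimension names and the 0-5 scale are assumptions
# for demonstration, not Sorai's published rubric.

DIMENSIONS = [
    "document_intake",
    "specialist_review",
    "evidence_chain",
    "open_issue_convergence",
    "final_synthesis",
]

def maturity_score(scores: dict) -> tuple:
    """Return (overall score, weakest dimension) on a 0-5 rubric."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    # Five equal dimensions: a plain average, no weighting.
    overall = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    # The weakest link is the dimension with the lowest score.
    weakest = min(DIMENSIONS, key=lambda d: scores[d])
    return overall, weakest

# Example: strong specialist review but a fragile evidence chain,
# the uneven-maturity pattern the index is designed to surface.
team = {
    "document_intake": 4,
    "specialist_review": 5,
    "evidence_chain": 2,
    "open_issue_convergence": 3,
    "final_synthesis": 3,
}
overall, weakest = maturity_score(team)
print(overall, weakest)  # 3.4 evidence_chain
```

The equal weighting mirrors the report's framing that no single workstream dominates: a team scoring 5 on specialist review still benchmarks at 3.4 overall when its shared evidence chain sits at 2.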
The practical use of the index is straightforward: benchmark the current process, identify the weak link, and fix the operating discipline before adding more complexity.
For marketing and domain-authority work, the report also gives Sorai a public benchmark asset that can be cited by legal press, private markets writers, advisors, and partner firms without forcing them to cite a product page directly.
Assets & next steps
One-page benchmark summary for outreach, resource directories, and partner follow-up.
Use the checklist as the supporting operational asset behind benchmark or guest-article outreach.
The companion article that expands the operational causes of timeline drag in detail.
Use this when a fund, law firm, or deal platform wants the benchmark translated into a live workflow review.
Sources cited
Used for the 191-day sign-to-close benchmark and the share of deals that missed announced timing.
Used for the broader point that focused diligence can surface economically meaningful findings on a compressed timeline.
Used for adoption and concern signals cited in the report, including 86% workflow integration and 67% data-security concern.
Used for the argument that GenAI can accelerate diligence and negotiation only when wrapped in stronger process design and guardrails.
Used for the distinction between fast AI output and investment-grade completeness or realism.
Used for the framing that value capture begins with faster, deeper, and more focused diligence rather than disconnected specialist work.
Author
Research program for diligence workflow and benchmark analysis
The research function synthesizes named market studies into benchmark frameworks and source-linked research pages that can be used in evaluation, outreach, and press conversations.
Related pages
See how Sorai turns the benchmark framework into a live diligence operating model.
Map the benchmark back to the five-phase process teams actually run.
Use the adoption section to compare workflow software, VDR add-ons, and generic AI.
Connect the benchmark to the earlier decision point where teams can reduce wasted work.
FAQ
What is the Sorai Diligence Index 2026?
It is Sorai's benchmark report for M&A diligence timelines, operating bottlenecks, and AI adoption. The market facts come from named public sources, while the scorecard and operating framework are Sorai's synthesis.
Is it based on proprietary deal data?
No. It is a benchmark synthesis built from public 2024-2026 market research plus Sorai's operating-model framework for evaluating diligence maturity.
How should teams use it?
Use it to benchmark current workflow maturity, support partner or committee discussions about process redesign, and provide citation-ready context in outreach or bylined articles.
Can the report be cited?
Yes. Sorai encourages citation with a direct link to the canonical report URL so readers can review the benchmark framework and named source list in full context.