Reid Hoffman's version of the network moat argument goes like this: the company that accumulates the most useful structured knowledge about a domain, and makes that knowledge progressively harder to replicate, is the company that wins the domain. Not the company with the best product at launch. The company with the best institutional memory at year three.
Most analytics tools are built to be replaced. The knowledge lives in the consultant's head, in the spreadsheet the intern built, in the slide deck from the agency review. When the engagement ends, the knowledge walks out the door. The next engagement starts from scratch.
The priors bank is a different bet. It is an argument that measurement knowledge compounds, that each engagement makes the next one more accurate, and that the lab that builds the compounding infrastructure owns the moat.
Here is how it works, and why the privacy architecture is the part that matters most.
What a prior actually does
A Bayesian model does not just fit your data. It combines your data with a prior belief about the parameters. The prior says: before we look at this specific dataset, here is what we expect the carryover decay for television to look like, here is the plausible range for paid social saturation, here is where Meta contribution typically sits for a brand of this type in this category.
If your prior is diffuse (broad range, high uncertainty), the data has to do all the work. A tight prior from a well-informed source means the model converges faster, behaves sensibly on sparse data, and produces outputs that a practitioner can defend: "We expected television carryover to sit around 0.65 to 0.8 based on comparable brands in this category. Your data confirmed 0.71. That is not a surprise, and here is what it means for your planning window."
A diffuse prior produces a coefficient. A warm prior produces a finding.
The difference matters most when your dataset is short: a new brand, a new channel, a new market. Without a prior, the model has nothing to anchor on. With a warm prior from comparable engagements, the model starts with the benefit of accumulated experience rather than with statistical ignorance.
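To make the difference concrete, here is a minimal sketch using a Normal-Normal conjugate model. This is a deliberate simplification of a full MMM, and every number in it is illustrative, but the mechanics are the same: the identical eight weeks of data produce a much tighter posterior when the prior is warm.

```python
import numpy as np

def normal_posterior(prior_mean, prior_sd, data, obs_sd):
    """Conjugate Normal-Normal update: returns posterior mean and sd."""
    prior_prec = 1.0 / prior_sd**2        # precision carried by the prior
    data_prec = len(data) / obs_sd**2     # precision contributed by the data
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * np.mean(data)) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

rng = np.random.default_rng(7)
# Eight weeks of noisy observations of a TV carryover parameter near 0.71.
data = rng.normal(0.71, 0.15, size=8)

# Diffuse prior: the data has to do all the work.
diffuse = normal_posterior(prior_mean=0.5, prior_sd=10.0, data=data, obs_sd=0.15)
# Warm prior: comparable engagements put carryover around 0.65 to 0.8.
warm = normal_posterior(prior_mean=0.72, prior_sd=0.04, data=data, obs_sd=0.15)

print(f"diffuse prior -> {diffuse[0]:.2f} +/- {diffuse[1]:.3f}")
print(f"warm prior    -> {warm[0]:.2f} +/- {warm[1]:.3f}")
```

Same data, same model. The warm start simply begins with precision the diffuse start has to earn from scratch.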
The compounding structure
Here is the mechanism: each client engagement, at the close of the analysis cycle, contributes de-identified posterior distributions to a shared library. Not raw data. Not campaign details. Not spend figures. Posteriors: the probability distributions over model parameters that the model has learned from the data.
The library organises these by category, channel, market, and seasonality context. A brand running a television-heavy FMCG campaign in Australia gets priors derived from posterior contributions by comparable brands. A DTC brand leaning on paid social gets priors shaped by DTC engagements at similar scale.
The next brand to run a model in that category starts with a richer prior than the one before. The model that ran last year was the prior-setter for the model running now. The model running now is the prior-setter for the one running next quarter.
This is not a metaphor. It is a structural property of Bayesian inference: the posterior from one analysis is, formally, the correct prior for the next analysis of the same quantity, and the best available starting point for a comparable one. The question is whether you have a system for capturing, organising, and retrieving those posteriors, or whether they are discarded when the engagement closes.
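The chaining can be shown directly, reusing the toy conjugate model from above (all values illustrative): updating engagement by engagement, with each posterior feeding in as the next prior, lands on exactly the same distribution as pooling all the data at once.

```python
import numpy as np

def update(mean, sd, data, obs_sd):
    """One conjugate Normal update step: prior (mean, sd) -> posterior (mean, sd)."""
    prec = 1.0 / sd**2
    d_prec = len(data) / obs_sd**2
    post_prec = prec + d_prec
    post_mean = (prec * mean + d_prec * np.mean(data)) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

rng = np.random.default_rng(3)
engagements = [rng.normal(0.70, 0.12, size=6) for _ in range(4)]

# Chain: each engagement's posterior becomes the prior for the next.
mean, sd = 0.5, 10.0                  # start diffuse
for data in engagements:
    mean, sd = update(mean, sd, data, obs_sd=0.12)
print(f"chained posterior: {mean:.3f} +/- {sd:.3f}")

# Pooled: one update on all the data at once gives the identical answer.
pooled = np.concatenate(engagements)
print("pooled posterior:  {:.3f} +/- {:.3f}".format(*update(0.5, 10.0, pooled, 0.12)))
```

The two printouts match to every decimal place. That is the compounding property the library exists to exploit.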
The privacy architecture (why k=5 matters)
A shared priors library raises an obvious question: what stops it from leaking client information?
The answer is a layered privacy architecture: a contributor threshold, differential privacy noise, and explicit consent. The implementation detail that matters most is the k=5 floor.
Before any posterior contribution enters the shared library, it must pass a contributor threshold. A minimum of five independent engagements must have contributed to a given cell in the library before that cell becomes available as a prior for new analyses. A posterior from a single engagement is held in isolation until four comparable engagements have contributed to the same category/channel/context cell.
The k=5 floor is what blocks reverse-engineering. If a shared prior is derived from a single engagement, careful analysis of that prior can reconstruct information about the engagement behind it. Once five or more engagements have contributed, no feature of the aggregate pins down any single contributor. The prior is a population property, not an individual property.
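A sketch of how such a gate might look. The cell keys, helper names, and widening rule here are illustrative, not Acera Labs' implementation: contributions accumulate silently, and the cell releases nothing until the fifth engagement lands.

```python
from collections import defaultdict
from statistics import mean, pstdev

K_FLOOR = 5  # minimum independent engagements before a cell is released

# (category, channel, market) -> list of contributed (mean, sd) posterior summaries
library = defaultdict(list)

def contribute(cell, post_mean, post_sd):
    """Hold a de-identified posterior summary in its cell. Nothing is released here."""
    library[cell].append((post_mean, post_sd))

def prior_for(cell):
    """Release an aggregate prior only once the cell clears the k=5 floor."""
    contributions = library[cell]
    if len(contributions) < K_FLOOR:
        return None  # caller falls back to a diffuse prior
    means = [m for m, _ in contributions]
    sds = [s for _, s in contributions]
    # Centre on the population mean; keep the width at least as wide as the
    # between-engagement spread, so the prior stays a population property.
    return mean(means), max(mean(sds), pstdev(means))

cell = ("fmcg", "tv", "au")
for m, s in [(0.68, 0.05), (0.74, 0.06), (0.71, 0.04), (0.66, 0.05)]:
    contribute(cell, m, s)
print(prior_for(cell))   # None: only four contributors, the cell stays sealed
contribute(cell, 0.73, 0.05)
print(prior_for(cell))   # released once the fifth engagement lands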
The second layer is differential privacy noise. Before posteriors are aggregated into the shared library, calibrated noise is added to the distributions. The noise is sized relative to the sensitivity of the parameter: parameters with tighter natural ranges receive less noise; parameters that vary widely across engagements receive more. The calibration preserves the statistical signal while ensuring that the contribution of any individual engagement cannot be distinguished from the aggregate.
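A sketch of the noise layer using the standard Laplace mechanism. The sensitivity and epsilon values below are placeholders; a production system would derive sensitivity from the aggregation rule rather than hard-code it.

```python
import numpy as np

rng = np.random.default_rng(11)

def privatise(value, sensitivity, epsilon):
    """Laplace mechanism: noise scaled to how much one engagement can move the value."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

# Carryover sits in a tight natural range, so one engagement moves the
# cell aggregate very little and the calibrated noise is small.
carryover = privatise(0.71, sensitivity=0.03, epsilon=1.0)
# Saturation parameters vary widely across engagements: higher sensitivity,
# proportionally more noise.
saturation = privatise(2.4, sensitivity=0.6, epsilon=1.0)

print(f"released carryover aggregate:  {carryover:.3f}")
print(f"released saturation aggregate: {saturation:.3f}")
```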
The third layer is consent: contribution is on by default, because contribution is what makes the library useful and the default should reflect the collective benefit. But opting out is one setting, no penalty, no delay. A client who prefers not to contribute still benefits from the library as a recipient of warm priors; they simply do not contribute posteriors back. The library grows more slowly without them, which is an honest trade-off that clients can make with full information.
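As a sketch, the consent surface can be as small as one flag. The names here are illustrative, not a real settings schema:

```python
from dataclasses import dataclass

@dataclass
class ContributionSettings:
    """Per-client consent surface: contribution is on by default, opt-out is one flag."""
    contribute_posteriors: bool = True   # default-on: the collective-benefit default
    receive_warm_priors: bool = True     # opt-outs remain recipients of the library

# Opting out is one setting, no penalty, no delay.
client = ContributionSettings(contribute_posteriors=False)
assert client.receive_warm_priors      # still starts warm as a recipient
```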
Why warm starts change the economics
The cost argument for warm starts is not subtle.
A cold-start MMM on a new brand or new market typically requires three to six months of data before the model is producing reliable outputs. During that period, the practitioner is managing a model that is still learning, producing wide intervals, and generating recommendations the client cannot act on with confidence. The practitioner's time is spent stabilising the model rather than generating insight.
A warm-start model, with strong priors from comparable engagements, can produce actionable outputs in weeks, not months. The convergence is faster. The intervals are tighter. The recommendations are defensible earlier in the engagement cycle.
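The arithmetic behind that claim, in the same toy conjugate model as earlier (one observation per week; every width is illustrative): the weeks of data needed to reach a target interval width shrink sharply when the prior already carries precision.

```python
import numpy as np

def weeks_to_target(prior_sd, obs_sd, target_sd):
    """Weeks of data (one observation per week) until the posterior sd hits target."""
    needed_prec = 1.0 / target_sd**2 - 1.0 / prior_sd**2  # precision the data must supply
    return int(np.ceil(max(needed_prec, 0.0) * obs_sd**2))

obs_sd, target = 0.15, 0.035
print("diffuse start:", weeks_to_target(prior_sd=10.0, obs_sd=obs_sd, target_sd=target), "weeks")  # 19
print("warm start:   ", weeks_to_target(prior_sd=0.04, obs_sd=obs_sd, target_sd=target), "weeks")  # 5
```

In this toy setup the diffuse start needs roughly four and a half months of data to hit the same interval width the warm start reaches in five weeks. The real gap depends on the model and the data, but the direction is structural.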
For a client, this means the measurement practice is paying back sooner. For the lab running the measurement practice, it means the practitioner's time is spent on interpretation and recommendation rather than on model stabilisation. The economics of measurement change when the starting point is informed rather than diffuse.
The moat in practice
Hoffman's argument applied to measurement looks like this: the lab that has run the most well-structured engagements in a category has the best priors for that category. The best priors produce the fastest convergence and the most defensible outputs. Clients get better results sooner. The lab runs more engagements. The priors get better. The cycle compounds.
This is not replicable by a competitor entering the market today, even with superior technology. The technology is not the moat. The accumulated posteriors are the moat. Rebuilding a prior library from scratch requires running engagements. Running engagements takes time. Time is what the incumbent has and the challenger does not.
The priors bank is not a feature. It is the structural compounding mechanism of a measurement practice that gets harder to compete with every month it operates.
What this means if you are a client
If you are evaluating a measurement partner, the question to ask is not "what is your modelling approach?" The modelling approach is table stakes. The question is "what do your priors look like for my category, and how were they built?"
If the answer is "we will build priors from your data," you are paying for cold-start convergence. If the answer is "we have contributed posteriors from comparable engagements in your category, here is the k-coverage, here is the opt-out documentation," you are starting warm.
The difference shows up in the first deliverable. Tight intervals, specific numbers, defensible recommendations from week four rather than month four. That is what a priors bank produces when it is built correctly and maintained with privacy architecture that earns trust.
Acera Labs publishes the methodology behind its priors bank because the methodology is the evidence. Vague claims about "AI-assisted modelling" and "proprietary data" are not. If the Sift newsletter on measurement infrastructure interests you, it is where the work is documented in detail.
The Sift newsletter covers the infrastructure layer of modern marketing measurement: priors, privacy, data engineering, model governance. Subscribe below.