Modeling FAQ

Q: What is the “spend level” in the priors?

A: You can think of spend level as the “typical” or reference level of spend. Combined with ROI/CPA priors, it creates a prior on the effect channel spend has on the KPI. Increasing the spend level allows a channel to saturate at higher levels of spend; decreasing it lets the channel saturate sooner. Tweaking this parameter is useful if you see a channel (or many channels) getting too much credit.
Spend level is not the same as kappa (the saturation parameter), though it does influence our prior on kappa; in practice, changing the spend level has minimal effect on how well the saturation parameter is recovered.
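
To make this concrete, below is a minimal sketch of a toy Hill-style saturation curve where the spend level is used as a rough guess for the saturation point. This is purely illustrative: the functional form, the way the spend level feeds into the kappa guess, and all of the numbers are assumptions, not Recast’s actual implementation.

```python
import numpy as np

def hill_saturation(spend, kappa, shape=1.0):
    """Toy Hill-style saturation curve: returns the saturated 'effective spend'.

    kappa is the half-saturation point. This is NOT Recast's exact functional
    form; it is a common shape used only to illustrate the idea.
    """
    return spend**shape / (spend**shape + kappa**shape)

# Hypothetical numbers for illustration only.
spend_grid = np.linspace(0, 200_000, 5)
low_spend_level, high_spend_level = 20_000, 80_000

# Treat the spend level as scaling the prior guess for kappa:
# a higher reference spend pushes the saturation point further out.
for spend_level in (low_spend_level, high_spend_level):
    kappa_guess = spend_level  # illustrative assumption, not Recast's rule
    curve = hill_saturation(spend_grid, kappa=kappa_guess)
    print(f"spend level {spend_level:>7,}:", np.round(curve, 2))
```

With the higher spend level, the curve’s half-saturation point moves out, so the channel keeps earning incremental credit at higher spend instead of flattening early.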

Q: How can I improve the results in the parameter recovery dashboard?

A: Broadly speaking, the biggest factors affecting parameter recovery are:

  1. Number of channels: more channels = harder to recover

  2. Correlated spend patterns: e.g., if channels A and B have similar spend patterns, it's harder to tell which one drove the effect

  3. Long time shifts: it's harder to detect very long-sustaining effects

  4. Signal imbalance: If most KPI signal is driven by one channel or the intercept, there’s less leftover signal to recover other effects
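
Of these, correlated spend patterns (factor 2) are often the easiest to check directly before fitting. Below is a minimal sketch, assuming the spend data lives in a pandas DataFrame with one column per channel; the column names and the correlation threshold are hypothetical.

```python
import pandas as pd

def flag_correlated_channels(spend: pd.DataFrame, threshold: float = 0.8):
    """Return channel pairs whose spend patterns are highly correlated.

    Highly correlated pairs are the ones the model will struggle to
    separate, so they are candidates for grouping or for extra scrutiny
    in the parameter recovery dashboard.
    """
    corr = spend.corr()
    channels = corr.columns
    pairs = []
    for i, a in enumerate(channels):
        for b in channels[i + 1:]:
            if abs(corr.loc[a, b]) >= threshold:
                pairs.append((a, b, round(corr.loc[a, b], 2)))
    return pairs

# Hypothetical example data: channels A and B move together.
spend = pd.DataFrame({
    "channel_a": [10, 20, 30, 40, 50, 60],
    "channel_b": [12, 22, 29, 41, 52, 58],
    "channel_c": [50, 10, 40, 20, 60, 30],
})
print(flag_correlated_channels(spend))
```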

Q: When should we change the priors on ROIs or on the Intercept?

A: There’s no easy answer here, but one indication that your priors might need adjusting is prior-data conflict: the model really “wants” a parameter value far away from the prior you set. This often shows up when the intercept prior is set too low or too high, and the model ends up with intercept estimates that are “pressing against” the prior bounds.
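
One rough way to quantify “pressing against” the prior is to check how much of the posterior mass falls in the tails of the prior. A minimal sketch follows, assuming you have draws from both distributions; the normal prior on the intercept and all the numbers are illustrative, not Recast’s defaults.

```python
import numpy as np

def prior_tail_mass(posterior_draws, prior_draws, tail=0.05):
    """Fraction of posterior draws falling beyond the prior's tail quantiles.

    Values far above `tail` suggest the data is pushing the parameter toward
    the edge of the prior, i.e. the prior may be too tight or centered in
    the wrong place.
    """
    lo, hi = np.quantile(prior_draws, [tail, 1 - tail])
    posterior_draws = np.asarray(posterior_draws)
    outside = (posterior_draws < lo) | (posterior_draws > hi)
    return outside.mean()

# Illustrative numbers only: a prior centered at 100 and a posterior
# that wants the intercept to be much higher.
rng = np.random.default_rng(0)
prior = rng.normal(100, 10, size=10_000)
posterior = rng.normal(125, 5, size=4_000)
print(prior_tail_mass(posterior, prior))  # close to 1.0 -> prior-data conflict
```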

Q: Should we exclude small / low-spend channels from the model?

A: We don’t recommend removing low-spend channels; instead, lower your expectations for how well their parameters can be recovered. If the model can’t learn more than what’s embedded in the prior, that is fine: as long as the priors are well-grounded, it will assign those channels reasonable parameters rather than ignoring them.
These channels may also be good candidates for grouping, so we can pool information across similar channels.
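
To make the idea of pooling concrete: a partially pooled estimate shrinks each channel’s noisy individual estimate toward a group mean, with noisier (typically lower-spend) channels shrunk harder. The sketch below uses classic precision-weighted shrinkage with made-up ROI numbers; it illustrates the general idea, not Recast’s actual hierarchical model.

```python
import numpy as np

def partial_pool(estimates, std_errs, group_sd):
    """Shrink noisy channel-level estimates toward a group mean.

    Precision-weighted shrinkage: noisier channels (large std_err) get a
    smaller weight on their own estimate and are pulled more strongly
    toward the group mean.
    """
    estimates = np.asarray(estimates, float)
    std_errs = np.asarray(std_errs, float)
    group_mean = np.average(estimates, weights=1 / std_errs**2)
    weight = group_sd**2 / (group_sd**2 + std_errs**2)
    return weight * estimates + (1 - weight) * group_mean

# Hypothetical ROI estimates: the low-spend channel (last) is very noisy,
# so its pooled estimate moves most of the way toward the group mean.
print(partial_pool(estimates=[2.0, 1.5, 4.0], std_errs=[0.2, 0.3, 2.0], group_sd=0.5))
```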

Q: Do I need to change priors when I’m refreshing the model?

A: Most companies and operators using Recast refresh their models on a weekly cadence. That is, every 7 days (or so) they load fresh data into the platform and run a standard “model run” that re-estimates all of the parameters in the model, and then deploys those results to the reporting dashboard.

The goal when operating Recast is for these weekly refreshes to be fully automated or very close to it. When things are working well, operators should be able to run new data through the Recast model without needing to tweak any priors or model configuration settings.

As we discuss in the docs on building a good model, the idea is to invest a lot of work upfront to build a good, robust media mix model that can then be refreshed regularly without issues. So during a standard weekly (or monthly, or whenever) refresh, you should not need to edit or update anything about the priors or the model configuration. You should simply be uploading new data and launching the run.

As part of the model run, the Recast platform will automatically check the 30-day holdout accuracy as well as week-over-week stability. These metrics will be available in the run analysis dashboard, which you can review before deploying the results for customers to view.
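
The platform runs these checks for you, but for intuition, a 30-day holdout accuracy check amounts to comparing out-of-sample predictions for the most recent 30 days against the actual KPI. The sketch below assumes MAPE as the accuracy metric; the metrics Recast actually reports may differ.

```python
import numpy as np

def holdout_mape(actual_kpi, predicted_kpi):
    """Mean absolute percentage error over a holdout window."""
    actual = np.asarray(actual_kpi, float)
    predicted = np.asarray(predicted_kpi, float)
    return np.mean(np.abs((actual - predicted) / actual))

# Illustrative only: last 30 days of actual KPI vs. out-of-sample predictions.
rng = np.random.default_rng(1)
actual = rng.uniform(900, 1100, size=30)
predicted = actual * rng.normal(1.0, 0.05, size=30)  # ~5% prediction noise
print(f"30-day holdout MAPE: {holdout_mape(actual, predicted):.1%}")
```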

In general, you shouldn’t expect to need to re-run other types of model diagnostic runs (stability loops, holdouts, parameter recovery) unless and until you’re making major changes to the model structure or the priors, which ideally will happen very infrequently: less than once a year on average.
