Neural Ratio Estimation (NRE)¶
Introduction¶
As we have seen, the output of prior + simulator is the array of pairs

$$\{(\theta_i, x_i)\}_{i=1}^{N}, \qquad \theta_i \sim p(\theta), \quad x_i \sim p(x \mid \theta_i),$$

i.e. samples from the joint distribution $p(\theta, x)$. We now consider the shuffled pairs

$$\{(\theta_i, x_{\pi(i)})\}_{i=1}^{N},$$

obtained by permuting the simulations $x$ within the batch with a random permutation $\pi$; these are samples from the product of marginals $p(\theta)\,p(x)$.
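Below is a minimal sketch of how such joint and shuffled pairs could be built. The Gaussian prior and simulator are illustrative assumptions, not part of the text above.

```python
# Sketch: joint pairs (theta_i, x_i) vs. shuffled pairs (theta_i, x_j).
# The prior and simulator here are toy assumptions for illustration only.
import torch

def prior(n):
    return torch.randn(n, 1)                      # theta ~ N(0, 1)

def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)  # x ~ N(theta, 0.1^2)

n = 1024
theta = prior(n)
x = simulator(theta)

# Joint pairs (theta_i, x_i) ~ p(theta, x): keep the original pairing.
joint_pairs = torch.cat([theta, x], dim=1)

# Shuffled pairs (theta_i, x_j) ~ p(theta) p(x): permute x within the batch.
perm = torch.randperm(n)
shuffled_pairs = torch.cat([theta, x[perm]], dim=1)
```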
The idea of NRE [1] is to train a classifier to learn the ratio

$$r(\theta, x) = \frac{p(\theta, x)}{p(\theta)\,p(x)},$$

which is equal to the likelihood-to-evidence ratio $p(x \mid \theta)/p(x)$. The application of Bayes' theorem makes the connection between this ratio and the posterior:

$$r(\theta, x) = \frac{p(x \mid \theta)}{p(x)} = \frac{p(\theta \mid x)}{p(\theta)}.$$

In other words,

$$p(\theta \mid x) = r(\theta, x)\, p(\theta).$$

More specifically, the binary classifier $d_\phi(\theta, x) \in (0, 1)$ is trained to distinguish joint pairs $(\theta_i, x_i)$ (label $y = 1$) from shuffled pairs $(\theta_i, x_{\pi(i)})$ (label $y = 0$). The optimal classifier satisfies

$$d^*(\theta, x) = \frac{p(\theta, x)}{p(\theta, x) + p(\theta)\,p(x)} = \sigma\!\left(\log r(\theta, x)\right),$$

where we used the sigmoid function $\sigma(z) = 1/(1 + e^{-z})$; the log-ratio is therefore recovered from the classifier logits, $\log r(\theta, x) = \sigma^{-1}\!\left(d^*(\theta, x)\right)$.

The classifier learns the parameters $\phi$ by minimizing the binary cross-entropy loss

$$\mathcal{L}(\phi) = -\,\mathbb{E}_{p(\theta, x)}\!\left[\log d_\phi(\theta, x)\right] - \mathbb{E}_{p(\theta)\,p(x)}\!\left[\log\!\left(1 - d_\phi(\theta, x)\right)\right],$$

whose minimizer is the optimal classifier $d^*$ above.
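The following sketch continues the toy example above (it reuses `joint_pairs` and `shuffled_pairs`) and shows one way to train such a classifier with the binary cross-entropy loss and read off $\log r(\theta, x)$ from its logits. The network size, optimizer settings, and number of epochs are illustrative choices, not prescribed by the text.

```python
# Sketch: train a classifier on joint vs. shuffled pairs, then use its logit
# as an estimate of log r(theta, x). Architecture and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),              # raw logit, i.e. an estimate of log r(theta, x)
)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for epoch in range(200):
    optimizer.zero_grad()
    logits_joint = classifier(joint_pairs)        # label 1: (theta_i, x_i)
    logits_shuffled = classifier(shuffled_pairs)  # label 0: (theta_i, x_j)
    loss = bce(logits_joint, torch.ones_like(logits_joint)) \
         + bce(logits_shuffled, torch.zeros_like(logits_shuffled))
    loss.backward()
    optimizer.step()

# After training, the logit approximates log r(theta, x), so the unnormalized
# posterior at an observation x_o is r(theta, x_o) * p(theta).
x_o = torch.tensor([[0.5]])
theta_grid = torch.linspace(-3, 3, 200).unsqueeze(1)
pairs = torch.cat([theta_grid, x_o.expand_as(theta_grid)], dim=1)
log_r = classifier(pairs).squeeze(1)
```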
References¶
[1]: Hermans, Joeri, Volodimir Begy, and Gilles Louppe. "Likelihood-free MCMC with amortized approximate ratio estimators." International Conference on Machine Learning. PMLR, 2020.
[2]: Miller, Benjamin K., et al. "Truncated marginal neural ratio estimation." Advances in Neural Information Processing Systems 34 (2021): 129-143.
[3]: Anau Montel, Noemi, James Alvey, and Christoph Weniger. "Scalable inference with autoregressive neural ratio estimation." Monthly Notices of the Royal Astronomical Society 530.4 (2024): 4107-4124.