Amortized inference

“This raises the problem of amortized inference: how
to flexibly reuse inferences so as to answer a variety of related queries”

What does this mean exactly? And why does a reward function equal to the prior × likelihood make it so?
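A minimal numeric sketch of my current understanding (my own toy example, not from the paper): if the reward is set to prior(θ) × likelihood(data | θ), then that reward is exactly the unnormalized posterior, so a sampler trained to sample θ with probability proportional to the reward is sampling from the posterior — without ever computing the normalizing constant.

```python
import numpy as np

# Toy problem: infer a coin's bias theta on a discrete grid.
thetas = np.linspace(0.05, 0.95, 19)
prior = np.ones_like(thetas) / thetas.size          # uniform prior
heads, tails = 7, 3
likelihood = thetas**heads * (1 - thetas)**tails    # Bernoulli likelihood

# GFlowNet-style reward = prior x likelihood = unnormalized posterior.
reward = prior * likelihood

# A sampler trained to draw theta with probability reward/Z is therefore
# a posterior sampler; Z here is the evidence p(data), which the GFlowNet
# never has to compute explicitly.
Z = reward.sum()
posterior = reward / Z
```

The point is that `posterior` and `reward` differ only by the constant 1/Z, so "sample proportionally to the reward" and "sample from the posterior" are the same thing.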

“This makes GFlowNets amortized probabilistic inference machines that can be used both to sample latent variables or parameters and theories shared across examples”

This is a paraphrase from GPT4’s explanations:

Inference machines are simply trained ML models that make inferences. Probabilistic inference machines are those that output a probability distribution over possible answers. Amortized inference machines pay the cost of inference once, up front, during training, so that each new query afterwards can be answered cheaply.
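To make "amortized" concrete, here is a small sketch (my own toy example, with made-up names like `infer`): instead of running a separate optimization or MCMC chain per dataset, we pay a one-time training cost to fit a model mapping data to parameter estimates, then answer any new query with a single cheap function call.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task: infer the bias theta of a coin from 20 flips.
# Non-amortized inference would solve a fresh problem per dataset;
# amortized inference trains a reusable inference machine once.

# Simulate training pairs: (dataset summary -> true theta).
n_train, flips = 5000, 20
theta_true = rng.uniform(0.1, 0.9, size=n_train)
heads_frac = rng.binomial(flips, theta_true) / flips   # summary statistic

# One-time training cost: fit a linear model by least squares.
X = np.stack([np.ones(n_train), heads_frac], axis=1)
w, *_ = np.linalg.lstsq(X, theta_true, rcond=None)

def infer(new_heads_frac):
    # Amortized query: no per-dataset optimization, just one evaluation.
    return w[0] + w[1] * new_heads_frac

estimate = infer(0.7)   # instant answer for a brand-new dataset
```

The training cost is "amortized" over all the future queries the trained model will answer.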

GFlowNets as amortized inference learners [4,5,7,9 below], both from a Bayesian [7] and a variational inference [9] perspective

I don’t understand what this means. TODO: study
