I am reading…
“when the reward function R represents the product of a prior (over some random variable) times a likelihood (measuring how well that choice of value of the random variable fits some data), the GFlowNet will learn to sample from the corresponding Bayesian posterior”
https://milayb.notion.site/95434ef0e2d94c24aab90e69b30be9b3
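The quoted claim is easy to check numerically. A minimal sketch of my own (not from the tutorial), on a made-up three-point discrete space: normalizing the reward R(x) = prior(x) · likelihood(data | x) is exactly Bayes’ rule, so sampling proportionally to R is sampling from the posterior.

```python
# Sketch: reward = prior * likelihood  =>  normalized reward = posterior.
import numpy as np

x = np.array([0.1, 0.3, 0.5])           # three candidate values of the random variable
prior = np.array([0.5, 0.3, 0.2])       # P(x)
likelihood = np.array([0.2, 0.6, 0.9])  # P(data | x)

reward = prior * likelihood             # R(x), unnormalized
evidence = reward.sum()                 # P(data), the normalizing constant
posterior = reward / evidence           # P(x | data) by Bayes' rule

# A GFlowNet trained with reward R would learn to sample x with these probabilities.
print(posterior)
```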
In Bayesian statistics, why do we go through the trouble of “updating beliefs” when we can simply collect data and get P(E)?
My thought: this is a problem of interpretation. P(E) obtained through sampling might not generalize to the whole population, and we don’t interpret it as our “belief”. So we run Bayes’ theorem to compute the post-evidence belief we hold about the whole population.
My answer after more study: if I look at the prior as a pdf (probability density function), then I really have a guessed density for each value the random variable can take. That is by nature different from the evidence, which, in the left-handedness example, gives me one data point about the proportion that I can use to run Bayes’ rule and update my pdf into a posterior pdf. I don’t completely understand the normalization process yet (there is a sketch of it below), but what I know suffices for now.
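A small sketch of that update with made-up left-handedness numbers (say 13 left-handers observed in a sample of 100 people), on a grid of candidate proportions. The normalization step is just dividing by the integral of prior × likelihood, which is P(E), so the posterior integrates to 1.

```python
# Prior pdf -> posterior pdf on a grid, with made-up data: 13 left-handers in 100 people.
import numpy as np

theta = np.linspace(0.001, 0.999, 999)  # candidate proportions of left-handedness
prior = np.ones_like(theta)             # flat prior density over the proportion

k, n = 13, 100                          # hypothetical observed data
# Binomial likelihood of the data at each candidate proportion
likelihood = theta**k * (1 - theta)**(n - k)

unnormalized = prior * likelihood
# Normalization: divide by the integral of prior*likelihood over theta
# (approximated by a Riemann sum). That integral is the evidence P(E).
dtheta = theta[1] - theta[0]
posterior = unnormalized / (unnormalized.sum() * dtheta)

print(theta[np.argmax(posterior)])      # posterior mode, close to 13/100
```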
A follow-up question: how do Bayesian statistics and statistical tests relate? Both seem to seek to make inferences about populations from sample data.
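Not an answer, but to make the comparison concrete for myself: the same toy data (the 13/100 from above) can feed both a frequentist test and a Bayesian update. A sketch, assuming scipy is available:

```python
# Same made-up data (13 left-handers out of 100), looked at both ways.
from scipy.stats import binomtest, beta

k, n = 13, 100

# Frequentist: p-value of a binomial test of H0 "true proportion = 0.10"
print(binomtest(k, n, p=0.10).pvalue)

# Bayesian: under a flat Beta(1, 1) prior, the posterior is Beta(1 + k, 1 + n - k);
# report the posterior probability that the proportion exceeds 0.10
posterior = beta(1 + k, 1 + n - k)
print(1 - posterior.cdf(0.10))
```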
Answer by someone else: