
Prior probability


In Bayesian statistical inference, a prior probability distribution, often simply called the prior, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account. [1]
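As a minimal sketch of this update process (the two-hypothesis coin model and the numbers below are invented for illustration, not taken from the article), a prior is turned into a posterior by multiplying each hypothesis's prior probability by the likelihood of the observed evidence and renormalizing:

```python
# Illustration: updating a prior by evidence via Bayes' rule.
# Two hypotheses about a coin: fair (P(heads) = 0.5) or biased (P(heads) = 0.8).
# The 50/50 prior below is an assumption chosen for the example.
prior = {"fair": 0.5, "biased": 0.5}
heads_prob = {"fair": 0.5, "biased": 0.8}

# Observe one head: multiply prior by likelihood, then renormalize.
unnorm = {h: prior[h] * heads_prob[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}
```

Here the posterior probability of "biased" rises from the prior 0.5 to 0.4/0.65 ≈ 0.615, reflecting that a head is more probable under the biased hypothesis.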

54 relations: A priori probability, Admissible decision rule, Affine group, Algorithmic probability, Andrew Gelman, Annals of Statistics, Bayes' theorem, Bayesian network, Bayesian probability, Bernoulli distribution, Bernstein–von Mises theorem, Beta distribution, Coding theory, Conjugate prior, Cross entropy, Decision theory, Edwin Thompson Jaynes, Entropy (information theory), Event (probability theory), Expected value, Frequentist inference, Group action, Haar measure, Harold Jeffreys, Hyperparameter, Hyperprior, Inductive reasoning, Information theory, J. B. S. Haldane, Jeffreys prior, José-Miguel Bernardo, Journal of the Royal Statistical Society, Kullback–Leibler divergence, Latent variable, Lie group, Likelihood function, Marginal distribution, Minimum description length, Normal distribution, Observable variable, Parameter, Positive real numbers, Posterior probability, Principle of indifference, Principle of maximum entropy, Principle of transformation groups, Prior probability, Probability distribution, Regularization (mathematics), Solomonoff's theory of inductive inference, Statistical inference, Translation (geometry), Uniform distribution (continuous), Variance.

A priori probability

An a priori probability is a probability that is derived purely by deductive reasoning.


Admissible decision rule

In statistical decision theory, an admissible decision rule is a rule for making a decision such that there is not any other rule that is always "better" than it (or at least sometimes better and never worse), in the precise sense of "better" defined below.


Affine group

In mathematics, the affine group or general affine group of any affine space over a field K is the group of all invertible affine transformations from the space into itself.


Algorithmic probability

In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation.


Andrew Gelman

Andrew Gelman (born February 11, 1965) is an American statistician, professor of statistics and political science, and director of the Applied Statistics Center at Columbia University.


Annals of Statistics

The Annals of Statistics is a peer-reviewed statistics journal published by the Institute of Mathematical Statistics.


Bayes' theorem

In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule, also written as Bayes's theorem) describes the probability of an event, based on prior knowledge of conditions that might be related to the event.
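A numeric sketch of the theorem in Python (the sensitivity, false-positive rate, and prevalence figures are invented for illustration):

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Example: a diagnostic test with 99% sensitivity, a 5% false-positive
# rate, and 1% disease prevalence (the prior).
p_a = 0.01                 # prior P(disease)
p_b_given_a = 0.99         # P(positive | disease)
p_b_given_not_a = 0.05     # P(positive | no disease)

# Law of total probability gives P(positive).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b
```

Despite the accurate test, `p_a_given_b` is only about 0.167 because the low prior dominates; this is the classic base-rate effect.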


Bayesian network

A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG).


Bayesian probability

Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.


Bernoulli distribution

In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability p and the value 0 with probability q = 1 − p.
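The probability mass function is simple enough to state directly; a sketch in Python (the helper name is ours):

```python
def bernoulli_pmf(k, p):
    """P(X = k) for a Bernoulli(p) random variable; k must be 0 or 1.

    Returns p for k == 1 and q = 1 - p for k == 0."""
    if k not in (0, 1):
        raise ValueError("Bernoulli outcomes are 0 or 1")
    return p if k == 1 else 1 - p
```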


Bernstein–von Mises theorem

In Bayesian inference, the Bernstein–von Mises theorem provides the basis for the important result that the posterior distribution for unknown quantities in any problem is effectively asymptotically independent of the prior distribution (assuming it obeys Cromwell's rule) as the data sample grows large.


Beta distribution

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1], parametrized by two positive shape parameters, denoted by α and β, that appear as exponents of the random variable and control the shape of the distribution.


Coding theory

Coding theory is the study of the properties of codes and their respective fitness for specific applications.


Conjugate prior

In Bayesian probability theory, if the posterior distributions p(θ|x) are in the same probability distribution family as the prior probability distribution p(θ), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function.
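The Beta–Bernoulli pair is the standard example: a Beta(α, β) prior combined with Bernoulli observations yields a Beta posterior whose parameters are simply updated counts. A sketch (the function name is ours):

```python
def update_beta(alpha, beta, observations):
    """Conjugate update of a Beta(alpha, beta) prior with Bernoulli data.

    Each observed 1 increments alpha and each 0 increments beta, so the
    posterior is again a Beta distribution (conjugacy)."""
    for x in observations:
        if x == 1:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

# Uniform Beta(1, 1) prior, then 7 successes and 3 failures.
a, b = update_beta(1, 1, [1] * 7 + [0] * 3)
```

With no data the prior is returned unchanged; after the ten observations above the posterior is Beta(8, 4).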


Cross entropy

In information theory, the cross entropy between two probability distributions p and q over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme optimized for an "unnatural" probability distribution q is used, rather than the "true" distribution p. For distributions p and q over a given set it is defined as H(p, q) = H(p) + D_KL(p ∥ q), where H(p) is the entropy of p, and D_KL(p ∥ q) is the Kullback–Leibler divergence of q from p (also known as the relative entropy of p with respect to q; note the reversal of emphasis).
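A small Python sketch of the definition (log base 2, so the result is in bits; the function name is ours):

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i * log2(q_i); terms with p_i == 0 contribute 0."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.25, 0.75]
# Coding for the wrong distribution q costs at least H(p) = H(p, p) bits.
```

By Gibbs' inequality H(p, q) ≥ H(p), with equality exactly when q = p; the gap is the Kullback–Leibler divergence.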


Decision theory

Decision theory (or the theory of choice) is the study of the reasoning underlying an agent's choices.


Edwin Thompson Jaynes

Edwin Thompson Jaynes (July 5, 1922 – April 30, 1998) was the Wayman Crow Distinguished Professor of Physics at Washington University in St. Louis.


Entropy (information theory)

Information entropy is the average rate at which information is produced by a stochastic source of data.


Event (probability theory)

In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned.


Expected value

In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the experiment it represents.


Frequentist inference

Frequentist inference is a type of statistical inference that draws conclusions from sample data by emphasizing the frequency or proportion of the data.


Group action

In mathematics, an action of a group is a formal way of interpreting the manner in which the elements of the group correspond to transformations of some space in a way that preserves the structure of that space.


Haar measure

In mathematical analysis, the Haar measure assigns an "invariant volume" to subsets of locally compact topological groups, consequently defining an integral for functions on those groups.


Harold Jeffreys

Sir Harold Jeffreys, FRS (22 April 1891 – 18 March 1989) was a British mathematician, statistician, geophysicist, and astronomer.


Hyperparameter

In Bayesian statistics, a hyperparameter is a parameter of a prior distribution; the term is used to distinguish them from parameters of the model for the underlying system under analysis.


Hyperprior

In Bayesian statistics, a hyperprior is a prior distribution on a hyperparameter, that is, on a parameter of a prior distribution.


Inductive reasoning

Inductive reasoning (as opposed to ''deductive'' reasoning or ''abductive'' reasoning) is a method of reasoning in which the premises are viewed as supplying some evidence for the truth of the conclusion.


Information theory

Information theory studies the quantification, storage, and communication of information.


J. B. S. Haldane

John Burdon Sanderson Haldane (5 November 1892 – 1 December 1964) was an English scientist known for his work in the study of physiology, genetics, evolutionary biology, and in mathematics, where he made innovative contributions to the fields of statistics and biostatistics.


Jeffreys prior

In Bayesian probability, the Jeffreys prior, named after Sir Harold Jeffreys, is a non-informative (objective) prior distribution for a parameter space; it is proportional to the square root of the determinant of the Fisher information matrix, p(θ) ∝ √(det I(θ)). It has the key feature that it is invariant under reparameterization of the parameter vector θ.
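A standard concrete instance (not spelled out above): for a Bernoulli(θ) model the Fisher information is I(θ) = 1/(θ(1 − θ)), so the Jeffreys prior is proportional to θ^(−1/2)(1 − θ)^(−1/2), i.e. the Beta(1/2, 1/2) distribution. A sketch (the function name is ours):

```python
import math

def jeffreys_density_bernoulli(theta):
    """Unnormalized Jeffreys prior for a Bernoulli(theta) model:
    sqrt(I(theta)), with Fisher information I(theta) = 1/(theta*(1-theta)).
    Normalized, this is the Beta(1/2, 1/2) density."""
    return math.sqrt(1.0 / (theta * (1.0 - theta)))
```

The density is smallest at θ = 0.5 and grows toward the endpoints, reflecting how much a single observation can discriminate between nearby parameter values there.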


José-Miguel Bernardo

José-Miguel Bernardo (born 12 March 1950) is a Spanish mathematician and statistician.


Journal of the Royal Statistical Society

The Journal of the Royal Statistical Society is a peer-reviewed scientific journal of statistics.


Kullback–Leibler divergence

In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy) is a measure of how one probability distribution diverges from a second, expected probability distribution.
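A direct computation in Python (log base 2, so the result is in bits; the function name is ours), illustrating that the divergence is not symmetric in its arguments:

```python
import math

def kl(p, q):
    """D_KL(p || q) = sum_i p_i * log2(p_i / q_i); p_i == 0 terms contribute 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]
q = [0.5, 0.5]
forward = kl(p, q)   # D_KL(p || q)
reverse = kl(q, p)   # D_KL(q || p): generally a different value
```

D_KL is zero only when the two distributions coincide, and because `forward != reverse` it is a divergence rather than a metric.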


Latent variable

In statistics, latent variables (from Latin: present participle of lateo (“lie hidden”), as opposed to observable variables), are variables that are not directly observed but are rather inferred (through a mathematical model) from other variables that are observed (directly measured).


Lie group

In mathematics, a Lie group (pronounced "Lee") is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure.


Likelihood function

In frequentist inference, a likelihood function (often simply the likelihood) is a function of the parameters of a statistical model, given specific observed data.
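A sketch of a likelihood for Bernoulli data (function name and sample are illustrative): the data are held fixed and the function is evaluated at different parameter values.

```python
def bernoulli_likelihood(p, data):
    """L(p; data) = product over observations of p**x * (1-p)**(1-x).

    Viewed as a function of the parameter p with the data held fixed."""
    out = 1.0
    for x in data:
        out *= p if x == 1 else 1 - p
    return out

data = [1, 1, 0, 1]  # 3 successes, 1 failure
```

For this sample the likelihood peaks near p = 3/4, the sample proportion of successes.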


Marginal distribution

In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset.
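Marginalizing means summing the joint probabilities over the variables not in the subset; a sketch over a small discrete joint table (the table values are illustrative):

```python
# Joint distribution over (X, Y) as a dict; the values sum to 1.
joint = {
    (0, 0): 0.1, (0, 1): 0.3,
    (1, 0): 0.2, (1, 1): 0.4,
}

def marginal_x(joint):
    """Sum the joint probabilities over y to obtain P(X = x)."""
    out = {}
    for (x, y), p in joint.items():
        out[x] = out.get(x, 0.0) + p
    return out
```

Here the marginal of X is P(X=0) = 0.4 and P(X=1) = 0.6, and it necessarily sums to 1.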


Minimum description length

The minimum description length (MDL) principle is a formalization of Occam's razor in which the best hypothesis (a model and its parameters) for a given set of data is the one that leads to the best compression of the data.


Normal distribution

In probability theory, the normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a very common continuous probability distribution.


Observable variable

In statistics, observable variable or observable quantity (also manifest variables), as opposed to latent variable, is a variable that can be observed and directly measured.


Parameter

A parameter (from the Ancient Greek παρά, para: "beside", "subsidiary"; and μέτρον, metron: "measure"), generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc.


Positive real numbers

In mathematics, the set of positive real numbers, written ℝ&gt;0, is the subset of the real numbers consisting of those numbers greater than zero.


Posterior probability

In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence or background is taken into account.
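A grid-approximation sketch of computing a posterior (the uniform prior and the 6-heads-in-9-tosses data are invented for illustration): multiply prior by likelihood pointwise and renormalize.

```python
# Grid approximation of the posterior for a coin's head-probability theta,
# with a uniform prior and 6 heads observed in 9 tosses.
grid = [i / 100 for i in range(1, 100)]
prior = [1.0 for _ in grid]                    # uniform (flat) prior
like = [t**6 * (1 - t)**3 for t in grid]       # Bernoulli likelihood
unnorm = [pr * li for pr, li in zip(prior, like)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]
```

The posterior mass concentrates around θ = 6/9 ≈ 0.67, where the likelihood peaks, since the prior is flat.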


Principle of indifference

The principle of indifference (also called principle of insufficient reason) is a rule for assigning epistemic probabilities.


Principle of maximum entropy

The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge is the one with largest entropy, in the context of precisely stated prior data (such as a proposition that expresses testable information).


Principle of transformation groups

The principle of transformation groups is a rule for assigning epistemic probabilities in a statistical inference problem.



Probability distribution

In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.


Regularization (mathematics)

In mathematics, statistics, and computer science, particularly in the fields of machine learning and inverse problems, regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.


Solomonoff's theory of inductive inference

Ray Solomonoff's theory of universal inductive inference is a theory of prediction based on logical observations, such as predicting the next symbol based upon a given series of symbols.


Statistical inference

Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution.


Translation (geometry)

In Euclidean geometry, a translation is a geometric transformation that moves every point of a figure or a space by the same distance in a given direction.


Uniform distribution (continuous)

In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions such that for each member of the family, all intervals of the same length on the distribution's support are equally probable.


Variance

In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
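For an equally weighted finite sample this expectation becomes a simple average of squared deviations; a sketch (the function name is ours):

```python
def variance(xs):
    """Population variance: E[(X - E[X])**2] over an equally weighted sample."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)
```

For the sample [2, 4, 4, 4, 5, 5, 7, 9] the mean is 5 and the variance is 4; a constant sample has variance 0.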


Redirects here:

A priori distribution, Bayes prior, Bayesian prior, Diffuse prior, Improper distribution, Improper prior, Logarithmic prior, Non-informative prior, Prior Probability, Prior distribution, Prior information, Prior probabilities, Prior probability distribution, Uniform prior, Uninformative prior.

References

[1] https://en.wikipedia.org/wiki/Prior_probability
