Kullback–Leibler divergence

In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy) is a measure of how one probability distribution diverges from a second, expected probability distribution. [1]
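For discrete distributions P and Q on a common set of outcomes (with the convention 0 log 0 = 0), the divergence is usually written as

\[
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)},
\]

with the sum replaced by an integral over densities in the continuous case; the base of the logarithm only fixes the unit (bits for base 2, nats for base e).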

143 relations: Akaike information criterion, Alfréd Rényi, Autoencoder, Bayes factor, Bayesian experimental design, Bayesian information criterion, Bernoulli scheme, Beta distribution, Bhattacharyya distance, Biased random walk on a graph, Biclustering, Binomial distribution, Boltzmann machine, Bregman divergence, Catalog of articles in probability theory, Chernoff bound, Chow–Liu tree, CMA-ES, Commons-based peer production, Competitive regret, Computational phylogenetics, Conceptual clustering, Conditional entropy, Conditional mutual information, Consensus clustering, Cross entropy, Cross-entropy method, Data compression, Data differencing, Delta encoding, Deviance information criterion, Differential entropy, Distance, Distribution learning theory, Divergence (disambiguation), Divergence (statistics), Entropic risk measure, Entropic value at risk, Entropy (information theory), Entropy power inequality, Estimation of distribution algorithm, Evidence lower bound, Expectation–maximization algorithm, Exponential distribution, F-divergence, Fano's inequality, Features from accelerated segment test, First-order inductive learner, Fisher information, Fisher information metric, ..., Free energy principle, G-test, Gambling and information theory, Gamma distribution, Generalized filtering, Generalized gamma distribution, Gibbs' inequality, Gilbert–Shannon–Reeds model, Gompertz distribution, Hannan–Quinn information criterion, Hellinger distance, Home advantage, Hypergeometric distribution, Implicit authentication, Independent component analysis, Index of dissimilarity, Index of information theory articles, Index of physics articles (K), Inequalities in information theory, Information bottleneck method, Information field theory, Information gain in decision trees, Information projection, Information theory, Information theory and measure theory, Inverse-gamma distribution, Jensen's inequality, Jensen–Shannon divergence, John von Neumann, Kernel embedding of distributions, KL, KLD, KLIC, Kullback's inequality, Large deviations theory, Leibler, Limiting density of discrete points, List of probability topics, List of statistics articles, List of University of Illinois at Urbana–Champaign people, List of weight-of-evidence articles, Log sum inequality, Logistic regression, Logit-normal distribution, Loss functions for classification, Maximum entropy thermodynamics, Maximum spacing estimation, Minimal-entropy martingale measure, Monte Carlo localization, Multifidelity simulation, Multiple kernel learning, Multivariate kernel density estimation, Multivariate normal distribution, Mutual information, Nkld, Non-negative matrix factorization, Normal distribution, Normality test, Pinsker's inequality, Poisson distribution, Position weight matrix, Principal component analysis, Principle of maximum entropy, Prior probability, Quantities of information, Quantum mutual information, Quantum relative entropy, Radon–Nikodym theorem, Random forest, Rényi entropy, Richard Leibler, Sanov's theorem, Solomon Kullback, Statistic, Statistical distance, Statistical inference, String metric, Strong subadditivity of quantum entropy, Structured expert judgment: the classical model, T-distributed stochastic neighbor embedding, Tf–idf, Timeline of information theory, Total correlation, Total variation distance of probability measures, Universal code (data compression), Variational Bayesian methods, Variational message passing, Von Neumann entropy, VOTCA, Vuong's closeness test, Web mining, Wishart distribution, Young's inequality 
for products.

Akaike information criterion

The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data.
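For a model with k estimated parameters and maximized likelihood value \hat{L}, the criterion takes the standard form

\[
\mathrm{AIC} = 2k - 2 \ln \hat{L},
\]

and comparing AIC values across candidate models can be read as estimating which model loses the least Kullback–Leibler information relative to the unknown data-generating process.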

Alfréd Rényi

Alfréd Rényi (20 March 1921 – 1 February 1970) was a Hungarian mathematician who made contributions in combinatorics, graph theory, number theory but mostly in probability theory.

Autoencoder

An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner.

Bayes factor

In statistics, the use of Bayes factors is a Bayesian alternative to classical hypothesis testing.

Bayesian experimental design

Bayesian experimental design provides a general probability-theoretical framework from which other theories on experimental design can be derived.

Bayesian information criterion

In statistics, the Bayesian information criterion (BIC) or Schwarz criterion (also SBC, SBIC) is a criterion for model selection among a finite set of models; the model with the lowest BIC is preferred.

Bernoulli scheme

In mathematics, the Bernoulli scheme or Bernoulli shift is a generalization of the Bernoulli process to more than two possible outcomes.

Beta distribution

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1], parametrized by two positive shape parameters, denoted by α and β, that appear as exponents of the random variable and control the shape of the distribution.

Bhattacharyya distance

In statistics, the Bhattacharyya distance measures the similarity of two probability distributions.
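For discrete distributions p and q over the same domain, the distance is defined through the Bhattacharyya coefficient:

\[
D_B(p, q) = -\ln \mathrm{BC}(p, q), \qquad \mathrm{BC}(p, q) = \sum_x \sqrt{p(x)\, q(x)}.
\]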

Biased random walk on a graph

In network science, a biased random walk on a graph is a time path process in which an evolving variable jumps from its current state to one of various potential new states; unlike in a pure random walk, the probabilities of the potential new states are unequal.

Biclustering

Biclustering, block clustering, co-clustering, or two-mode clustering is a data mining technique which allows simultaneous clustering of the rows and columns of a matrix.

Binomial distribution

In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own boolean-valued outcome: a random variable containing a single bit of information: success/yes/true/one (with probability p) or failure/no/false/zero (with probability q = 1 − p).
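Its probability mass function for observing k successes in the n trials is

\[
\Pr(X = k) = \binom{n}{k} p^{k} (1 - p)^{n-k}, \qquad k = 0, 1, \dots, n.
\]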

Boltzmann machine

A Boltzmann machine (also called stochastic Hopfield network with hidden units) is a type of stochastic recurrent neural network (and Markov random field).

Bregman divergence

In mathematics, a Bregman divergence or Bregman distance is similar to a metric, but satisfies neither the triangle inequality nor symmetry.

Catalog of articles in probability theory

This page lists articles related to probability theory.

Chernoff bound

In probability theory, the Chernoff bound, named after Herman Chernoff but due to Herman Rubin, gives exponentially decreasing bounds on tail distributions of sums of independent random variables.

Chow–Liu tree

In probability theory and statistics, a Chow–Liu tree is an efficient method for constructing a second-order product approximation of a joint probability distribution, first described in a 1968 paper by Chow and Liu.

CMA-ES

CMA-ES stands for Covariance Matrix Adaptation Evolution Strategy.

Commons-based peer production

Commons-based peer production (CBPP) is a term coined by Harvard Law School professor Yochai Benkler.

Competitive regret

In decision theory, competitive regret is the relative regret compared to an oracle with limited or unlimited power in the process of distribution estimation.

Computational phylogenetics

Computational phylogenetics is the application of computational algorithms, methods, and programs to phylogenetic analyses.

Conceptual clustering

Conceptual clustering is a machine learning paradigm for unsupervised classification developed mainly during the 1980s.

Conditional entropy

In information theory, the conditional entropy (or equivocation) quantifies the amount of information needed to describe the outcome of a random variable Y given that the value of another random variable X is known.

Conditional mutual information

In probability theory, particularly information theory, the conditional mutual information is, in its most basic form, the expected value of the mutual information of two random variables given the value of a third.

Consensus clustering

Clustering is the assignment of objects into groups (called clusters) so that objects from the same cluster are more similar to each other than objects from different clusters.

Cross entropy

In information theory, the cross entropy between two probability distributions p and q over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set, if a coding scheme is used that is optimized for an "unnatural" probability distribution q, rather than the "true" distribution p. The cross entropy for the distributions p and q over a given set is defined as H(p, q) = H(p) + D_{\mathrm{KL}}(p \,\|\, q), where H(p) is the entropy of p, and D_{\mathrm{KL}}(p \,\|\, q) is the Kullback–Leibler divergence of q from p (also known as the relative entropy of p with respect to q; note the reversal of emphasis).
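A minimal numeric sketch of this decomposition in Python; the two distributions below are illustrative values chosen here, not taken from the article:

    import math

    def entropy(p):
        """Shannon entropy H(p) in nats."""
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    def kl_divergence(p, q):
        """Kullback-Leibler divergence D_KL(p || q) in nats."""
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def cross_entropy(p, q):
        """Cross entropy H(p, q) = -sum_x p(x) log q(x) in nats."""
        return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

    p = [0.5, 0.3, 0.2]  # "true" distribution (example values)
    q = [0.4, 0.4, 0.2]  # "unnatural" coding distribution (example values)

    # The decomposition H(p, q) = H(p) + D_KL(p || q) holds up to float rounding.
    assert abs(cross_entropy(p, q) - (entropy(p) + kl_divergence(p, q))) < 1e-12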

Cross-entropy method

The cross-entropy (CE) method developed by Reuven Rubinstein is a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling.

Data compression

In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation.

Data differencing

In computer science and information theory, data differencing or differential compression is producing a technical description of the difference between two sets of data – a source and a target.

Delta encoding

Delta encoding is a way of storing or transmitting data in the form of differences (deltas) between sequential data rather than complete files; more generally this is known as data differencing.

Deviance information criterion

The deviance information criterion (DIC) is a hierarchical modeling generalization of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC).

Differential entropy

Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of average surprisal of a random variable, to continuous probability distributions.

Distance

Distance is a numerical measurement of how far apart objects are.

Distribution learning theory

Distributional learning theory, or the learning of probability distributions, is a framework in computational learning theory.

Divergence (disambiguation)

Divergence is a function that associates a scalar with every point of a vector field.

Divergence (statistics)

In statistics and information geometry, divergence or a contrast function is a function which establishes the "distance" of one probability distribution to the other on a statistical manifold.

Entropic risk measure

In financial mathematics, the entropic risk measure is a risk measure which depends on the risk aversion of the user through the exponential utility function.

Entropic value at risk

In financial mathematics and stochastic optimization, the concept of risk measure is used to quantify the risk involved in a random outcome or risk position.

Entropy (information theory)

Information entropy is the average rate at which information is produced by a stochastic source of data.
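For a discrete random variable X with probability mass function p, the entropy is

\[
H(X) = -\sum_x p(x) \log p(x),
\]

and for p on n outcomes the Kullback–Leibler divergence D_{\mathrm{KL}}(p \,\|\, u) from the uniform distribution u equals \log n - H(X).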

Entropy power inequality

In information theory, the entropy power inequality is a result that relates to so-called "entropy power" of random variables.

Estimation of distribution algorithm

Estimation of distribution algorithms (EDAs), sometimes called probabilistic model-building genetic algorithms (PMBGAs), are stochastic optimization methods that guide the search for the optimum by building and sampling explicit probabilistic models of promising candidate solutions.

Evidence lower bound

In statistics, the evidence lower bound (ELBO, also called the variational lower bound) is a lower bound on the log-evidence of observed data; the gap between the log-evidence and the ELBO is the Kullback–Leibler divergence between the approximating distribution over the latent variables and their true posterior.

Expectation–maximization algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables.

Exponential distribution

No description.

F-divergence

In probability theory, an ƒ-divergence is a function D_f(P || Q) that measures the difference between two probability distributions P and Q. It helps the intuition to think of the divergence as an average, weighted by the function f, of the odds ratio given by P and Q. These divergences were introduced and studied independently by Csiszár, Morimoto, and Ali and Silvey, and are sometimes known as Csiszár ƒ-divergences, Csiszár–Morimoto divergences or Ali–Silvey distances.
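For a convex function f with f(1) = 0, the ƒ-divergence of P from Q in the discrete case is

\[
D_f(P \,\|\, Q) = \sum_x Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right),
\]

and the choice f(t) = t \log t recovers the Kullback–Leibler divergence D_{\mathrm{KL}}(P \,\|\, Q).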

Fano's inequality

In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error.

Features from accelerated segment test

Features from accelerated segment test (FAST) is a corner detection method, which could be used to extract feature points and later used to track and map objects in many computer vision tasks.

First-order inductive learner

In machine learning, first-order inductive learner (FOIL) is a rule-based learning algorithm.

Fisher information

In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information.
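In the single-parameter case this can be written as

\[
\mathcal{I}(\theta) = \operatorname{E}\!\left[\left(\frac{\partial}{\partial \theta} \log f(X;\theta)\right)^{2}\right],
\]

and the same quantity appears as the second-order coefficient when the Kullback–Leibler divergence between f(\cdot;\theta) and f(\cdot;\theta') is expanded around \theta' = \theta.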

Fisher information metric

In information geometry, the Fisher information metric is a particular Riemannian metric which can be defined on a smooth statistical manifold, i.e., a smooth manifold whose points are probability measures defined on a common probability space.

Free energy principle

The free energy principle tries to explain how (biological) systems maintain their order (non-equilibrium steady-state) by restricting themselves to a limited number of states.

G-test

In statistics, G-tests are likelihood-ratio or maximum likelihood statistical significance tests that are increasingly being used in situations where chi-squared tests were previously recommended.

Gambling and information theory

Statistical inference might be thought of as gambling theory applied to the world around us.

Gamma distribution

In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions.

Generalized filtering

Generalized filtering is a generic Bayesian filtering scheme for nonlinear state-space models.

Generalized gamma distribution

The generalized gamma distribution is a continuous probability distribution with three parameters.

Gibbs' inequality

In information theory, Gibbs' inequality, named after Josiah Willard Gibbs, is a statement about the mathematical entropy of a discrete probability distribution.

Gilbert–Shannon–Reeds model

In the mathematics of shuffling playing cards, the Gilbert–Shannon–Reeds model is a probability distribution on riffle shuffle permutations that has been reported to be a good match for experimentally observed outcomes of human shuffling, and that forms the basis for a recommendation that a deck of cards should be riffled seven times in order to thoroughly randomize it.

Gompertz distribution

No description.

Hannan–Quinn information criterion

In statistics, the Hannan–Quinn information criterion (HQC) is a criterion for model selection.

Hellinger distance

In probability and statistics, the Hellinger distance (closely related to, although different from, the Bhattacharyya distance) is used to quantify the similarity between two probability distributions.

Home advantage

In team sports, the term home advantage – also called home ground, home field, home-field advantage, home court, home-court advantage, defender's advantage or home-ice advantage – describes the benefit that the home team is said to gain over the visiting team.

Hypergeometric distribution

In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a success or a failure.

Implicit authentication

Implicit authentication (IA) is a technique that allows the smart device to recognize its owner by being acquainted with his/her behaviors.

Independent component analysis

In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents.

Index of dissimilarity

The index of dissimilarity is a demographic measure of the evenness with which two groups are distributed across component geographic areas that make up a larger area.

Index of information theory articles

This is a list of information theory topics, by Wikipedia page.

Index of physics articles (K)

The index of physics articles is split into multiple pages due to its size.

Inequalities in information theory

Inequalities are very important in the study of information theory.

Information bottleneck method

The information bottleneck method is a technique in information theory introduced by Naftali Tishby, Fernando C. Pereira, and William Bialek.

Information field theory

Information field theory (IFT) is a Bayesian statistical field theory relating to signal reconstruction, cosmography, and other related areas.

Information gain in decision trees

In information theory and machine learning, information gain is a synonym for Kullback–Leibler divergence.

Information projection

In information theory, the information projection or I-projection of a probability distribution q onto a set of distributions P is p^* = \arg\min_{p \in P} D_{\mathrm{KL}}(p \,\|\, q), where D_{\mathrm{KL}} is the Kullback–Leibler divergence from q to p. Viewing the Kullback–Leibler divergence as a measure of distance, the I-projection p^* is the "closest" distribution to q of all the distributions in P. The I-projection is useful in setting up information geometry, notably because of the following inequality, valid when P is convex: D_{\mathrm{KL}}(p \,\|\, q) \geq D_{\mathrm{KL}}(p \,\|\, p^*) + D_{\mathrm{KL}}(p^* \,\|\, q). This inequality can be interpreted as an information-geometric version of the Pythagorean theorem, with the KL divergence playing the role of squared Euclidean distance.

Information theory

Information theory studies the quantification, storage, and communication of information.

Information theory and measure theory

This article discusses how information theory (a branch of mathematics studying the transmission, processing and storage of information) is related to measure theory (a branch of mathematics related to integration and probability).

Inverse-gamma distribution

In probability theory and statistics, the inverse gamma distribution is a two-parameter family of continuous probability distributions on the positive real line, which is the distribution of the reciprocal of a variable distributed according to the gamma distribution.

Jensen's inequality

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function.

Jensen–Shannon divergence

In probability theory and statistics, the Jensen–Shannon divergence is a method of measuring the similarity between two probability distributions.
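A small Python sketch of the usual symmetrized construction, with example distributions chosen only for illustration:

    import math

    def kl_divergence(p, q):
        """D_KL(p || q) in nats, for discrete distributions given as lists."""
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def js_divergence(p, q):
        """Jensen-Shannon divergence: average KL divergence to the mixture m."""
        m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
        return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

    p = [0.1, 0.4, 0.5]  # example distribution
    q = [0.3, 0.3, 0.4]  # example distribution

    # Unlike the KL divergence itself, the result is symmetric (and always finite).
    assert abs(js_divergence(p, q) - js_divergence(q, p)) < 1e-12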

John von Neumann

John von Neumann (Neumann János Lajos; December 28, 1903 – February 8, 1957) was a Hungarian-American mathematician, physicist, computer scientist, and polymath.

Kernel embedding of distributions

In machine learning, the kernel embedding of distributions (also called the kernel mean or mean map) comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS).

KL

KL, kL, kl, or kl. may refer to.

KLD

KLD may refer to.

KLIC

KLIC may refer to.

Kullback's inequality

In information theory and statistics, Kullback's inequality is a lower bound on the Kullback–Leibler divergence expressed in terms of the large deviations rate function.

Large deviations theory

In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions.

Leibler

Leibler is a surname that may refer to several people.

Limiting density of discrete points

In information theory, the limiting density of discrete points is an adjustment to the formula of Claude Shannon for differential entropy.

List of probability topics

This is a list of probability topics, by Wikipedia page.

List of statistics articles

No description.

List of University of Illinois at Urbana–Champaign people

This is a list of notable people affiliated with the University of Illinois at Urbana–Champaign, a public research university in Illinois.

List of weight-of-evidence articles

Weight of evidence is a measure of evidence on one side of an issue as compared with the evidence on the other side of the issue, or to measure the evidence on multiple issues.

Log sum inequality

In information theory, the log sum inequality is an inequality which is useful for proving several theorems in information theory.

Logistic regression

In statistics, the logistic model (or logit model) is a statistical model that is usually taken to apply to a binary dependent variable.

Logit-normal distribution

In probability theory, a logit-normal distribution is a probability distribution of a random variable whose logit has a normal distribution.

Loss functions for classification

In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to).

Maximum entropy thermodynamics

In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes.

Maximum spacing estimation

In statistics, maximum spacing estimation (MSE or MSP), or maximum product of spacing estimation (MPS), is a method for estimating the parameters of a univariate statistical model.

Minimal-entropy martingale measure

In probability theory, the minimal-entropy martingale measure (MEMM) is the risk-neutral probability measure that minimises the entropy difference between the objective probability measure, P, and the risk-neutral measure, Q. In incomplete markets, this is one way of choosing a risk-neutral measure (from the infinite number available) so as to still maintain the no-arbitrage conditions.

Monte Carlo localization

Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter.

Multifidelity simulation

Multifidelity methods leverage both low- and high-fidelity data in order to maximize the accuracy of model estimates, while minimizing the cost associated with parametrization.

Multiple kernel learning

Multiple kernel learning refers to a set of machine learning methods that use a predefined set of kernels and learn an optimal linear or non-linear combination of kernels as part of the algorithm.

Multivariate kernel density estimation

Kernel density estimation is a nonparametric technique for density estimation, i.e., the estimation of probability density functions, which is one of the fundamental questions in statistics.

Multivariate normal distribution

In probability theory and statistics, the multivariate normal distribution or multivariate Gaussian distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions.

Mutual information

In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables.
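Mutual information is itself a Kullback–Leibler divergence: for discrete variables,

\[
I(X;Y) = D_{\mathrm{KL}}\big(p_{X,Y} \,\|\, p_X \otimes p_Y\big) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}.
\]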

Nkld

No description.

Non-negative matrix factorization

Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation, is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements.

Normal distribution

In probability theory, the normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a very common continuous probability distribution.

Normality test

In statistics, normality tests are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.

Pinsker's inequality

In information theory, Pinsker's inequality, named after its inventor Mark Semenovich Pinsker, is an inequality that bounds the total variation distance (or statistical distance) in terms of the Kullback–Leibler divergence.
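With the divergence measured in nats, one common statement of the bound is

\[
\delta(P, Q) \le \sqrt{\tfrac{1}{2} D_{\mathrm{KL}}(P \,\|\, Q)},
\]

where \delta(P, Q) denotes the total variation distance between P and Q.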

Poisson distribution

In probability theory and statistics, the Poisson distribution, named after French mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant rate and independently of the time since the last event.

Position weight matrix

A position weight matrix (PWM), also known as a position-specific weight matrix (PSWM) or position-specific scoring matrix (PSSM), is a commonly used representation of motifs (patterns) in biological sequences.

Principal component analysis

Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.

Principle of maximum entropy

The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge is the one with largest entropy, in the context of precisely stated prior data (such as a proposition that expresses testable information).

Prior probability

In Bayesian statistical inference, a prior probability distribution, often simply called the prior, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account.

Quantities of information

The mathematical theory of information is based on probability theory and statistics, and measures information with several quantities of information.

Quantum mutual information

In quantum information theory, quantum mutual information, or von Neumann mutual information, after John von Neumann, is a measure of correlation between subsystems of quantum state.

Quantum relative entropy

In quantum information theory, quantum relative entropy is a measure of distinguishability between two quantum states.

Radon–Nikodym theorem

In mathematics, the Radon–Nikodym theorem is a result in measure theory.

Random forest

Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees.

Rényi entropy

In information theory, the Rényi entropy generalizes the Hartley entropy, the Shannon entropy, the collision entropy and the min entropy.

Richard Leibler

Richard A. Leibler (March 18, 1914, Chicago – October 25, 2003, Reston, Virginia) was an American mathematician and cryptanalyst.

Sanov's theorem

In information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution.

Solomon Kullback

Solomon Kullback (April 3, 1907 – August 5, 1994) was an American cryptanalyst and mathematician, who was one of the first three employees hired by William F. Friedman at the US Army's Signal Intelligence Service (SIS) in the 1930s, along with Frank Rowlett and Abraham Sinkov.

Statistic

A statistic (singular) or sample statistic is a single measure of some attribute of a sample (e.g. its arithmetic mean value).

Statistical distance

In statistics, probability theory, and information theory, a statistical distance quantifies the distance between two statistical objects, which can be two random variables, or two probability distributions or samples, or the distance can be between an individual sample point and a population or a wider sample of points.

Statistical inference

Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution.

String metric

In mathematics and computer science, a string metric (also known as a string similarity metric or string distance function) is a metric that measures distance ("inverse similarity") between two text strings for approximate string matching or comparison and in fuzzy string searching.

Strong subadditivity of quantum entropy

Strong subadditivity of entropy (SSA) was long known and appreciated in classical probability theory and information theory.

Structured expert judgment: the classical model

Expert Judgment (EJ) denotes a wide variety of techniques ranging from a single undocumented opinion, through preference surveys, to formal elicitation with external validation of expert probability assessments.

T-distributed stochastic neighbor embedding

T-distributed Stochastic Neighbor Embedding (t-SNE) is a machine learning algorithm for visualization developed by Laurens van der Maaten and Geoffrey Hinton.

Tf–idf

In information retrieval, tf–idf or TFIDF, short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus.

Timeline of information theory

A timeline of events related to information theory, quantum information theory and statistical physics, data compression, error correcting codes and related subjects.

Total correlation

In probability theory and in particular in information theory, total correlation (Watanabe 1960) is one of several generalizations of the mutual information.

Total variation distance of probability measures

In probability theory, the total variation distance is a distance measure for probability distributions.
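For probability measures on a common space, and for discrete distributions in particular,

\[
\delta(P, Q) = \sup_{A} |P(A) - Q(A)| = \tfrac{1}{2} \sum_x |P(x) - Q(x)|.
\]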

Universal code (data compression)

In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned.

Variational Bayesian methods

Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning.

Variational message passing

Variational message passing (VMP) is an approximate inference technique for continuous- or discrete-valued Bayesian networks, with conjugate-exponential parents, developed by John Winn.

Von Neumann entropy

In quantum statistical mechanics, the von Neumann entropy, named after John von Neumann, is the extension of classical Gibbs entropy concepts to the field of quantum mechanics.

VOTCA

Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) is a coarse-grained modeling package, which focuses on the analysis of molecular dynamics data, the development of systematic coarse-graining techniques as well as methods used for simulating microscopic charge (and exciton) transport in disordered semiconductors.

Vuong's closeness test

In statistics, the Vuong closeness test is a likelihood-ratio-based test for model selection using the Kullback–Leibler information criterion.

Web mining

Web mining is the application of data mining techniques to discover patterns from the World Wide Web.

Wishart distribution

In statistics, the Wishart distribution is a generalization to multiple dimensions of the chi-squared distribution, or, in the case of non-integer degrees of freedom, of the gamma distribution.

Young's inequality for products

In mathematics, Young's inequality for products is a mathematical inequality about the product of two numbers.

Redirects here:

Discrimination information, Information gain, KL distance, KL divergence, KL-distance, KL-divergence, Kl-divergence, Kullback Leibler divergence, Kullback divergence, Kullback information, Kullback-Leibler, Kullback-Leibler Distance, Kullback-Leibler distance, Kullback-Leibler divergence, Kullback-Leibler entropy, Kullback-Leibler information, Kullback-Leibler redundancy, Kullback-Liebler, Kullback-Liebler distance, Kullback-leibler divergence, Kullback–Leibler distance, Kullback–Leibler entropy, Kullback–Leibler information, Kullback–Leibler redundancy, Principle of Minimum Discrimination Information, Relative entropy.

References

[1] https://en.wikipedia.org/wiki/Kullback–Leibler_divergence
