102 relations: Bayes' theorem, Bayesian inference, Bayesian statistics, Bessel function, Bessel's correction, Beta function, Biometrika, Box–Muller transform, Cauchy distribution, Chi-squared distribution, Cochran's theorem, Compound probability distribution, Confidence interval, Conjugate prior, Copula (probability theory), Cumulative distribution function, Curse of dimensionality, Data, Degrees of freedom (statistics), Digamma function, Dublin, Errors and residuals, Expected value, F-distribution, Folded-t and half-t distributions, Friedrich Robert Helmert, Gamma distribution, Gamma function, Generalised hyperbolic distribution, Guinness Brewery, Hotelling's T-squared distribution, Hypergeometric function, Independence (probability theory), Independent and identically distributed random variables, Indeterminate form, Inverse-gamma distribution, Irwin–Hall distribution, Jacob Lüroth, Jeffreys prior, Kurtosis, Linear function, Location parameter, Location–scale family, Marginal distribution, Mathematical Proceedings of the Cambridge Philosophical Society, Maximum entropy probability distribution, Mean, Michael Christopher Wendl, Moment (mathematics), Multivariate t-distribution, Noncentral t-distribution, Noncentrality parameter, Normal distribution, Null hypothesis, One- and two-tailed tests, P-value, Parametrization, Pearson distribution, Pivotal quantity, Posterior predictive distribution, Posterior probability, Power (statistics), Precision (statistics), Prediction interval, Prior probability, Probability, Probability density function, Probability distribution, Probability mass function, Prosecutor's fallacy, Quantile, Quantile function, Random variable, Real number, Regression analysis, Robust statistics, Ronald Fisher, Sample size determination, Sampling distribution, Scale parameter, Scaled inverse chi-squared distribution, Science (journal), Skewness, Spearman's rank correlation coefficient, Spreadsheet, Springer Science+Business Media, Standard deviation, Standard score, Statistical hypothesis testing, Statistical population, Statistical significance, Statistics, Statistics in Medicine (journal), Student's t-test, Studentized residual, Symmetric probability distribution, T-statistic, Uniform, Variance, Wilks's lambda distribution, William Sealy Gosset, Wishart distribution.
In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule, also written as Bayes's theorem) describes the probability of an event, based on prior knowledge of conditions that might be related to the event.
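The theorem can be illustrated with a short Python sketch; the diagnostic-test numbers below (1% prevalence, 99% sensitivity, 95% specificity) are hypothetical, chosen only to show how a prior is updated into a posterior.

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Hypothetical scenario: how likely is a condition given a positive test?

def bayes_posterior(prior, sensitivity, specificity):
    """Posterior P(condition | positive test) via Bayes' theorem."""
    # Total probability of a positive result (law of total probability).
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

posterior = bayes_posterior(prior=0.01, sensitivity=0.99, specificity=0.95)
```

Even with an accurate test, the low prior keeps the posterior well under 20% — the same arithmetic that underlies the prosecutor's fallacy discussed below.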
Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.
Bayesian statistics, named for Thomas Bayes (1701–1761), is a theory in the field of statistics in which the evidence about the true state of the world is expressed in terms of degrees of belief known as Bayesian probabilities.
Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are the canonical solutions of Bessel's differential equation for an arbitrary complex number, the order of the Bessel function.
In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample.
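A minimal Python illustration of the corrected divisor (the function name and data are illustrative):

```python
def sample_variance(xs):
    """Unbiased sample variance using Bessel's correction: divide by n - 1."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

# For [1, 2, 3, 4]: mean 2.5, sum of squared deviations 5, so variance 5/3.
v = sample_variance([1, 2, 3, 4])
```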
In mathematics, the beta function, also called the Euler integral of the first kind, is a special function defined by B(x, y) = ∫₀¹ t^(x−1) (1 − t)^(y−1) dt for complex numbers x and y with positive real parts.
Biometrika is a peer-reviewed scientific journal published by Oxford University Press for the Biometrika Trust.
The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller, is a pseudo-random number sampling method for generating pairs of independent, standard, normally distributed (zero expectation, unit variance) random numbers, given a source of uniformly distributed random numbers.
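The basic (trigonometric) form of the transform can be sketched in Python as follows; the helper name is illustrative.

```python
import math
import random

def box_muller(u1, u2):
    """Map two independent Uniform(0,1) draws to two independent N(0,1) draws."""
    r = math.sqrt(-2.0 * math.log(u1))   # radius from the first uniform
    theta = 2.0 * math.pi * u2           # angle from the second uniform
    return r * math.cos(theta), r * math.sin(theta)

random.seed(0)
z1, z2 = box_muller(random.random(), random.random())
```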
The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution.
In statistics, Cochran's theorem, devised by William G. Cochran, is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.
In probability and statistics, a compound probability distribution (also known as a mixture distribution or contagious distribution) is the probability distribution that results from assuming that a random variable is distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables.
In statistics, a confidence interval (CI) is a type of interval estimate, computed from the statistics of the observed data, that might contain the true value of an unknown population parameter.
In Bayesian probability theory, if the posterior distributions p(θ|x) are in the same probability distribution family as the prior probability distribution p(θ), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function.
In probability theory and statistics, a copula is a multivariate probability distribution for which the marginal probability distribution of each variable is uniform.
In probability theory and statistics, the cumulative distribution function (CDF, also cumulative density function) of a real-valued random variable X, or just distribution function of X, evaluated at x, is the probability that X will take a value less than or equal to x. In the case of a continuous distribution, it gives the area under the probability density function from minus infinity to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.
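As a concrete case, the CDF of the standard normal distribution can be evaluated in Python through the error function (math.erf is in the standard library); the function name is illustrative.

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for a normal random variable, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

p = normal_cdf(1.96)  # close to 0.975, the familiar two-sided 5% cutoff
```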
The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces (often with hundreds or thousands of dimensions) that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience.
Data is a set of values of qualitative or quantitative variables.
In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.
In mathematics, the digamma function is defined as the logarithmic derivative of the gamma function: ψ(x) = d/dx ln Γ(x) = Γ′(x)/Γ(x). It is the first of the polygamma functions.
Dublin is the capital of and largest city in Ireland.
In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value".
In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the experiment it represents.
In statistics, the folded-t and half-t distributions are derived from Student's t-distribution by taking the absolute values of variates.
Friedrich Robert Helmert (July 31, 1843 – June 15, 1917) was a German geodesist and an important writer on the theory of errors.
In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions.
In mathematics, the gamma function (represented by Γ, the capital Greek letter gamma) is an extension of the factorial function, with its argument shifted down by 1, to real and complex numbers.
The generalised hyperbolic distribution (GH) is a continuous probability distribution defined as the normal variance-mean mixture where the mixing distribution is the generalized inverse Gaussian distribution (GIG).
In statistics, Hotelling's T-squared distribution (T²) is a multivariate distribution proportional to the F-distribution; it arises as the distribution of a set of statistics that are natural generalizations of the statistics underlying Student's t-distribution.
In mathematics, the Gaussian or ordinary hypergeometric function ₂F₁(a, b; c; z) is a special function represented by the hypergeometric series, which includes many other special functions as specific or limiting cases.
In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other.
In probability theory and statistics, a sequence or other collection of random variables is independent and identically distributed (i.i.d. or iid or IID) if each random variable has the same probability distribution as the others and all are mutually independent.
In calculus and other branches of mathematical analysis, limits involving an algebraic combination of functions in an independent variable may often be evaluated by replacing these functions by their limits; if the expression obtained after this substitution does not give enough information to determine the original limit, it is said to take on an indeterminate form.
In probability theory and statistics, the inverse gamma distribution is a two-parameter family of continuous probability distributions on the positive real line, which is the distribution of the reciprocal of a variable distributed according to the gamma distribution.
In probability and statistics, the Irwin–Hall distribution, named after Joseph Oscar Irwin and Philip Hall, is a probability distribution for a random variable defined as the sum of a number of independent random variables, each having a uniform distribution.
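A draw from the Irwin–Hall distribution is literally a sum of uniforms, so a simulation sketch in Python is short; the function name is illustrative.

```python
import random

def irwin_hall(n, rng=random):
    """One draw from the Irwin–Hall distribution: the sum of n Uniform(0,1) variates."""
    return sum(rng.random() for _ in range(n))

# The mean is n/2 and the variance is n/12; with n = 12 the variance is exactly 1,
# a classic quick-and-dirty approximation to a normal variate (shifted by 6).
random.seed(0)
x = irwin_hall(12)
```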
Jacob Lüroth (18 February 1844, Mannheim, Germany – 14 September 1910, Munich, Germany) was a German mathematician who proved Lüroth's theorem and introduced Lüroth quartics.
In Bayesian probability, the Jeffreys prior, named after Sir Harold Jeffreys, is a non-informative (objective) prior distribution for a parameter space: it is proportional to the square root of the determinant of the Fisher information matrix, p(θ) ∝ √det I(θ). It has the key feature that it is invariant under reparameterization of the parameter vector θ.
In probability theory and statistics, kurtosis (from κυρτός, kyrtos or kurtos, meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable.
In mathematics, the term linear function refers to two distinct but related notions.
In statistics, a location family is a class of probability distributions that is parametrized by a scalar- or vector-valued parameter x₀, which determines the "location" or shift of the distribution.
In probability theory, especially in mathematical statistics, a location–scale family is a family of probability distributions parametrized by a location parameter and a non-negative scale parameter.
In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset.
Mathematical Proceedings of the Cambridge Philosophical Society is a mathematical journal published by Cambridge University Press for the Cambridge Philosophical Society.
In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions.
In mathematics, mean has several different definitions depending on the context.
Michael Christopher Wendl is a mathematician and biomedical engineer who has worked on DNA sequencing theory, covering and matching problems in probability, theoretical fluid mechanics, and co-wrote Phred.
In mathematics, a moment is a specific quantitative measure, used in both mechanics and statistics, of the shape of a set of points.
In statistics, the multivariate t-distribution (or multivariate Student distribution) is a multivariate probability distribution.
As with other probability distributions that have noncentrality parameters, the noncentral t-distribution generalizes Student's t-distribution by means of a noncentrality parameter.
Noncentrality parameters are parameters of families of probability distributions that are related to other "central" families of distributions.
In probability theory, the normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a very common continuous probability distribution.
In inferential statistics, the term "null hypothesis" is a general statement or default position that there is no relationship between two measured phenomena, or no association among groups.
In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic.
In statistical hypothesis testing, the p-value or probability value or asymptotic significance is the probability for a given statistical model that, when the null hypothesis is true, the statistical summary (such as the sample mean difference between two compared groups) would be the same as or of greater magnitude than the actual observed results.
Parametrization (or parameterization; also parameterisation, parametrisation) is the process of finding parametric equations of a curve, a surface, or, more generally, a manifold or a variety, defined by an implicit equation.
The Pearson distribution is a family of continuous probability distributions.
In statistics, a pivotal quantity or pivot is a function of observations and unobservable parameters such that the function's probability distribution does not depend on the unknown parameters (including nuisance parameters).
In Bayesian statistics, the posterior predictive distribution is the distribution of possible unobserved values conditional on the observed values.
In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence or background is taken into account.
The power of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis (H0) when a specific alternative hypothesis (H1) is true.
In statistics, precision is the reciprocal of the variance, and the precision matrix (also known as concentration matrix) is the matrix inverse of the covariance matrix.
In statistical inference, specifically predictive inference, a prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed.
In Bayesian statistical inference, a prior probability distribution, often simply called the prior, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account.
Probability is the measure of the likelihood that an event will occur.
In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function, whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample.
In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.
In probability and statistics, a probability mass function (pmf) is a function that gives the probability that a discrete random variable is exactly equal to some value.
The prosecutor's fallacy is a fallacy of statistical reasoning, typically used by the prosecution to argue for the guilt of a defendant during a criminal trial.
In statistics and probability, quantiles are cut points dividing the range of a probability distribution into contiguous intervals with equal probabilities, or dividing the observations in a sample in the same way.
In probability and statistics, the quantile function of a random variable specifies, for a given probability p, the value x such that the probability of the variable being less than or equal to x equals p; it inverts the cumulative distribution function.
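Because the quantile function inverts the CDF, one generic way to compute it is bisection on a monotone CDF. A Python sketch, applied here to the standard normal (the helper names are illustrative):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def quantile(p, cdf=normal_cdf, lo=-10.0, hi=10.0, tol=1e-10):
    """Smallest x with cdf(x) >= p, located by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

z = quantile(0.975)  # close to 1.96, the standard normal 97.5% point
```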
In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is a variable whose possible values are outcomes of a random phenomenon.
In mathematics, a real number is a value of a continuous quantity that can represent a distance along a line.
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships among variables.
Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal.
Sir Ronald Aylmer Fisher (17 February 1890 – 29 July 1962), who published as R. A. Fisher, was a British statistician and geneticist.
Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample.
In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic.
In probability theory and statistics, a scale parameter is a special kind of numerical parameter of a parametric family of probability distributions.
The scaled inverse chi-squared distribution is the distribution for x = 1/s², where s² is a sample mean of the squares of ν independent normal random variables that have mean 0 and inverse variance 1/σ² = τ².
Science, also widely referred to as Science Magazine, is the peer-reviewed academic journal of the American Association for the Advancement of Science (AAAS) and one of the world's top academic journals.
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean.
In statistics, Spearman's rank correlation coefficient or Spearman's rho, named after Charles Spearman and often denoted by the Greek letter ρ (rho) or as rₛ, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables).
A spreadsheet is an interactive computer application for organization, analysis and storage of data in tabular form.
Springer Science+Business Media or Springer, part of Springer Nature since 2015, is a global publishing company that publishes books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing.
In statistics, the standard deviation (SD, also represented by the Greek letter sigma σ or the Latin letter s) is a measure that is used to quantify the amount of variation or dispersion of a set of data values.
In statistics, the standard score is the signed number of standard deviations by which the value of an observation or data point differs from the mean value of what is being observed or measured.
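The standard score (z-score) is a one-line computation; a Python sketch with illustrative numbers:

```python
def standard_score(x, mean, sd):
    """z = (x - mean) / sd: signed number of standard deviations from the mean."""
    return (x - mean) / sd

# Hypothetical example: an observation of 130 against a mean of 100 and SD of 15
# lies exactly two standard deviations above the mean.
z = standard_score(130, 100, 15)
```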
A statistical hypothesis is a hypothesis that is testable on the basis of observing a process that is modeled via a set of random variables; statistical hypothesis testing is sometimes called confirmatory data analysis.
In statistics, a population is a set of similar items or events which is of interest for some question or experiment.
In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis.
Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data.
Statistics in Medicine is a peer-reviewed statistics journal published by Wiley.
A t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis.
In statistics, a studentized residual is the quotient resulting from the division of a residual by an estimate of its standard deviation.
In statistics, a symmetric probability distribution is a probability distribution—an assignment of probabilities to possible occurrences—which is unchanged when its probability density function or probability mass function is reflected around a vertical line at some value of the random variable represented by the distribution.
In statistics, the t-statistic is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error.
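For the common one-sample case, the t-statistic is (x̄ − μ₀)/(s/√n), where s uses Bessel's correction and the statistic has n − 1 degrees of freedom. A Python sketch (the function name is illustrative):

```python
import math

def one_sample_t(xs, mu0):
    """One-sample t-statistic: (mean - mu0) / (s / sqrt(n)), df = n - 1."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # Bessel-corrected variance
    return (mean - mu0) / math.sqrt(var / n)

t = one_sample_t([2, 4, 6], 3.0)  # mean 4, s = 2, so t = sqrt(3)/2
```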
In probability theory and statistics, the continuous uniform distribution is a family of symmetric probability distributions such that, for each member of the family, all intervals of the same length on the distribution's support are equally probable.
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
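The defining expectation, computed over a finite population, can be sketched in Python (contrast the divisor n here with the n − 1 of Bessel's correction for samples):

```python
def population_variance(xs):
    """Var(X) = E[(X - E[X])^2], evaluated over a finite population."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# For [1, 2, 3, 4]: mean 2.5, sum of squared deviations 5, so 5/4 = 1.25.
v = population_variance([1, 2, 3, 4])
```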
In statistics, Wilks's lambda distribution (named for Samuel S. Wilks), is a probability distribution used in multivariate hypothesis testing, especially with regard to the likelihood-ratio test and multivariate analysis of variance (MANOVA).
William Sealy Gosset (13 June 1876 – 16 October 1937) was an English statistician.
In statistics, the Wishart distribution is a generalization to multiple dimensions of the chi-squared distribution, or, in the case of non-integer degrees of freedom, of the gamma distribution.
Also known as: Gosset's t distribution, Student distribution, Student t, Student t distribution, Student t-distribution, Student's T, Student's distribution, Student's t, Student's t distribution, Student-t, Student-t distribution, Students t distribution, Student’s t-distribution, T student, T-chart, T-table, TINV.