102 relations: Absolute convergence, Algebraic formula for the variance, Almost surely, Antoine Gombaud, Arithmetic mean, Berry–Esseen theorem, Bias of an estimator, Blaise Pascal, Cauchy distribution, Cauchy–Schwarz inequality, Center of mass, Central moment, Central tendency, Characteristic function (probability theory), Chebyshev's inequality, Christiaan Huygens, Classical mechanics, Coin flipping, Conditional convergence, Conditional expectation, Convex function, Covariance, Cumulative distribution function, Decision theory, Dependent and independent variables, Dice, Disjoint union, Dominated convergence theorem, Economics, Equiprobability, Errors and residuals, Estimation theory, Estimator, Event (probability theory), Expectation (epistemic), Expectation value (quantum mechanics), Fatou's lemma, Finance, Frequency (statistics), Fubini's theorem, Geometric series, Gordon–Loeb model, Hölder's inequality, Heavy-tailed distribution, Improper integral, Independence (probability theory), Indicator function, Inner product space, Jensen's inequality, Law of large numbers, Law of the unconscious statistician, Law of total expectation, Lebesgue integration, Lebesgue–Stieltjes integration, Limit of a sequence, Linear map, Location parameter, Loss function, Machine learning, Markov's inequality, Measurable function, Minkowski inequality, Moment (mathematics), Moment-generating function, Monotone convergence theorem, Monte Carlo method, Natural logarithm, Nonlinear expectation, Outcome (probability), Pierre de Fermat, Pierre-Simon Laplace, Pip (counting), Plancherel theorem, Pointwise convergence, Probability density function, Probability distribution, Probability measure, Probability space, Probability theory, Problem of points, Quantum mechanics, Quantum state, Random variable, Regression analysis, Richard Hamming, Riemann series theorem, Risk aversion, Risk neutral preferences, Roulette, Sample (statistics), Sample size determination, Security, Series (mathematics), Simple function, St. Petersburg paradox, Statistical dispersion, Uncertainty principle, Variance, Von Neumann–Morgenstern utility theorem, Wald's equation, Weighted arithmetic mean, William Allen Whitworth.
In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite.
In probability theory and statistics, there are several algebraic formulae for the variance available for deriving the variance of a random variable.
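The best-known such formula is the identity Var(X) = E[X²] − (E[X])². A minimal sketch, using a made-up discrete distribution, checks that it agrees with the definition of variance as the expected squared deviation from the mean:

```python
# Sketch: verify the algebraic identity Var(X) = E[X^2] - (E[X])^2
# for a small discrete distribution (values and probabilities are made up).
values = [1, 2, 3, 4]
probs = [0.1, 0.2, 0.3, 0.4]

mean = sum(v * p for v, p in zip(values, probs))
# Definition: expected squared deviation from the mean.
var_def = sum((v - mean) ** 2 * p for v, p in zip(values, probs))
# Algebraic formula: E[X^2] minus the square of E[X].
var_alg = sum(v * v * p for v, p in zip(values, probs)) - mean ** 2

print(var_def, var_alg)  # both 1.0 for this distribution
```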
In probability theory, one says that an event happens almost surely (sometimes abbreviated as a.s.) if it happens with probability one.
Antoine Gombaud, Chevalier de Méré (1607 – 29 December 1684) was a French writer, born in Poitou.
In mathematics and statistics, the arithmetic mean (stress on third syllable of "arithmetic"), or simply the mean or average when the context is clear, is the sum of a collection of numbers divided by the number of numbers in the collection.
In probability theory, the central limit theorem states that, under certain circumstances, the probability distribution of the scaled mean of a random sample converges to a normal distribution as the sample size increases to infinity; the Berry–Esseen theorem quantifies the rate of that convergence.
In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated.
Blaise Pascal (19 June 1623 – 19 August 1662) was a French mathematician, physicist, inventor, writer and Catholic theologian.
The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution.
In mathematics, the Cauchy–Schwarz inequality, also known as the Cauchy–Bunyakovsky–Schwarz inequality, is a useful inequality encountered in many different settings, such as linear algebra, analysis, probability theory, vector algebra and other areas.
In physics, the center of mass of a distribution of mass in space is the unique point where the weighted relative position of the distributed mass sums to zero, or the point where if a force is applied it moves in the direction of the force without rotating.
In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean.
In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution.
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution.
In probability theory, Chebyshev's inequality (also spelled Tchebysheff's inequality; also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean.
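Concretely, the inequality states that the fraction of probability mass more than k standard deviations from the mean is at most 1/k². A minimal empirical sketch, using uniform samples chosen arbitrarily for illustration:

```python
import random

random.seed(0)
# Sketch: empirically check Chebyshev's bound P(|X - mu| >= k*sigma) <= 1/k^2
# for Uniform(0, 1) samples (the distribution is an arbitrary choice).
n = 100_000
xs = [random.random() for _ in range(n)]
mu = sum(xs) / n
sigma = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5

k = 2.0
frac = sum(1 for x in xs if abs(x - mu) >= k * sigma) / n
print(frac, 1 / k**2)  # observed tail fraction vs. Chebyshev's bound 0.25
```

For the uniform distribution the bound is very loose (no value lies 2 standard deviations from the mean), which illustrates that Chebyshev's inequality trades tightness for generality.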
Christiaan Huygens (Hugenius; 14 April 1629 – 8 July 1695) was a Dutch physicist, mathematician, astronomer and inventor, who is widely regarded as one of the greatest scientists of all time and a major figure in the scientific revolution.
Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, and astronomical objects, such as spacecraft, planets, stars and galaxies.
Coin flipping, coin tossing, or heads or tails is the practice of throwing a coin in the air and checking which side is showing when it lands to choose between two alternatives, sometimes to resolve a dispute between two parties.
In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely.
In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value – the value it would take “on average” over an arbitrarily large number of occurrences – given that a certain set of "conditions" is known to occur.
In mathematics, a real-valued function defined on an ''n''-dimensional interval is called convex (or convex downward or concave upward) if the line segment between any two points on the graph of the function lies above or on the graph, in a Euclidean space (or more generally a vector space) of at least two dimensions.
In probability theory and statistics, covariance is a measure of the joint variability of two random variables.
In probability theory and statistics, the cumulative distribution function (CDF, also cumulative density function) of a real-valued random variable X, or just distribution function of X, evaluated at x, is the probability that X will take a value less than or equal to x. In the case of a continuous distribution, it gives the area under the probability density function from minus infinity to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.
Decision theory (or the theory of choice) is the study of the reasoning underlying an agent's choices.
In mathematical modeling, statistical modeling and experimental sciences, the values of dependent variables depend on the values of independent variables.
Dice (singular die or dice; from Old French dé; from Latin datum "something which is given or played") are small throwable objects with multiple resting positions, used for generating random numbers.
In set theory, the disjoint union (or discriminated union) of a family of sets is a modified union operation that indexes the elements according to which set they originated in.
In measure theory, Lebesgue's dominated convergence theorem provides sufficient conditions under which almost everywhere convergence of a sequence of functions implies convergence in the L1 norm.
Economics is the social science that studies the production, distribution, and consumption of goods and services.
Equiprobability is a property for a collection of events that each have the same probability of occurring.
In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value".
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component.
In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished.
In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned.
In the case of uncertainty, expectation is what is considered the most likely to happen.
In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment.
In mathematics, Fatou's lemma establishes an inequality relating the Lebesgue integral of the limit inferior of a sequence of functions to the limit inferior of integrals of these functions.
Finance is a field that is concerned with the allocation (investment) of assets and liabilities (known as elements of the balance statement) over space and time, often under conditions of risk or uncertainty.
In statistics, the frequency (or absolute frequency) of an event i is the number n_i of times the event occurred in an experiment or study.
In mathematical analysis, Fubini's theorem, introduced by Guido Fubini in 1907, is a result that gives conditions under which it is possible to compute a double integral using iterated integrals.
In mathematics, a geometric series is a series with a constant ratio between successive terms.
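When the common ratio r satisfies |r| < 1, the series a + ar + ar² + … converges to a/(1 − r). A minimal sketch with arbitrarily chosen a and r:

```python
# Sketch: partial sums of a geometric series a + a*r + a*r^2 + ...
# approach a / (1 - r) when |r| < 1 (a and r are made-up values).
a, r = 3.0, 0.5
partial = sum(a * r**k for k in range(60))
closed_form = a / (1 - r)
print(partial, closed_form)  # both approximately 6.0
```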
The Gordon–Loeb model is a mathematical economic model analyzing the optimal investment level in information security.
In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of ''Lp'' spaces.
In probability theory, heavy-tailed distributions are probability distributions whose tails are not exponentially bounded: that is, they have heavier tails than the exponential distribution.
In mathematical analysis, an improper integral is the limit of a definite integral as an endpoint of the interval(s) of integration approaches either a specified real number, \infty, -\infty, or in some instances as both endpoints approach limits.
In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other.
In mathematics, an indicator function or a characteristic function is a function defined on a set X that indicates membership of an element in a subset A of X, having the value 1 for all elements of A and the value 0 for all elements of X not in A. It is usually denoted by a symbol 1 or I, sometimes in boldface or blackboard boldface, with a subscript specifying the subset.
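Indicator functions connect expectation to probability: the expected value of the indicator of A equals P(A). A minimal sketch on a made-up finite sample space:

```python
# Sketch: the indicator of A takes value 1 on A and 0 elsewhere, so its
# expectation under a distribution equals the probability of A.
# (The sample space and probabilities are made up for illustration.)
omega = ["a", "b", "c", "d"]
prob = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}
A = {"b", "d"}

def indicator(x, subset):
    return 1 if x in subset else 0

expected_indicator = sum(indicator(x, A) * prob[x] for x in omega)
prob_A = sum(prob[x] for x in A)
print(expected_indicator, prob_A)  # both approximately 0.6
```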
In linear algebra, an inner product space is a vector space with an additional structure called an inner product.
In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function.
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times.
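According to the law, the sample mean of repeated independent trials converges to the expected value. A minimal sketch with simulated fair coin flips (expected value 0.5):

```python
import random

random.seed(1)
# Sketch: the sample mean of fair coin flips (1 = heads, 0 = tails)
# approaches the expected value 0.5 as the number of flips grows.
flips = [random.randint(0, 1) for _ in range(200_000)]
sample_mean = sum(flips) / len(flips)
print(sample_mean)  # close to 0.5
```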
In probability theory and statistics, the law of the unconscious statistician (sometimes abbreviated LOTUS) is a theorem used to calculate the expected value of a function g(X) of a random variable X when one knows the probability distribution of X but one does not explicitly know the distribution of g(X).
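For a discrete X, LOTUS says E[g(X)] = Σ g(x) P(X = x). A minimal sketch with g(x) = x² and a made-up distribution:

```python
# Sketch: LOTUS computes E[g(X)] directly from the distribution of X,
# without deriving the distribution of g(X). Here g(x) = x^2 and the
# distribution of X is a made-up example.
values = [-1, 0, 1, 2]
probs = [0.25, 0.25, 0.25, 0.25]

def g(x):
    return x * x

e_g = sum(g(v) * p for v, p in zip(values, probs))
print(e_g)  # 1.5 for this distribution
```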
The proposition in probability theory known as the law of total expectation, the law of iterated expectations, the tower rule, and the smoothing theorem, among other names, states that if X is a random variable whose expected value \operatorname{E}(X) is defined, and Y is any random variable on the same probability space, then \operatorname{E}(\operatorname{E}(X \mid Y)) = \operatorname{E}(X); i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X. One special case states that if \{A_i\} is a finite or countable partition of the sample space, then \operatorname{E}(X) = \sum_i \operatorname{E}(X \mid A_i) \operatorname{P}(A_i).
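A minimal numerical sketch of the law on a made-up joint distribution of (X, Y):

```python
# Sketch: check the law of total expectation E[E[X | Y]] = E[X] on a
# made-up joint distribution over a finite sample space.
joint = {  # P(X = x, Y = y)
    (1, "a"): 0.2, (2, "a"): 0.3,
    (1, "b"): 0.1, (2, "b"): 0.4,
}

e_x = sum(x * p for (x, _), p in joint.items())

# Average E[X | Y = y] over the distribution of Y.
e_of_cond = 0.0
for y in {"a", "b"}:
    p_y = sum(p for (_, yy), p in joint.items() if yy == y)
    e_x_given_y = sum(x * p for (x, yy), p in joint.items() if yy == y) / p_y
    e_of_cond += e_x_given_y * p_y

print(e_x, e_of_cond)  # equal, approximately 1.7
```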
In mathematics, the integral of a non-negative function of a single variable can be regarded, in the simplest case, as the area between the graph of that function and the x-axis.
In measure-theoretic analysis and related branches of mathematics, Lebesgue–Stieltjes integration generalizes Riemann–Stieltjes and Lebesgue integration, preserving the many advantages of the former in a more general measure-theoretic framework.
As the positive integer n becomes larger and larger, the value n \cdot \sin\left(\frac{1}{n}\right) becomes arbitrarily close to 1.
In mathematics, a linear map (also called a linear mapping, linear transformation or, in some contexts, linear function) is a mapping between two modules (including vector spaces) that preserves (in the sense defined below) the operations of addition and scalar multiplication.
In statistics, a location family is a class of probability distributions that is parametrized by a scalar- or vector-valued parameter x_0, which determines the "location" or shift of the distribution.
In mathematical optimization, statistics, econometrics, decision theory, machine learning and computational neuroscience, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event.
Machine learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to "learn" (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.
In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant.
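In its basic form, the inequality states that for a non-negative random variable X and any a > 0, P(X ≥ a) ≤ E[X]/a. A minimal empirical sketch using exponential samples (an arbitrary choice of non-negative distribution):

```python
import random

random.seed(2)
# Sketch: empirically check Markov's inequality P(X >= a) <= E[X] / a
# for a non-negative random variable (exponential samples, rate 1).
n = 100_000
xs = [random.expovariate(1.0) for _ in range(n)]
mean = sum(xs) / n

a = 3.0
frac = sum(1 for x in xs if x >= a) / n
print(frac, mean / a)  # observed tail fraction vs. Markov's bound
```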
In mathematics and in particular measure theory, a measurable function is a function between two measurable spaces such that the preimage of any measurable set is measurable, analogously to the definition that a function between topological spaces is continuous if the preimage of each open set is open.
In mathematical analysis, the Minkowski inequality establishes that the L''p'' spaces are normed vector spaces.
In mathematics, a moment is a specific quantitative measure, used in both mechanics and statistics, of the shape of a set of points.
In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution.
In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences (sequences that are increasing or decreasing) that are also bounded.
Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.
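In particular, an expected value can be estimated by averaging over random samples. A minimal sketch estimating E[X²] for X ~ Uniform(0, 1), whose exact value is the integral of x² over [0, 1], namely 1/3:

```python
import random

random.seed(3)
# Sketch: a Monte Carlo estimate of E[X^2] for X ~ Uniform(0, 1);
# the exact value is 1/3.
n = 200_000
estimate = sum(random.random() ** 2 for _ in range(n)) / n
print(estimate)  # close to 1/3
```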
The natural logarithm of a number is its logarithm to the base of the mathematical constant ''e'', where e is an irrational and transcendental number approximately equal to 2.71828.
In probability theory, a nonlinear expectation is a nonlinear generalization of the expectation.
In probability theory, an outcome is a possible result of an experiment.
Pierre de Fermat (Between 31 October and 6 December 1607 – 12 January 1665) was a French lawyer at the Parlement of Toulouse, France, and a mathematician who is given credit for early developments that led to infinitesimal calculus, including his technique of adequality.
Pierre-Simon, marquis de Laplace (23 March 1749 – 5 March 1827) was a French scholar whose work was important to the development of mathematics, statistics, physics and astronomy.
Pips are small but easily countable items.
In mathematics, the Plancherel theorem is a result in harmonic analysis, proven by Michel Plancherel in 1910.
In mathematics, pointwise convergence is one of various senses in which a sequence of functions can converge to a particular function.
In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function, whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample.
In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.
In mathematics, a probability measure is a real-valued function defined on a set of events in a probability space that satisfies measure properties such as countable additivity.
In probability theory, a probability space or a probability triple (\Omega, \mathcal{F}, P) is a mathematical construct that models a real-world process (or “experiment”) consisting of states that occur randomly.
Probability theory is the branch of mathematics concerned with probability.
The problem of points, also called the problem of division of the stakes, is a classical problem in probability theory.
Quantum mechanics (QM; also known as quantum physics, quantum theory, the wave mechanical model, or matrix mechanics), including quantum field theory, is a fundamental theory in physics which describes nature at the smallest scales of energy levels of atoms and subatomic particles.
In quantum physics, quantum state refers to the state of an isolated quantum system.
In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is a variable whose possible values are outcomes of a random phenomenon.
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships among variables.
Richard Wesley Hamming (February 11, 1915 – January 7, 1998) was an American mathematician whose work had many implications for computer engineering and telecommunications.
In mathematics, the Riemann series theorem (also called the Riemann rearrangement theorem), named after 19th-century German mathematician Bernhard Riemann, says that if an infinite series of real numbers is conditionally convergent, then its terms can be arranged in a permutation so that the new series converges to an arbitrary real number, or diverges.
In economics and finance, risk aversion is the tendency of people (especially consumers and investors), when exposed to uncertainty, to attempt to lower that uncertainty.
In economics and finance, risk neutral preferences are preferences that are neither risk averse nor risk seeking.
Roulette is a casino game named after the French word meaning little wheel.
In statistics and quantitative research methodology, a data sample is a set of data collected and/or selected from a statistical population by a defined procedure.
Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample.
Security is freedom from, or resilience against, potential harm (or other unwanted coercive change) from external forces.
In mathematics, a series is, roughly speaking, a description of the operation of adding infinitely many quantities, one after the other, to a given starting quantity.
In the mathematical field of real analysis, a simple function is a real-valued function over a subset of the real line, similar to a step function.
In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed.
In quantum mechanics, the uncertainty principle (also known as Heisenberg's uncertainty principle) is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables, such as position x and momentum p, can be known.
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean.
In decision theory, the von Neumann–Morgenstern utility theorem shows that, under certain axioms of rational behavior, a decision-maker faced with risky (probabilistic) outcomes of different choices will behave as if he or she is maximizing the expected value of some function defined over the potential outcomes at some specified point in the future.
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities.
The weighted arithmetic mean is similar to an ordinary arithmetic mean (the most common type of average), except that instead of each of the data points contributing equally to the final average, some data points contribute more than others.
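A minimal sketch comparing a weighted mean with the ordinary mean, using made-up data and weights:

```python
# Sketch: a weighted arithmetic mean, where each data point contributes
# in proportion to its weight (data and weights are made up).
data = [80.0, 90.0, 70.0]
weights = [1.0, 2.0, 1.0]

weighted_mean = sum(x * w for x, w in zip(data, weights)) / sum(weights)
ordinary_mean = sum(data) / len(data)
print(weighted_mean, ordinary_mean)  # 82.5 vs. 80.0
```

The expected value of a discrete random variable is itself a weighted mean of its possible values, with the probabilities as weights.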
William Allen Whitworth (1 February 1840 – 12 March 1905) was an English mathematician and a priest in the Church of England.