Statistical hypothesis testing

Statistical hypothesis testing, sometimes called confirmatory data analysis, is a method of statistical inference for deciding whether observed data support a hypothesis that is testable on the basis of observing a process modeled via a set of random variables. [1]

121 relations: Akaike information criterion, Almost sure hypothesis testing, Alternative hypothesis, Analysis of variance, Argument from ignorance, Bayes estimator, Bayes factor, Bayes' theorem, Bayesian inference, Bayesian statistics, Behrens–Fisher problem, Bible Analyzer, Bigfoot, Biostatistics, Bootstrapping (statistics), Categorical variable, Checking whether a coin is fair, Chi-squared test, Clairvoyance, Clever Hans, Complete spatial randomness, Confidence interval, Contingency table, Correlation does not imply causation, Counternull, Data mining, David Hume, Decision theory, Design of experiments, Detection theory, Edward Arnold (publisher), Effect size, Egon Pearson, Epistemology, Estimation theory, Exact test, Exploratory data analysis, Fallacy, False discovery rate, False positives and false negatives, Falsifiability, Family-wise error rate, Fiducial inference, Fisher's method, Forecasting, Frequentist inference, Game theory, Geiger counter, Granger causality, Hawthorne effect, ... (71 more).

Akaike information criterion

The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data.

Almost sure hypothesis testing

In statistics, almost sure hypothesis testing or a.s. hypothesis testing utilizes almost sure convergence to determine the validity of a statistical hypothesis with probability one.

Alternative hypothesis

In statistical hypothesis testing, the alternative hypothesis (or maintained hypothesis or research hypothesis) and the null hypothesis are the two rival hypotheses which are compared by a statistical hypothesis test.

Analysis of variance

Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures (such as the "variation" among and between groups) used to analyze the differences among group means in a sample.

Argument from ignorance

Argument from ignorance (from argumentum ad ignorantiam), also known as appeal to ignorance (in which ignorance represents "a lack of contrary evidence") is a fallacy in informal logic.

Bayes estimator

In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss).

Bayes factor

In statistics, the use of Bayes factors is a Bayesian alternative to classical hypothesis testing.

Bayes' theorem

In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule, also written as Bayes's theorem) describes the probability of an event, based on prior knowledge of conditions that might be related to the event.
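
As a numeric illustration (the prevalence, sensitivity, and false-positive rate below are made-up values for a hypothetical diagnostic test), Bayes' theorem turns a prior probability into a posterior one:

```python
# Hypothetical numbers: 1% prevalence, 99% sensitivity, 5% false-positive rate.
p_disease = 0.01
sensitivity = 0.99        # P(positive | disease)
false_positive = 0.05     # P(positive | no disease)

# Law of total probability: P(positive).
p_positive = sensitivity * p_disease + false_positive * (1 - p_disease)

# Bayes' theorem: P(disease | positive).
posterior = sensitivity * p_disease / p_positive
print(round(posterior, 4))
```

Even with an accurate test, the posterior is modest because the condition is rare, which is the point the theorem makes precise.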

Bayesian inference

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.

Bayesian statistics

Bayesian statistics, named for Thomas Bayes (1701–1761), is a theory in the field of statistics in which the evidence about the true state of the world is expressed in terms of degrees of belief known as Bayesian probabilities.

Behrens–Fisher problem

In statistics, the Behrens–Fisher problem, named after Walter Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.

Bible Analyzer

Bible Analyzer is a freeware, cross-platform Bible study computer software application for Microsoft Windows, Macintosh OS X, and Ubuntu Linux.

Bigfoot

In North American folklore, Bigfoot or Sasquatch is a hairy, upright-walking, ape-like being who reportedly dwells in the wilderness and leaves behind large footprints.

Biostatistics

Biostatistics is the application of statistics to a wide range of topics in biology.

Bootstrapping (statistics)

In statistics, bootstrapping is any test or metric that relies on random sampling with replacement.
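
A minimal sketch of a percentile bootstrap for a sample mean, using invented data and the Python standard library:

```python
import random
import statistics

random.seed(0)
data = [2.3, 1.9, 3.1, 2.8, 2.2, 3.5, 2.7, 2.0, 3.3, 2.6]

# Draw many resamples (same size, with replacement) and record each mean.
boot_means = sorted(
    statistics.mean([random.choice(data) for _ in data])
    for _ in range(5000)
)

# A simple 95% percentile interval for the population mean.
lo, hi = boot_means[int(0.025 * 5000)], boot_means[int(0.975 * 5000)]
print(lo, hi)
```

The attraction of the bootstrap is that nothing here assumed a distributional form for the data; the resamples stand in for repeated sampling from the population.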

Categorical variable

In statistics, a categorical variable is a variable that can take on one of a limited, and usually fixed number of possible values, assigning each individual or other unit of observation to a particular group or nominal category on the basis of some qualitative property.

Checking whether a coin is fair

In statistics, the question of checking whether a coin is fair is one whose importance lies, firstly, in providing a simple problem on which to illustrate basic ideas of statistical inference and, secondly, in providing a simple problem that can be used to compare various competing methods of statistical inference, including decision theory.
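
The coin problem can be worked exactly with the binomial distribution; a small sketch (the 60-heads-in-100-tosses figure is just an example):

```python
from math import comb

def coin_pvalue(heads, tosses):
    """Two-sided exact p-value for H0: the coin is fair (p = 0.5).
    Symmetry under p = 0.5 lets us double the smaller tail."""
    tail = min(heads, tosses - heads)
    p_tail = sum(comb(tosses, i) for i in range(tail + 1)) / 2 ** tosses
    return min(1.0, 2 * p_tail)

# 60 heads in 100 tosses: suggestive, but the p-value stays a bit above 0.05.
print(coin_pvalue(60, 100))
```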

Chi-squared test

A chi-squared test, also written as χ² test, is any statistical hypothesis test where the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true.
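
A sketch of a goodness-of-fit version of the test, with invented die-roll counts; the 5%-level critical value of roughly 11.07 for 5 degrees of freedom is quoted rather than computed:

```python
# Goodness-of-fit test for a fair die: invented counts from 120 rolls.
observed = [18, 24, 16, 22, 25, 15]
expected = 120 / 6  # 20 rolls per face under the null hypothesis

# Pearson's chi-squared statistic: sum of (O - E)^2 / E over the cells.
chi2 = sum((o - expected) ** 2 / expected for o in observed)
print(chi2)

# With 6 - 1 = 5 degrees of freedom, the 5% critical value is about
# 11.07, so a statistic of 4.5 gives no evidence against fairness.
```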

Clairvoyance

Clairvoyance (from French clair meaning "clear" and voyance meaning "vision") is the alleged ability to gain information about an object, person, location, or physical event through extrasensory perception.

Clever Hans

Clever Hans (in German: der Kluge Hans) was an Orlov Trotter horse that was claimed to have been able to perform arithmetic and other intellectual tasks.

Complete spatial randomness

Complete spatial randomness (CSR) describes a point process whereby point events occur within a given study area in a completely random fashion.

Confidence interval

In statistics, a confidence interval (CI) is a type of interval estimate, computed from the statistics of the observed data, that might contain the true value of an unknown population parameter.
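
An illustrative normal-approximation interval for a mean, with made-up data; the 1.96 critical value assumes a normal sampling distribution, and at this sample size a t critical value (about 2.36 for 7 degrees of freedom) would be more appropriate:

```python
import math
import statistics

data = [4.8, 5.2, 4.9, 5.5, 5.1, 4.7, 5.3, 5.0]
n = len(data)
mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean

# 95% interval using the normal critical value 1.96.
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
print(round(lo, 2), round(hi, 2))
```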

Contingency table

In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the (multivariate) frequency distribution of the variables.

Correlation does not imply causation

In statistics, many statistical tests calculate correlations between variables; when two variables are found to be correlated, it is tempting to assume that one variable causes the other.

Counternull

In statistics, and especially in the statistical analysis of psychological data, the counternull is a statistic used to aid the understanding and presentation of research results.

Data mining

Data mining is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.

David Hume

David Hume (born David Home; 7 May 1711 NS (26 April 1711 OS) – 25 August 1776) was a Scottish philosopher, historian, economist, and essayist, who is best known today for his highly influential system of philosophical empiricism, skepticism, and naturalism.

Decision theory

Decision theory (or the theory of choice) is the study of the reasoning underlying an agent's choices.

Design of experiments

The design of experiments (DOE, DOX, or experimental design) is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation.

Detection theory

Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator).

Edward Arnold (publisher)

Edward Arnold Publishers Ltd was a British publishing house with its head office in London.

Effect size

In statistics, an effect size is a quantitative measure of the magnitude of a phenomenon.

Egon Pearson

Egon Sharpe Pearson, CBE FRS (11 August 1895 – 12 June 1980) was one of three children and the son of Karl Pearson and, like his father, a leading British statistician.

Epistemology

Epistemology is the branch of philosophy concerned with the theory of knowledge.

Estimation theory

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component.

Exact test

In statistics, an exact (significance) test is a test where all assumptions, upon which the derivation of the distribution of the test statistic is based, are met as opposed to an approximate test (in which the approximation may be made as close as desired by making the sample size big enough).

Exploratory data analysis

In statistics, exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods.

Fallacy

A fallacy is the use of invalid or otherwise faulty reasoning, or "wrong moves" in the construction of an argument.

False discovery rate

The false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons.
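
The Benjamini–Hochberg step-up procedure is one standard way to control the FDR; a sketch with hypothetical p-values:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: indices of hypotheses
    rejected while controlling the false discovery rate at level q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        # Compare the rank-th smallest p-value against rank * q / m.
        if pvalues[i] <= rank * q / m:
            cutoff = rank
    return sorted(order[:cutoff])

# Ten hypothetical p-values: only the two smallest survive at q = 0.05.
ps = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(ps))
```

Note how 0.039 is rejected even though it is below 0.05: FDR control is about the whole family of tests, not each one in isolation.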

False positives and false negatives

In medical testing, and more generally in binary classification, a false positive is an error in data reporting in which a test result improperly indicates presence of a condition, such as a disease (the result is positive), when in reality it is not present, while a false negative is an error in which a test result improperly indicates no presence of a condition (the result is negative), when in reality it is present.

Falsifiability

A statement, hypothesis, or theory has falsifiability (or is falsifiable) if it can logically be proven false by contradicting it with a basic statement.

Family-wise error rate

In statistics, family-wise error rate (FWER) is the probability of making one or more false discoveries, or type I errors when performing multiple hypotheses tests.

Fiducial inference

Fiducial inference is one of a number of different types of statistical inference.

Fisher's method

In statistics, Fisher's method, also known as Fisher's combined probability test, is a technique for data fusion or "meta-analysis" (analysis of analyses).
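
A sketch of Fisher's method on invented p-values; under the joint null, -2 Σ ln pᵢ is chi-squared with 2k degrees of freedom, and the closed-form tail probability below relies on the degrees of freedom being even:

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's combined probability test for independent p-values.
    X = -2 * sum(ln p) is chi-squared with 2k df under the joint null;
    for even df the survival function has the closed form below."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    # P(Chi2_{2k} > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Three individually unconvincing p-values combine into a small one.
print(fisher_combined_p([0.08, 0.06, 0.11]))
```

A sanity check on the formula: combining a single p-value must return it unchanged.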

Forecasting

Forecasting is the process of making predictions of the future based on past and present data and most commonly by analysis of trends.

Frequentist inference

Frequentist inference is a type of statistical inference that draws conclusions from sample data by emphasizing the frequency or proportion of the data.

Game theory

Game theory is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers".

Geiger counter

The Geiger counter is an instrument used for detecting and measuring ionizing radiation used widely in applications such as radiation dosimetry, radiological protection, experimental physics and the nuclear industry.

Granger causality

The Granger causality test, first proposed in 1969, is a statistical hypothesis test for determining whether one time series is useful in forecasting another.

Hawthorne effect

The Hawthorne effect (also referred to as the observer effect) is a type of reactivity in which individuals modify an aspect of their behavior in response to their awareness of being observed.

How to Lie with Statistics

How to Lie with Statistics is a book written by Darrell Huff in 1954 presenting an introduction to statistics for the general reader.

Human sex ratio

In anthropology and demography, the human sex ratio is the ratio of males to females in a population.

Hypothesis

A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon.

Independence (probability theory)

In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other.

Jerzy Neyman

Jerzy Neyman (April 16, 1894 – August 5, 1981), born Jerzy Spława-Neyman, was a Polish mathematician and statistician who spent the first part of his professional career at various institutions in Warsaw, Poland and then at University College London, and the second part at the University of California, Berkeley.

John Arbuthnot

John Arbuthnot (baptised 29 April 1667 – 27 February 1735), often known simply as Dr Arbuthnot, was a Scottish physician, satirist and polymath in London.

John Tukey

John Wilder Tukey (June 16, 1915 – July 26, 2000) was an American mathematician best known for development of the FFT algorithm and box plot.

Karl Pearson

Karl Pearson HFRSE LLD (originally named Carl; 27 March 1857 – 27 April 1936) was an English mathematician and biostatistician. He has been credited with establishing the discipline of mathematical statistics. He founded the world's first university statistics department at University College London in 1911, and contributed significantly to the field of biometrics, meteorology, theories of social Darwinism and eugenics. Pearson was also a protégé and biographer of Sir Francis Galton.

Likelihood-ratio test

In statistics, a likelihood ratio test (LR test) is a statistical test used for comparing the goodness of fit of two statistical models — a null model against an alternative model.

Location test

A location test is a statistical hypothesis test that compares the location parameter of a statistical population to a given constant, or that compares the location parameters of two statistical populations to each other.

Look-elsewhere effect

The look-elsewhere effect is a phenomenon in the statistical analysis of scientific experiments, particularly in complex particle physics experiments, where an apparently statistically significant observation may have actually arisen by chance because of the size of the parameter space to be searched.

Meta-analysis

A meta-analysis is a statistical analysis that combines the results of multiple scientific studies.

Model selection

Model selection is the task of selecting a statistical model from a set of candidate models, given data.

Modifiable areal unit problem

The modifiable areal unit problem (MAUP) is a source of statistical bias that can significantly impact the results of statistical hypothesis tests.

Multiple comparisons problem

In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values.

Muriel Bristol

Muriel Bristol (21 April 1888 – 15 March 1950), Ph.D., was a phycologist who worked at the Rothamsted Experimental Station in 1919.

Neyman–Pearson lemma

In statistics, the Neyman–Pearson lemma was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933.

Nonparametric statistics

Nonparametric statistics is the branch of statistics that is not based solely on parameterized families of probability distributions (common examples of parameters are the mean and variance).

Normal distribution

In probability theory, the normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a very common continuous probability distribution.

Null hypothesis

In inferential statistics, the term "null hypothesis" is a general statement or default position that there is no relationship between two measured phenomena, or no association among groups.

Objectivity (science)

Objectivity in science is a value that informs how science is practiced and how scientific truths are discovered.

Observable variable

In statistics, observable variable or observable quantity (also manifest variables), as opposed to latent variable, is a variable that can be observed and directly measured.

Omnibus test

Omnibus tests are a kind of statistical test.

Optimal decision

An optimal decision is a decision that leads to at least as good a known or expected outcome as all other available decision options.

Oscar Kempthorne

Oscar Kempthorne (January 31, 1919 – November 15, 2000) was a statistician and geneticist known for his research on randomization-analysis and the design of experiments, which had wide influence on research in agriculture, genetics, and other areas of science.

P-value

In statistical hypothesis testing, the p-value or probability value or asymptotic significance is the probability for a given statistical model that, when the null hypothesis is true, the statistical summary (such as the sample mean difference between two compared groups) would be the same as or of greater magnitude than the actual observed results.
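
For a standard-normal (z) test statistic, a one-sided p-value can be computed directly from the complementary error function; a small sketch:

```python
import math

def z_pvalue(z):
    """One-sided p-value: P(Z >= z) for a standard-normal test statistic."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# z = 0 sits at the centre of the null distribution (p = 0.5);
# z = 1.96 is the familiar two-sided 5% boundary, so its one-sided
# p-value is about 0.025.
print(z_pvalue(0.0), z_pvalue(1.96))
```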

Paul E. Meehl

Paul Everett Meehl (3 January 1920 – 14 February 2003) was a clinical psychologist and Professor of Psychology at the University of Minnesota.

Pearson's chi-squared test

Pearson's chi-squared test (χ²) is a statistical test applied to sets of categorical data to evaluate how likely it is that any observed difference between the sets arose by chance.

Philosophical Transactions of the Royal Society

Philosophical Transactions, titled Philosophical Transactions of the Royal Society (often abbreviated as Phil. Trans.) from 1776, is a scientific journal published by the Royal Society.

Philosophical Transactions of the Royal Society A

Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences is a fortnightly peer-reviewed scientific journal published by the Royal Society.

Philosophy of science

Philosophy of science is a sub-field of philosophy concerned with the foundations, methods, and implications of science.

Pierre-Simon Laplace

Pierre-Simon, marquis de Laplace (23 March 1749 – 5 March 1827) was a French scholar whose work was important to the development of mathematics, statistics, physics and astronomy.

Placebo

A placebo is a substance or treatment of no intended therapeutic value.

Poisson distribution

In probability theory and statistics, the Poisson distribution, named after French mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant rate and independently of the time since the last event.

Posterior probability

In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence or background is taken into account.

Power (statistics)

The power of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis (H0) when a specific alternative hypothesis (H1) is true.
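
Power can be estimated by simulation; a sketch assuming normally distributed data with an invented effect size of 0.5 standard deviations and n = 30 (the 1.96 cutoff approximates a two-sided 5% test):

```python
import math
import random
import statistics

random.seed(1)

def rejects(sample, mu0=0.0):
    """Two-sided test of H0: mean == mu0, using a normal 5% cutoff."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))
    return abs(z) > 1.96

# Simulate experiments whose true mean is 0.5 (sd 1, n = 30) and count
# how often the test correctly rejects H0: mean == 0.
trials = 2000
power = sum(rejects([random.gauss(0.5, 1.0) for _ in range(30)])
            for _ in range(trials)) / trials
print(power)
```

The estimate should land near the textbook value of roughly 0.78 for this configuration; increasing n or the effect size raises it.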

Principle of indifference

The principle of indifference (also called principle of insufficient reason) is a rule for assigning epistemic probabilities.

Prior

Prior, derived from the Latin for "earlier, first", (or prioress for nuns) is an ecclesiastical title for a superior, usually lower in rank than an abbot or abbess.

Prior probability

In Bayesian statistical inference, a prior probability distribution, often simply called the prior, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account.

Probability

Probability is the measure of the likelihood that an event will occur.

Proof by contradiction

In logic, proof by contradiction is a form of proof, and more specifically a form of indirect proof, that establishes the truth or validity of a proposition.

Publication bias

Publication bias is a type of bias that occurs in published academic research.

Radioactive decay

Radioactive decay (also known as nuclear decay or radioactivity) is the process by which an unstable atomic nucleus loses energy (in terms of mass in its rest frame) by emitting radiation, such as an alpha particle, beta particle with neutrino or only a neutrino in the case of electron capture, gamma ray, or electron in the case of internal conversion.

Random variable

In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is a variable whose possible values are outcomes of a random phenomenon.

Raphael Weldon

Walter Frank Raphael Weldon DSc FRS (15 March 1860 in Highgate, London – 13 April 1906 in Oxford) generally called Raphael Weldon, was an English evolutionary biologist and a founder of biometry.

Resampling (statistics)

In statistics, resampling is any of a variety of methods that reuse the observed sample, such as bootstrapping, the jackknife, permutation tests, and cross-validation.
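
One common resampling method is the permutation test; a sketch with invented two-group data:

```python
import random
import statistics

random.seed(42)
a = [12.1, 11.8, 13.0, 12.6, 12.9, 13.4]
b = [11.2, 11.5, 10.9, 11.8, 11.0, 11.4]
observed = statistics.mean(a) - statistics.mean(b)

# Shuffle the pooled values and see how often a group-mean difference
# at least as extreme as the observed one arises by label-swapping alone.
pooled = a + b
reps = 5000
extreme = 0
for _ in range(reps):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
    if abs(diff) >= abs(observed):
        extreme += 1
print(extreme / reps)
```

Because these two groups barely overlap, very few random relabelings reproduce the observed gap, so the permutation p-value is small.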

Ronald Fisher

Sir Ronald Aylmer Fisher (17 February 1890 – 29 July 1962), who published as R. A. Fisher, was a British statistician and geneticist.

Sample (statistics)

In statistics and quantitative research methodology, a data sample is a set of data collected and/or selected from a statistical population by a defined procedure.

Sample size determination

Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample.

Scientific method

Scientific method is an empirical method of knowledge acquisition, which has characterized the development of natural science since at least the 17th century, involving careful observation, which includes rigorous skepticism about what one observes, given that cognitive assumptions about how the world works influence how one interprets a percept; formulating hypotheses, via induction, based on such observations; experimental testing and measurement of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings.

Sensitivity and specificity

Sensitivity and specificity are statistical measures of the performance of a binary classification test, also known in statistics as a classification function.

Sign test

The sign test is a statistical method to test for consistent differences between pairs of observations, such as the weight of subjects before and after treatment.
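
A sketch of a two-sided sign test on invented before/after measurements; the test reduces each pair to the sign of its difference and uses the binomial distribution:

```python
from math import comb

# Invented before/after measurements for ten subjects.
before = [72, 68, 80, 75, 90, 84, 77, 69, 81, 73]
after_ = [70, 65, 78, 76, 85, 80, 74, 66, 79, 70]

# Keep the sign of each nonzero difference (ties are dropped).
diffs = [b - a for b, a in zip(before, after_) if b != a]
pos = sum(d > 0 for d in diffs)
n = len(diffs)

# Under H0 (no systematic difference) the positive-sign count is
# Binomial(n, 1/2); double the smaller tail for a two-sided p-value.
tail = min(pos, n - pos)
p = min(1.0, 2 * sum(comb(n, i) for i in range(tail + 1)) / 2 ** n)
print(pos, n, p)
```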

Size (statistics)

In statistics, the size of a test is the probability of falsely rejecting the null hypothesis.

Statistical assumption

Statistics, like all mathematical disciplines, does not infer valid conclusions from nothing.

Statistical inference

Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution.

Statistical model

A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of some sample data and similar data from a larger population.

Statistical process control

Statistical process control (SPC) is a method of quality control which employs statistical methods to monitor and control a process.

Statistical significance

In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis.

Statistics

Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data.

Straw man

A straw man is a common form of argument and is an informal fallacy based on giving the impression of refuting an opponent's argument, while actually refuting an argument that was not presented by that opponent.

Student's t-distribution

In probability and statistics, Student's t-distribution (or simply the t-distribution) is any member of a family of continuous probability distributions that arises when estimating the mean of a normally distributed population in situations where the sample size is small and population standard deviation is unknown.

Student's t-test

The t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis.
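
A sketch of the equal-variance two-sample t statistic on invented data; the p-value step is omitted because the t distribution's CDF is not in the Python standard library, so the statistic is compared against the two-sided 5% critical value of about 2.23 for 10 degrees of freedom:

```python
import math
import statistics

def pooled_t(x, y):
    """Equal-variance two-sample t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x) +
           (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8]
b = [4.6, 4.4, 4.9, 4.5, 4.7, 4.3]
t = pooled_t(a, b)
print(round(t, 2))  # well above the ~2.23 critical value for 10 df
```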

Subjectivity

Subjectivity is a central philosophical concept, related to consciousness, agency, personhood, reality, and truth, which has been variously defined by sources.

Test statistic

A test statistic is a statistic (a quantity derived from the sample) used in statistical hypothesis testing.

Trial

In law, a trial is a coming together of parties to a dispute, to present information (in the form of evidence) in a tribunal, a formal setting with the authority to adjudicate claims or disputes.

Type I and type II errors

In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a "false positive" finding), while a type II error is failing to reject a false null hypothesis (also known as a "false negative" finding).

Uniformly most powerful test

In statistical hypothesis testing, a uniformly most powerful (UMP) test is a hypothesis test which has the greatest power β among all possible tests of a given size α. For example, according to the Neyman–Pearson lemma, the likelihood-ratio test is UMP for testing simple (point) hypotheses.

William Sealy Gosset

William Sealy Gosset (13 June 1876 – 16 October 1937) was an English statistician.

You can't have your cake and eat it

You can't have your cake and eat it (too) is a popular English idiomatic proverb or figure of speech.

References
