136 relations: Abstract machine, Ackermann function, Adleman–Pomerance–Rumely primality test, AKS primality test, Alexander Schrijver, Algorithm, Alphabetical order, Amortized analysis, Approximation algorithm, Array data structure, Asymptotic analysis, Average-case complexity, Big O notation, Binary number, Binary search algorithm, Binary tree, Bit, Boolean satisfiability problem, Boyer–Moore string-search algorithm, BPP (complexity), BQP, Brute-force search, Bubble sort, Clique problem, Cobham's thesis, Comparison sort, Complexity class, Computable function, Computational complexity, Computational complexity theory, Computational hardness assumption, Computer science, Conjunctive normal form, Content-addressable memory, Convolution theorem, Decision problem, Dictionary, Disjoint-set data structure, DLOGTIME, Double exponential function, DSPACE, DTIME, Dynamic programming, E (complexity), Element (mathematics), Elsevier, Euclidean algorithm, Exponential time hypothesis, Exponentiation by squaring, EXPTIME, ..., Fast Fourier transform, Floor and ceiling functions, Formal language, Function (mathematics), Game theory, General number field sieve, Graph (discrete mathematics), Graph coloring, Graph isomorphism problem, Gröbner basis, Greatest common divisor, Grover's algorithm, Heapsort, Infra-exponential, Insertion sort, Instruction set architecture, Integer factorization, Introsort, Iterated logarithm, Journal of Computer and System Sciences, K-d tree, Karmarkar's algorithm, L-notation, László Lovász, Lecture Notes in Computer Science, Linear programming, Logarithm, Machine learning, Matching (graph theory), Mathematical optimization, Matrix chain multiplication, Maximum subarray problem, Merge sort, Monge array, Non-deterministic Turing machine, NP (complexity), NP-completeness, NP-hardness, P (complexity), P versus NP problem, Parallel algorithm, Parallel computing, Parallel random-access machine, Parameterized complexity, Partial correlation, Patience sorting, 
Planted clique, Polygon triangulation, Polylogarithmic function, Polynomial, Presburger arithmetic, Priority queue, Probabilistic Turing machine, Property testing, Pseudo-polynomial time, Quantifier elimination, Quantum algorithm, Quantum Turing machine, Quicksort, Raimund Seidel, Random graph, Randomized algorithm, Real closed field, Recurrence relation, Reduction (complexity), Robustness (computer science), RP (complexity), Selection sort, Self-balancing binary search tree, Set cover problem, Shellsort, Smoothsort, Society for Industrial and Applied Mathematics, Sorting algorithm, Springer Science+Business Media, Steiner tree problem, Stirling's approximation, Time complexity, Travelling salesman problem, Tree sort, Turing machine, Ukkonen's algorithm, Upper and lower bounds, Worst-case complexity, ZPP (complexity), 2-EXPTIME.
An abstract machine, also called an abstract computer, is a theoretical model of a computer hardware or software system used in automata theory.
In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive.
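As an illustrative sketch, the common two-argument (Ackermann–Péter) form of the function can be written directly from its recurrence:

```python
def ackermann(m, n):
    # Ackermann-Peter recurrence: total and computable, but it grows
    # far too fast to be primitive recursive.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Even tiny arguments explode: ackermann(4, 2) already has 19,729 decimal digits, so only very small inputs are practical.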
In computational number theory, the Adleman–Pomerance–Rumely primality test is an algorithm for determining whether a number is prime.
The AKS primality test (also known as Agrawal–Kayal–Saxena primality test and cyclotomic AKS test) is a deterministic primality-proving algorithm created and published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, computer scientists at the Indian Institute of Technology Kanpur, on August 6, 2002, in a paper titled "PRIMES is in P".
Alexander (Lex) Schrijver (born 4 May 1948 in Amsterdam) is a Dutch mathematician and computer scientist, a professor of discrete mathematics and optimization at the University of Amsterdam and a fellow at the Centrum Wiskunde & Informatica in Amsterdam.
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems.
Alphabetical order is a system whereby strings of characters are placed in order based on the position of the characters in the conventional ordering of an alphabet.
In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute.
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to NP-hard optimization problems with provable guarantees on the distance of the returned solution to the optimal one.
In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements (values or variables), each identified by at least one array index or key.
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior.
In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs.
Big O notation is a mathematical notation that describes the limiting behaviour of a function when the argument tends towards a particular value or infinity.
In mathematics and digital electronics, a binary number is a number expressed in the base-2 numeral system or binary numeral system, which uses only two symbols: typically 0 (zero) and 1 (one).
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array.
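A minimal sketch of the halving loop, assuming a sorted list and returning -1 when the target is absent:

```python
def binary_search(arr, target):
    # Repeatedly halve the search interval [lo, hi]; O(log n) comparisons.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```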
In computer science, a binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child.
The bit (a portmanteau of binary digit) is a basic unit of information used in computing and digital communications.
In computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated as SATISFIABILITY or SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula.
In computer science, the Boyer–Moore string-search algorithm is an efficient string-searching algorithm that is the standard benchmark for practical string-search literature.
In computational complexity theory, BPP, which stands for bounded-error probabilistic polynomial time, is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded away from 1/2 for all instances.
In computational complexity theory, BQP (bounded-error quantum polynomial time) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances.
In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.
Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order.
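A minimal sketch of the repeated adjacent-swap pass, with the standard early-exit when a pass makes no swaps:

```python
def bubble_sort(items):
    # O(n^2) worst case; the early exit makes it O(n) on sorted input.
    a = list(items)
    n = len(a)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):      # the last i items are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                  # a full pass with no swaps: done
            break
    return a
```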
In computer science, the clique problem is the computational problem of finding cliques (subsets of vertices, all adjacent to each other, also called complete subgraphs) in a graph.
Cobham's thesis, also known as Cobham–Edmonds thesis (named after Alan Cobham and Jack Edmonds), asserts that computational problems can be feasibly computed on some computational device only if they can be computed in polynomial time; that is, if they lie in the complexity class P (PTIME).
A comparison sort is a type of sorting algorithm that only reads the list elements through a single abstract comparison operation (often a "less than or equal to" operator or a three-way comparison) that determines which of two elements should occur first in the final sorted list.
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity.
Computable functions are the basic objects of study in computability theory.
In computer science, the computational complexity, or simply complexity of an algorithm is the amount of resources required for running it.
Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other.
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where efficiently typically means "in polynomial time").
Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations.
In Boolean logic, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is an AND of ORs.
Content-addressable memory (CAM) is a special type of computer memory used in certain very-high-speed searching applications.
In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution is the pointwise product of Fourier transforms.
In computability theory and computational complexity theory, a decision problem is a problem that can be posed as a yes-no question of the input values.
A dictionary, sometimes known as a wordbook, is a collection of words in one or more specific languages, often arranged alphabetically (or by radical and stroke for ideographic languages), which may include information on definitions, usage, etymologies, pronunciations, translation, etc.
In computer science, a disjoint-set data structure (also called a union–find data structure or merge–find set) is a data structure that tracks a set of elements partitioned into a number of disjoint (non-overlapping) subsets.
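A compact sketch of union–find with path halving and union by rank, the combination whose amortized cost per operation is bounded by the inverse Ackermann function:

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point every other node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        # Attach the shallower tree under the deeper one (union by rank).
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```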
In computational complexity theory, DLOGTIME is the complexity class of all computational problems solvable in a logarithmic amount of computation time on a deterministic Turing machine.
A double exponential function is a constant raised to the power of an exponential function.
In computational complexity theory, DSPACE or SPACE is the computational resource describing the resource of memory space for a deterministic Turing machine.
In computational complexity theory, DTIME (or TIME) is the computational resource of computation time for a deterministic Turing machine.
Dynamic programming is both a mathematical optimization method and a computer programming method.
In computational complexity theory, the complexity class E is the set of decision problems that can be solved by a deterministic Turing machine in time 2^(O(n)) and is therefore equal to the complexity class DTIME(2^(O(n))).
In mathematics, an element, or member, of a set is any one of the distinct objects that make up that set.
Elsevier is an information and analytics company and one of the world's major providers of scientific, technical, and medical information.
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers.
In computational complexity theory, the exponential time hypothesis is an unproven computational hardness assumption that was formulated by Impagliazzo and Paturi.
In mathematics and computer programming, exponentiating by squaring is a general method for fast computation of large positive integer powers of a number, or more generally of an element of a semigroup, like a polynomial or a square matrix.
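A minimal sketch of the binary (square-and-multiply) method, which needs only O(log exp) multiplications instead of exp − 1:

```python
def power(base, exp):
    # Scan the bits of exp from least to most significant,
    # squaring the base at each step.
    result = 1
    while exp > 0:
        if exp & 1:          # this bit contributes base^(2^k)
            result *= base
        base *= base
        exp >>= 1
    return result
```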
In computational complexity theory, the complexity class EXPTIME (sometimes called EXP or DEXPTIME) is the set of all decision problems that have exponential runtime, i.e., that are solvable by a deterministic Turing machine in O(2^(p(n))) time, where p(n) is a polynomial function of n. In terms of DTIME, EXPTIME is the union over all natural numbers k of DTIME(2^(n^k)). We know P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE and also, by the time hierarchy theorem and the space hierarchy theorem, that P ⊊ EXPTIME and PSPACE ⊊ EXPSPACE, so at least one of the first three inclusions and at least one of the last three inclusions must be proper, but it is not known which ones are.
A fast Fourier transform (FFT) is an algorithm that samples a signal over a period of time (or space) and divides it into its frequency components.
In mathematics and computer science, the floor function is the function that takes as input a real number x and gives as output the greatest integer less than or equal to x, denoted ⌊x⌋.
In mathematics, computer science, and linguistics, a formal language is a set of strings of symbols together with a set of rules that are specific to it.
In mathematics, a function was originally the idealization of how a varying quantity depends on another quantity.
Game theory is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers".
In number theory, the general number field sieve (GNFS) is the most efficient classical algorithm known for factoring integers larger than 10^100.
In mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related".
In graph theory, graph coloring is a special case of graph labeling; it is an assignment of labels traditionally called "colors" to elements of a graph subject to certain constraints.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic.
In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring over a field.
In mathematics, the greatest common divisor (gcd) of two or more integers, which are not all zero, is the largest positive integer that divides each of the integers.
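The Euclidean algorithm mentioned above computes the gcd by repeatedly replacing the pair (a, b) with (b, a mod b) until the remainder is zero; a minimal Python version:

```python
def gcd(a, b):
    # Each step strictly shrinks |b|, so the loop terminates;
    # the last nonzero value of a is the gcd.
    while b:
        a, b = b, a % b
    return abs(a)
```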
Grover's algorithm is a quantum algorithm that finds with high probability the unique input to a black box function that produces a particular output value, using just O(√N) evaluations of the function, where N is the size of the function's domain.
In computer science, heapsort is a comparison-based sorting algorithm.
A growth rate is said to be infra-exponential or subexponential if it is dominated by all exponential growth rates, however great the doubling time.
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time.
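A minimal sketch: each new item is shifted left through the already-sorted prefix until it reaches its place, giving O(n²) worst case but O(n) on nearly sorted input:

```python
def insertion_sort(items):
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```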
An instruction set architecture (ISA) is an abstract model of a computer.
In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers.
Introsort or introspective sort is a hybrid sorting algorithm that provides both fast average performance and (asymptotically) optimal worst-case performance.
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1.
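The definition translates directly into a loop; this sketch uses base 2:

```python
import math

def log_star(n):
    # Count how many times log2 must be applied before the value
    # drops to 1 or below. Grows extremely slowly: log*(2^65536) = 5.
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count
```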
The Journal of Computer and System Sciences (JCSS) is a peer-reviewed scientific journal in the field of computer science.
In computer science, a k-d tree (short for k-dimensional tree) is a space-partitioning data structure for organizing points in a k-dimensional space.
Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems.
L-notation is an asymptotic notation analogous to big-O notation, denoted as L_n[α, c] for a bound variable n tending to infinity.
László Lovász (born March 9, 1948) is a Hungarian mathematician, best known for his work in combinatorics, for which he was awarded the Wolf Prize and the Knuth Prize in 1999, and the Kyoto Prize in 2010.
Springer Lecture Notes in Computer Science (LNCS) is a series of computer science books published by Springer Science+Business Media (formerly Springer-Verlag) since 1973.
Linear programming (LP, also called linear optimization) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships.
In mathematics, the logarithm is the inverse function to exponentiation.
Machine learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to "learn" (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.
In the mathematical discipline of graph theory, a matching or independent edge set in a graph is a set of edges without common vertices.
In mathematics, computer science and operations research, mathematical optimization or mathematical programming, alternatively spelled optimisation, is the selection of a best element (with regard to some criterion) from some set of available alternatives.
Matrix chain multiplication (or Matrix Chain Ordering Problem, MCOP) is an optimization problem that can be solved using dynamic programming.
In computer science, the maximum subarray problem is the task of finding the contiguous subarray within a one-dimensional array a[1..n] of numbers which has the largest sum; formally, the task is to find indices i and j with 1 ≤ i ≤ j ≤ n that maximize the sum a[i] + a[i+1] + ... + a[j].
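One standard solution is Kadane's algorithm, a single O(n) scan that tracks the best sum of a subarray ending at the current index:

```python
def max_subarray(a):
    # best: largest sum seen so far; current: best sum ending here.
    best = current = a[0]
    for x in a[1:]:
        current = max(x, current + x)   # extend the run or start over at x
        best = max(best, current)
    return best
```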
In computer science, merge sort (also commonly spelled mergesort) is an efficient, general-purpose, comparison-based sorting algorithm.
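A minimal (non-in-place) sketch of the divide-and-conquer recursion, whose O(n log n) running time follows from the recurrence T(n) = 2T(n/2) + O(n):

```python
def merge_sort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal keys in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```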
In mathematics applied to computer science, Monge arrays, or Monge matrices, are mathematical objects named for their discoverer, the French mathematician Gaspard Monge.
In theoretical computer science, a non-deterministic Turing machine is a theoretical model of computation whose governing rules specify more than one possible action when in some given situations, so its computation forms a tree of possible branches rather than a single path.
In computational complexity theory, NP (for nondeterministic polynomial time) is a complexity class used to describe certain types of decision problems.
In computational complexity theory, an NP-complete decision problem is one belonging to both the NP and the NP-hard complexity classes.
NP-hardness (non-deterministic polynomial-time hardness), in computational complexity theory, is the defining property of a class of problems that are, informally, "at least as hard as the hardest problems in NP".
In computational complexity theory, P, also known as PTIME or DTIME(n^(O(1))), is a fundamental complexity class.
The P versus NP problem is a major unsolved problem in computer science.
In computer science, a parallel algorithm, as opposed to a traditional serial algorithm, is an algorithm which can be executed a piece at a time on many different processing devices, and then combined together again at the end to get the correct result.
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out concurrently.
In computer science, a parallel random-access machine (PRAM) is a shared-memory abstract machine.
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output.
In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed.
In computer science, patience sorting is a sorting algorithm inspired by, and named after, the card game patience.
In computational complexity theory, a planted clique or hidden clique in an undirected graph is a clique formed from another graph by selecting a subset of vertices and adding edges between each pair of vertices in the subset.
In computational geometry, polygon triangulation is the decomposition of a polygonal area (simple polygon) P into a set of triangles.
A polylogarithmic function in n is a polynomial in the logarithm of n. In computer science, polylogarithmic functions occur as the order of time or memory used by some algorithms (e.g., "it has polylogarithmic order").
In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients, that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents of variables.
Presburger arithmetic is the first-order theory of the natural numbers with addition, named in honor of Mojżesz Presburger, who introduced it in 1929.
In computer science, a priority queue is an abstract data type which is like a regular queue or stack data structure, but where additionally each element has a "priority" associated with it.
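A common implementation is a binary heap, as in Python's standard heapq module; a small sketch draining (priority, label) pairs in priority order:

```python
import heapq

def drain(tasks):
    # Push (priority, label) pairs onto a min-heap, then pop them
    # in ascending priority order; push and pop are each O(log n).
    heap = []
    for t in tasks:
        heapq.heappush(heap, t)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```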
In computability theory, a probabilistic Turing machine is a non-deterministic Turing machine which chooses between the available transitions at each point according to some probability distribution.
In computer science, a property testing algorithm for a decision problem is an algorithm whose query complexity to its input is much smaller than the instance size of the problem.
In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input) — but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial time algorithms.
Quantifier elimination is a concept of simplification used in mathematical logic, model theory, and theoretical computer science.
In quantum computing, a quantum algorithm is an algorithm which runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation.
A quantum Turing machine (QTM), also a universal quantum computer, is an abstract machine used to model the effect of a quantum computer.
Quicksort (sometimes called partition-exchange sort) is an efficient sorting algorithm, serving as a systematic method for placing the elements of an array in order.
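A minimal sketch of the partitioning idea (not the in-place variant), using the middle element as pivot; expected running time is O(n log n):

```python
def quicksort(a):
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    # Partition into elements below, equal to, and above the pivot,
    # then recurse on the two outer parts.
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

Production quicksorts partition in place and choose pivots more carefully to avoid the O(n²) worst case.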
Raimund G. Seidel is a German and Austrian theoretical computer scientist and an expert in computational geometry.
In mathematics, random graph is the general term to refer to probability distributions over graphs.
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic.
In mathematics, a real closed field is a field F that has the same first-order properties as the field of real numbers.
In mathematics, a recurrence relation is an equation that recursively defines a sequence or multidimensional array of values, once one or more initial terms are given: each further term of the sequence or array is defined as a function of the preceding terms.
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem.
In computer science, robustness is the ability of a computer system to cope with errors during execution and cope with erroneous input.
In computational complexity theory, randomized polynomial time (RP) is the complexity class of problems for which a probabilistic Turing machine exists with these properties.
In computer science, selection sort is a sorting algorithm, specifically an in-place comparison sort.
In computer science, a self-balancing (or height-balanced) binary search tree is any node-based binary search tree that automatically keeps its height (maximal number of levels below the root) small in the face of arbitrary item insertions and deletions.
The set cover problem is a classical question in combinatorics, computer science and complexity theory.
Shellsort, also known as Shell sort or Shell's method, is an in-place comparison sort.
In computer science, smoothsort is a comparison-based sorting algorithm.
The Society for Industrial and Applied Mathematics (SIAM) is an academic association dedicated to the use of mathematics in industry.
In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order.
Springer Science+Business Media or Springer, part of Springer Nature since 2015, is a global publishing company that publishes books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing.
Steiner tree problem, or minimum Steiner tree problem, named after Jakob Steiner, is an umbrella term for a class of problems in combinatorial optimization.
In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials.
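The formula n! ≈ √(2πn) · (n/e)^n is easy to check numerically; this sketch compares it with the exact factorial:

```python
import math

def stirling(n):
    # Stirling's approximation to n!; the relative error shrinks
    # roughly like 1/(12n) as n grows.
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n
```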
In computer science, the time complexity is the computational complexity that describes the amount of time it takes to run an algorithm.
The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science.
A tree sort is a sort algorithm that builds a binary search tree from the elements to be sorted, and then traverses the tree (in-order) so that the elements come out in sorted order.
A Turing machine is a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a table of rules.
In computer science, Ukkonen's algorithm is a linear-time, online algorithm for constructing suffix trees, proposed by Esko Ukkonen in 1995.
In mathematics, especially in order theory, an upper bound of a subset S of some partially ordered set (K, ≤) is an element of K which is greater than or equal to every element of S. The term lower bound is defined dually as an element of K which is less than or equal to every element of S. A set with an upper bound is said to be bounded from above by that bound, a set with a lower bound is said to be bounded from below by that bound.
In computer science, the worst-case complexity (usually denoted in asymptotic notation) measures the resources (e.g. running time, memory) an algorithm requires in the worst-case.
In complexity theory, ZPP (zero-error probabilistic polynomial time) is the complexity class of problems for which a probabilistic Turing machine exists with these properties.
In computational complexity theory, the complexity class 2-EXPTIME (sometimes called 2-EXP) is the set of all decision problems solvable by a deterministic Turing machine in O(2^(2^(p(n)))) time, where p(n) is a polynomial function of n. In terms of DTIME, 2-EXPTIME is the union over all natural numbers k of DTIME(2^(2^(n^k))). 2-EXPTIME can also be reformulated as the space class AEXPSPACE, the problems that can be solved by an alternating Turing machine in exponential space.
Algorithmic time complexity, Computation time, Computational time, Constant time, Cubic time, Double exponential time, Exponential algorithm, Exponential time, Fast algorithms, Linear time, Linear time algorithm, Linear-time, Linear-time algorithm, Linearithm, Linearithmic, Linearithmic function, Linearithmic time, Logarithmic time, N log n, Nlogn, Polylogarithmic time, Polynomial complexity, Polynomial time, Polynomial time algorithm, Polynomial-time, Polynomial-time algorithm, Polynomial-time solution, Polynomial-time solutions, Quadratic time, Quasi-polynomial time, Quasilinear time, Run-time complexity, Running time, SUBEXP, Strongly polynomial, Strongly polynomial time, Sub-exponential time, Sub-linear time, Subexponential time, Sublinear time, Sublinear time algorithm, Sublinear-time, Subquadratic time, Super-polynomial time, Superpolynomial, Weakly polynomial.