80 relations: Abstract machine, Algorithm, Algorithmic efficiency, Amortized analysis, Analysis of parallel algorithms, Arbitrary-precision arithmetic, Arithmetic progression, Asymptotic analysis, Asymptotic computational complexity, Benchmark (computing), Best, worst and average case, Big O notation, Binary search algorithm, Cambridge University Press, Collation, Computation, Computational complexity, Computational complexity theory, Computational problem, Computer, Computer file, Computer program, Computer science, Control flow, Cross-platform, Cryptography, Deterministic system, Donald Knuth, DSPACE, DTIME, Elegance, Empirical evidence, Exponential growth, Factorization, Function (mathematics), Hybrid algorithm, Implementation, Information, Information-based complexity, Insertion sort, Instruction set architecture, Introduction to Algorithms, Iterated logarithm, Iteration, Kilobyte, Linear search, Linearity, List (abstract data type), Logarithm, Log–log plot, Master theorem (analysis of algorithms), Mathematical induction, Memory segmentation, Merge sort, Model of computation, Nanosecond, NP-completeness, Numerical analysis, Operating system, Profiling (computer programming), Program optimization, Programming language, Pseudocode, Quadratic growth, Quantum computing, Quicksort, Reduction (mathematics), Robert Tarjan, Rule of thumb, Scalability, Smoothed analysis, Springer Science+Business Media, System resource, Termination analysis, The Art of Computer Programming, Time complexity, Timsort, Triviality (mathematics), Turing machine, Upper and lower bounds.
An abstract machine, also called an abstract computer, is a theoretical model of a computer hardware or software system used in automata theory.
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems.
In computer science, algorithmic efficiency is a property of an algorithm which relates to the number of computational resources used by the algorithm.
In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute.
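A minimal Python sketch of the classic amortized-analysis example, a doubling dynamic array (class and attribute names are illustrative, not from the source): a single append can cost O(n) when it triggers a resize, but doubling the capacity bounds the total copy work over n appends by roughly 2n, so the amortized cost per append is O(1).

```python
class DynamicArray:
    """Doubling array: each append costs O(1) amortized.

    A resize copies all current elements, but doubling the capacity
    ensures the total copy cost over n appends is less than 2n,
    so the average (amortized) cost per append is constant.
    """

    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity
        self.copies = 0  # total elements copied during all resizes

    def append(self, value):
        if self._size == self._capacity:
            # Resize: allocate double the capacity and copy everything over.
            self._capacity *= 2
            new_data = [None] * self._capacity
            for i in range(self._size):
                new_data[i] = self._data[i]
                self.copies += 1
            self._data = new_data
        self._data[self._size] = value
        self._size += 1

    def __len__(self):
        return self._size
```

After 1000 appends the resize copies total 1023 elements — under the 2n bound that the amortized argument predicts.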
In computer science, the analysis of parallel algorithms studies the resources (such as time and work) used by algorithms executed on multiple processors in parallel.
In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system.
In mathematics, an arithmetic progression (AP) or arithmetic sequence is a sequence of numbers such that the difference between the consecutive terms is constant.
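The closed-form formulas for an arithmetic progression can be sketched directly in Python (function names are illustrative):

```python
def ap_term(a, d, n):
    """n-th term (1-indexed) of an AP with first term a and common difference d."""
    return a + (n - 1) * d

def ap_sum(a, d, n):
    """Sum of the first n terms: n * (first term + last term) / 2."""
    return n * (2 * a + (n - 1) * d) // 2
```

For example, the sum 1 + 2 + ... + 100 is `ap_sum(1, 1, 100)`, which evaluates to 5050.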
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior.
In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation.
In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.
In computer science, best, worst, and average cases of a given algorithm express what the resource usage is at least, at most and on average, respectively.
Big O notation is a mathematical notation that describes the limiting behaviour of a function when the argument tends towards a particular value or infinity.
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array.
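A minimal Python sketch of binary search (the function name is illustrative): each comparison halves the remaining interval, giving O(log n) comparisons in the worst case.

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.

    Requires sorted_list to be in ascending order; each iteration
    halves the search interval, so at most O(log n) comparisons
    are made.
    """
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1  # target can only be in the right half
        else:
            hi = mid - 1  # target can only be in the left half
    return -1
```

In practice, Python's standard library offers the same interval-halving via the `bisect` module.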
Cambridge University Press (CUP) is the publishing business of the University of Cambridge.
Collation is the assembly of written information into a standard order.
Computation is any type of calculation that includes both arithmetical and non-arithmetical steps and follows a well-defined model, for example an algorithm.
In computer science, the computational complexity, or simply complexity of an algorithm is the amount of resources required for running it.
Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other.
In theoretical computer science, a computational problem is a mathematical object representing a collection of questions that computers might be able to solve.
A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming.
A computer file is a computer resource for recording data discretely in a computer storage device.
A computer program is a collection of instructions that performs a specific task when executed by a computer.
Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations.
In computer science, control flow (or flow of control) is the order in which individual statements, instructions or function calls of an imperative program are executed or evaluated.
In computing, cross-platform software (also multi-platform software or platform-independent software) is computer software that is implemented on multiple computing platforms.
Cryptography or cryptology (from Greek κρυπτός, kryptós, "hidden, secret") is the practice and study of techniques for secure communication in the presence of third parties called adversaries.
In mathematics, computer science and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system.
Donald Ervin Knuth (born January 10, 1938) is an American computer scientist, mathematician, and professor emeritus at Stanford University.
In computational complexity theory, DSPACE or SPACE is the computational resource describing the resource of memory space for a deterministic Turing machine.
In computational complexity theory, DTIME (or TIME) is the computational resource of computation time for a deterministic Turing machine.
Elegance is beauty that shows unusual effectiveness and simplicity.
Empirical evidence, also known as sensory experience, is the information received by means of the senses, particularly by observation and documentation of patterns and behavior through experimentation.
Exponential growth is exhibited when the rate of change of a mathematical function's value is proportional to the function's current value, so that its value at any time is an exponential function of time, i.e. a function in which the time value appears in the exponent.
In mathematics, factorization (also factorisation in some forms of British English) or factoring consists of writing a number or another mathematical object as a product of several factors, usually smaller or simpler objects of the same kind.
In mathematics, a function was originally the idealization of how a varying quantity depends on another quantity.
A hybrid algorithm is an algorithm that combines two or more other algorithms that solve the same problem, either choosing one (depending on the data), or switching between them over the course of the algorithm.
Implementation is the realization of an application, or execution of a plan, idea, model, design, specification, standard, algorithm, or policy.
Information is any entity or form that provides the answer to a question of some kind or resolves uncertainty.
Information-based complexity (IBC) studies optimal algorithms and computational complexity for the continuous problems which arise in physical science, economics, engineering, and mathematical finance.
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time.
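A minimal Python sketch of insertion sort (the function name is illustrative): it runs in O(n²) time in the worst case but only O(n) on already-sorted input, which is why hybrid sorts use it on small or nearly-sorted runs.

```python
def insertion_sort(items):
    """Sort a list in place and return it.

    Each pass takes the next element and shifts larger elements
    rightward until the element's sorted position is found.
    O(n^2) worst case, O(n) on already-sorted input.
    """
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]  # shift larger element right
            j -= 1
        items[j + 1] = key
    return items
```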
An instruction set architecture (ISA) is an abstract model of a computer.
Introduction to Algorithms is a book by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1.
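A direct Python sketch of the definition (the function name is illustrative): repeatedly take the logarithm and count the applications until the value drops to 1 or below.

```python
import math

def log_star(n, base=2):
    """Iterated logarithm: how many times log must be applied
    before the result is <= 1. Grows extremely slowly:
    log*(65536) is only 4 in base 2."""
    count = 0
    while n > 1:
        n = math.log(n, base)
        count += 1
    return count
```

The extremely slow growth is the point: `log_star(65536)` is 4, and even 2^65536 would only give 5.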
Iteration is the act of repeating a process, to generate a (possibly unbounded) sequence of outcomes, with the aim of approaching a desired goal, target or result.
The kilobyte is a multiple of the unit byte for digital information.
In computer science, linear search or sequential search is a method for finding a target value within a list.
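A minimal Python sketch of linear search (the function name is illustrative): it takes O(n) comparisons in the worst case, versus O(log n) for binary search, but requires no ordering of the list.

```python
def linear_search(items, target):
    """Scan left to right and return the index of the first match,
    or -1 if target is absent. O(n) worst case, but works on
    unsorted data."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1
```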
Linearity is the property of a mathematical relationship or function which means that it can be graphically represented as a straight line.
In computer science, a list or sequence is an abstract data type that represents a countable number of ordered values, where the same value may occur more than once.
In mathematics, the logarithm is the inverse function to exponentiation.
In science and engineering, a log–log graph or log–log plot is a two-dimensional graph of numerical data that uses logarithmic scales on both the horizontal and vertical axes.
In the analysis of algorithms, the master theorem for divide-and-conquer recurrences provides an asymptotic analysis (using Big O notation) for recurrence relations of types that occur in the analysis of many divide and conquer algorithms.
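In its standard form, the theorem covers recurrences shaped like the following (a sketch of the usual statement; the regularity condition on the third case is omitted here for brevity):

```latex
T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\ b > 1.

\text{Let } c_{\mathrm{crit}} = \log_b a. \text{ Then}
T(n) = \begin{cases}
\Theta\!\left(n^{c_{\mathrm{crit}}}\right) & \text{if } f(n) = O(n^{c}) \text{ for some } c < c_{\mathrm{crit}},\\[2pt]
\Theta\!\left(n^{c_{\mathrm{crit}}} \log n\right) & \text{if } f(n) = \Theta\!\left(n^{c_{\mathrm{crit}}}\right),\\[2pt]
\Theta\!\left(f(n)\right) & \text{if } f(n) = \Omega(n^{c}) \text{ for some } c > c_{\mathrm{crit}}.
\end{cases}
```

For example, merge sort's recurrence T(n) = 2T(n/2) + Θ(n) has a = b = 2, so c_crit = 1 and the second case gives T(n) = Θ(n log n).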
Mathematical induction is a mathematical proof technique.
Memory segmentation is the division of a computer's primary memory into segments or sections.
In computer science, merge sort (also commonly spelled mergesort) is an efficient, general-purpose, comparison-based sorting algorithm.
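A minimal Python sketch of merge sort (the function name is illustrative): split the list in half, sort each half recursively, and merge the two sorted halves, giving O(n log n) time in all cases.

```python
def merge_sort(items):
    """Return a sorted copy of items via divide and conquer.

    Splitting gives O(log n) levels of recursion; merging at each
    level costs O(n), for O(n log n) total in every case.
    """
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge two sorted lists into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```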
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how a set of outputs are computed given a set of inputs.
A nanosecond (ns) is an SI unit of time equal to one thousand-millionth of a second (or one billionth of a second), that is, 1/1,000,000,000 of a second, or 10⁻⁹ seconds.
In computational complexity theory, an NP-complete decision problem is one belonging to both the NP and the NP-hard complexity classes.
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.
In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls.
In computer science, program optimization or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources.
A programming language is a formal language that specifies a set of instructions that can be used to produce various kinds of output.
Pseudocode is an informal high-level description of the operating principle of a computer program or other algorithm.
In mathematics, a function or sequence is said to exhibit quadratic growth when its values are proportional to the square of the function argument or sequence position.
Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement.
Quicksort (sometimes called partition-exchange sort) is an efficient sorting algorithm, serving as a systematic method for placing the elements of an array in order.
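A minimal Python sketch of quicksort (the function name is illustrative; a simple last-element pivot is used rather than any particular published variant): partition around a pivot and recurse on the two sides, giving O(n log n) on average but O(n²) in the worst case — here, for already-sorted input.

```python
def quicksort(items):
    """Return a sorted copy of items.

    Partitions around the last element as pivot, then recursively
    sorts the two partitions. Average case O(n log n); worst case
    O(n^2) when the pivot choice is consistently unbalanced.
    """
    if len(items) <= 1:
        return items
    pivot = items[-1]
    less = [x for x in items[:-1] if x <= pivot]
    greater = [x for x in items[:-1] if x > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)
```

Production implementations partition in place and pick pivots more carefully (e.g. median-of-three or random) to avoid the quadratic worst case on common inputs.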
In mathematics, reduction refers to the rewriting of an expression into a simpler form.
Robert Endre Tarjan (born April 30, 1948) is an American computer scientist and mathematician.
The English phrase rule of thumb refers to a principle with broad application that is not intended to be strictly accurate or reliable for every situation.
Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.
Smoothed analysis is a way of measuring the complexity of an algorithm.
Springer Science+Business Media or Springer, part of Springer Nature since 2015, is a global publishing company that publishes books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing.
In computing, a system resource, or simply resource, is any physical or virtual component of limited availability within a computer system.
In computer science, a termination analysis is program analysis which attempts to determine whether the evaluation of a given program will definitely terminate.
The Art of Computer Programming (sometimes known by its initials TAOCP) is a comprehensive monograph written by Donald Knuth that covers many kinds of programming algorithms and their analysis.
In computer science, the time complexity is the computational complexity that describes the amount of time it takes to run an algorithm.
Timsort is a hybrid stable sorting algorithm, derived from merge sort and insertion sort, designed to perform well on many kinds of real-world data.
In mathematics, the adjective trivial is frequently used for objects (for example, groups or topological spaces) that have a very simple structure.
A Turing machine is a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a table of rules.
In mathematics, especially in order theory, an upper bound of a subset S of some partially ordered set (K, ≤) is an element of K which is greater than or equal to every element of S. The term lower bound is defined dually as an element of K which is less than or equal to every element of S. A set with an upper bound is said to be bounded from above by that bound, a set with a lower bound is said to be bounded from below by that bound.
"uniform cost model", Algorithm analysis, Complexity analysis, Computational expense, Computationally expensive, Cost model (computer science), Design and analysis of algorithms, Logarithmic cost model, Problem size, Run-time analysis, Runtime analysis, Time/space complexity, Uniform cost model.