280 relations: Actor model theory, Address space, Advanced Micro Devices, Algorithm, Algorithmic skeleton, AltiVec, Amdahl's law, Analytical Engine, Apple Inc., Application checkpointing, Application programming interface, Application-specific integrated circuit, Atomic commit, Bandwidth (computing), Barnes–Hut simulation, Barrier (computer science), Bayesian network, Beowulf cluster, Berkeley Open Infrastructure for Network Computing, Bioinformatics, Bit-level parallelism, Blue Gene, Branch and bound, BrookGPU, Brute-force attack, Burroughs Corporation, Bus (computing), Bus contention, Bus snooping, C (programming language), C to HDL, C.mmp, Cache coherence, Calculus of communicating systems, Capacitance, Carnegie Mellon University, Carry flag, Cell (microprocessor), Central processing unit, Charles Babbage, Combinational logic, Commercial off-the-shelf, Communicating sequential processes, Compiler, Compute kernel, Computer architecture, Computer cluster, Computer data storage, Computer graphics, Computer industry, ..., Computer multitasking, Computer network, Computer performance, Computerworld, Computing, Concurrency (computer science), Concurrent computing, Consistency model, Content Addressable Parallel Processor, Control unit, Cooley–Tukey FFT algorithm, Core dump, CPU cache, CPU socket, CPU-bound, Cray, Cray-1, Critical path method, Critical section, Crossbar switch, CUDA, Daniel Slotnick, Data dependency, Data parallelism, Database, Dataflow architecture, David Patterson (computer scientist), Deadlock, Desktop computer, Directive (programming), Distributed computing, Distributed memory, Distributed shared memory, Domain-specific language, Donald Becker, Dynamic programming, Embarrassingly parallel, Ernest Hilgard, Error detection and correction, Ethernet, Execution unit, Explicit parallelism, Fault-tolerant computer system, Fiber (computer science), Field-programmable gate array, Finite element method, Finite-state machine, Floating-point arithmetic, 
Flynn's taxonomy, Folding@home, Fortran, Freescale Semiconductor, Frequency scaling, Futures and promises, George Gurdjieff, Gigabit Ethernet, Glossary of computer hardware terms, Graph traversal, Graphical model, Graphics processing unit, Gröbner basis, Grid computing, Gustafson's law, Handel-C, Hardware description language, Haskell (programming language), Hidden Markov model, Holy Grail, Honeywell, Hyper-threading, Hypercube graph, HyperTransport, IBM, ILLIAC IV, Implicit parallelism, Impulse C, InfiniBand, Instruction pipelining, Instruction set architecture, Instruction-level parallelism, Instructions per cycle, Integer, Intel, Inter-process communication, Internet, Internet protocol suite, Π-calculus, John Cocke, John L. Hennessy, Latency (engineering), Lattice Boltzmann methods, Lawrence Livermore National Laboratory, Leslie Lamport, Library (computing), Linear algebra, Linearizability, List of concurrent and parallel programming languages, List of distributed computing conferences, List of distributed computing projects, List of important publications in concurrent, parallel, and distributed computing, Load balancing (computing), Local area network, Lock (computer science), Lockstep (computing), Luigi Federico Menabrea, Manycore processor, Marvin Minsky, Massively parallel, Mathematical finance, Matrix (mathematics), Mean time between failures, Memory latency, Memory virtualization, Mesh networking, Message passing, Message Passing Interface, Meteorology, Michael Gazzaniga, Michael J. 
Flynn, Michio Kaku, Middleware, MIT Computer Science and Artificial Intelligence Laboratory, Mitrionics, Molecular dynamics, Monte Carlo method, Moore's law, Multi-core processor, Multics, Multiplexing, Mutual exclusion, Myrinet, N-body problem, Network topology, Non-blocking algorithm, Non-uniform memory access, Nvidia, Nvidia Tesla, Object (computer science), OpenCL, OpenHMPP, OpenMP, Operating system, Out-of-order execution, Parallel algorithm, Parallel programming model, Parallel slowdown, PeakStream, Pentium 4, Petri net, PlayStation 3, POSIX Threads, Process (computing), Process calculus, Propagation delay, Protein folding, Race condition, RapidMind, Reconfigurable computing, Reduced instruction set computer, Redundancy (engineering), Register renaming, Regular grid, Resource management (computing), RIKEN MDGRAPE-3, Ring network, Rob Pike, Robert E. Ornstein, Routing, Scoreboarding, Semaphore (programming), Sequence analysis, SequenceL, Sequential consistency, Serial computer, Serializability, Server (computing), SETI@home, Seymour Papert, Shared memory, Signal processing, SISAL, Society of Mind, Software, Software bug, Software lockout, Software transactional memory, Sony, Sorting algorithm, Speedup, Star network, Streaming SIMD Extensions, Supercomputer, Superscalar processor, Symmetric multiprocessing, Synchronization (computer science), Synchronous programming language, System C, SystemC, Systolic array, Task parallelism, Tejas and Jayhawk, Temporal logic of actions, Temporal multithreading, The Future of the Mind, Thomas Sterling (computing), Thread (computing), Time-sharing, Tomasulo algorithm, TOP500, Trace theory, Transistor, Transputer, Tree (graph theory), Uniform memory access, United States Air Force, Unstructured grid, Upper and lower bounds, Variable (computer science), Vector processor, Verilog, Very-large-scale integration, VHDL, Voltage, Word (computer architecture), X86-64, Yale Patt, 16-bit, 4-bit, 64-bit computing, 8-bit. 
In theoretical computer science, Actor model theory concerns theoretical issues for the Actor model.
In computing, an address space defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, a memory cell or other logical or physical entity.
Advanced Micro Devices, Inc. (AMD) is an American multinational semiconductor company based in Santa Clara, California, that develops computer processors and related technologies for business and consumer markets.
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems.
In computing, algorithmic skeletons, or parallelism patterns, are a high-level parallel programming model for parallel and distributed computing.
AltiVec is a single-precision floating point and integer SIMD instruction set designed and owned by Apple, IBM, and Freescale Semiconductor (formerly Motorola's Semiconductor Products Sector) — the AIM alliance.
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved.
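A minimal Python sketch of the formula (the function name is illustrative): if a fraction ''p'' of a task can be parallelized across ''N'' processors, the speedup is 1 / ((1 − p) + p/N), so the serial fraction caps the achievable gain.

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Theoretical speedup under Amdahl's law: S = 1 / ((1 - p) + p / N)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# With 95% of the work parallelizable, the speedup approaches but never
# exceeds 1 / 0.05 = 20x, no matter how many processors are added.
```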
The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage.
Apple Inc. is an American multinational technology company headquartered in Cupertino, California, that designs, develops, and sells consumer electronics, computer software, and online services.
Checkpointing is a technique to add fault tolerance into computing systems.
In computer programming, an application programming interface (API) is a set of subroutine definitions, protocols, and tools for building software.
An application-specific integrated circuit (ASIC) is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use.
In the field of computer science, an atomic commit is an operation that applies a set of distinct changes as a single operation.
In computing, bandwidth is the maximum rate of data transfer across a given path.
The Barnes–Hut simulation (named after Josh Barnes and Piet Hut) is an approximation algorithm for performing an ''n''-body simulation.
In parallel computing, a barrier is a type of synchronization method.
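As an illustration using Python's standard `threading.Barrier`: every thread must reach the barrier before any thread is released past it, so all "before" work completes before any "after" work begins.

```python
import threading

results = []

def worker(barrier, name):
    results.append(name + " before")
    barrier.wait()                     # block until all parties have arrived
    results.append(name + " after")

barrier = threading.Barrier(3)
threads = [threading.Thread(target=worker, args=(barrier, "t%d" % i))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every "before" entry precedes every "after" entry, in any interleaving.
```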
A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG).
A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them.
The Berkeley Open Infrastructure for Network Computing (BOINC, pronounced to rhyme with "oink"), an open-source middleware system, supports volunteer and grid computing.
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data.
Bit-level parallelism is a form of parallel computing based on increasing processor word size.
Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the PFLOPS (petaFLOPS) range, with low power consumption.
Branch and bound (BB, B&B, or BnB) is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization.
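A compact illustration of the paradigm applied to the 0/1 knapsack problem (a sketch, not an optimized solver): each node's value is bounded optimistically by a fractional relaxation, and any subtree whose bound cannot beat the best solution found so far is pruned.

```python
def knapsack_bb(values, weights, capacity):
    """Best total value for a 0/1 knapsack, via branch and bound."""
    # Sort items by value density so the fractional bound is tight.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, value, room):
        # Optimistic bound: fill the remaining room fractionally.
        b = value
        while i < len(v) and w[i] <= room:
            room -= w[i]; b += v[i]; i += 1
        if i < len(v):
            b += v[i] * room / w[i]
        return b

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == len(v) or bound(i, value, room) <= best:
            return                                   # prune this subtree
        if w[i] <= room:
            branch(i + 1, value + v[i], room - w[i])  # take item i
        branch(i + 1, value, room)                    # skip item i

    branch(0, 0, capacity)
    return best
```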
The Brook programming language and its implementation BrookGPU were early and influential attempts to enable general-purpose computing on graphics processing units.
In cryptography, a brute-force attack consists of an attacker trying many passwords or passphrases with the hope of eventually guessing correctly.
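A toy sketch of the idea (function and variable names are illustrative): exhaustively enumerate every candidate until one matches, here every numeric PIN of a given length checked against a known hash.

```python
from itertools import product
import hashlib

def brute_force_pin(target_hash, length=4):
    """Try every numeric PIN of the given length until one hashes to the target."""
    for digits in product("0123456789", repeat=length):
        guess = "".join(digits)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None
```

The cost grows exponentially with key length, which is why real systems rely on long keys and slow, salted password hashes.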
The Burroughs Corporation was a major American manufacturer of business equipment.
In computer architecture, a bus (a contraction of the Latin omnibus) is a communication system that transfers data between components inside a computer, or between computers.
Bus contention, in computer design, is an undesirable state of the bus in which more than one device on the bus attempts to place values on the bus at the same time.
Bus snooping or bus sniffing is a scheme by which a coherency controller (snooper) in a cache monitors or snoops the bus transactions; its goal is to maintain cache coherency in distributed shared memory systems.
C (as in the letter ''c'') is a general-purpose, imperative computer programming language, supporting structured programming, lexical variable scope and recursion, while a static type system prevents many unintended operations.
C to HDL tools convert C or C-like computer program code into a hardware description language (HDL) such as VHDL or Verilog.
The C.mmp was an early MIMD multiprocessor system developed at Carnegie Mellon University by William Wulf (1971).
In computer architecture, cache coherence is the uniformity of shared resource data that ends up stored in multiple local caches.
The calculus of communicating systems (CCS) is a process calculus introduced by Robin Milner around 1980 and the title of a book describing the calculus.
Capacitance is the ratio of the change in an electric charge in a system to the corresponding change in its electric potential.
Carnegie Mellon University (commonly known as CMU) is a private research university in Pittsburgh, Pennsylvania.
In computer processors the carry flag (usually indicated as the C flag) is a single bit in a system status (flag) register used to indicate when an arithmetic carry or borrow has been generated out of the most significant ALU bit position.
Cell is a multi-core microprocessor microarchitecture that combines a general-purpose Power Architecture core of modest performance with streamlined coprocessing elements which greatly accelerate multimedia and vector processing applications, as well as many other forms of dedicated computation.
A central processing unit (CPU) is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions.
Charles Babbage (26 December 1791 – 18 October 1871) was an English polymath.
In digital circuit theory, combinational logic (sometimes also referred to as time-independent logic) is a type of digital logic which is implemented by Boolean circuits, where the output is a pure function of the present input only.
Commercial off-the-shelf or commercially available off-the-shelf (COTS) products satisfy the needs of the purchasing organization, without the need to commission custom-made, or bespoke, solutions.
In computer science, communicating sequential processes (CSP) is a formal language for describing patterns of interaction in concurrent systems.
A compiler is computer software that transforms computer code written in one programming language (the source language) into another programming language (the target language).
In computing, a compute kernel is a routine compiled for high throughput accelerators (such as GPUs, DSPs or FPGAs), separate from (but used by) a main program.
In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems.
A computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system.
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data.
Computer graphics are pictures and films created using computers.
The computer industry, also known as the information technology (IT) industry, is the range of businesses involved in designing computer hardware and computer networking infrastructures, developing computer software, manufacturing computer components, and providing information technology services.
In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time.
A computer network, or data network, is a digital telecommunications network which allows nodes to share resources.
Computer performance is the amount of work accomplished by a computer system.
Computerworld is a publication website and digital magazine for information technology (IT) and business technology professionals.
Computing is any goal-oriented activity requiring, benefiting from, or creating computers.
In computer science, concurrency refers to the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome.
Concurrent computing is a form of computing in which several computations are executed during overlapping time periods—concurrently—instead of sequentially (one completing before the next starts).
In computer science, consistency models are used in distributed systems such as distributed shared memory systems or distributed data stores (such as filesystems, databases, optimistic replication systems or Web caching).
A Content Addressable Parallel Processor (CAPP) is a type of parallel processor which uses content-addressing memory (CAM) principles.
The control unit (CU) is a component of a computer's central processing unit (CPU) that directs the operation of the processor.
The Cooley–Tukey algorithm, named after J. W. Cooley and John Tukey, is the most common fast Fourier transform (FFT) algorithm.
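The divide-and-conquer recursion can be sketched in a few lines of Python (radix-2 only; the input length must be a power of two): the DFT of the even- and odd-indexed samples are combined with "butterfly" operations.

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])    # divide: DFT of even-indexed samples
    odd = fft(x[1::2])     # divide: DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle             # combine (butterfly)
        out[k + n // 2] = even[k] - twiddle
    return out
```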
In computing, a core dump, crash dump, memory dump, or system dump consists of the recorded state of the working memory of a computer program at a specific time, generally when the program has crashed or otherwise terminated abnormally.
A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory.
In computer hardware, a CPU socket or CPU slot comprises one or more mechanical components providing mechanical and electrical connections between a microprocessor and a printed circuit board (PCB).
In computer science, a computer is CPU-bound (or compute-bound) when the time for it to complete a task is determined principally by the speed of the central processor: processor utilization is high, perhaps at 100% usage for many seconds or minutes.
Cray Inc. is an American supercomputer manufacturer headquartered in Seattle, Washington.
The Cray-1 was a supercomputer designed, manufactured and marketed by Cray Research.
The critical path method (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities.
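A sketch of the forward pass (the function name is illustrative, and slack computation is omitted): each task's earliest finish is its duration plus the latest finish among its prerequisites, and the project length is the maximum over all tasks.

```python
def critical_path(durations, deps):
    """Earliest finish time per task and total project length for a task DAG.

    durations: {task: time}; deps: {task: [prerequisite tasks]}.
    """
    finish = {}

    def earliest_finish(task):
        if task not in finish:
            start = max((earliest_finish(d) for d in deps.get(task, [])),
                        default=0)
            finish[task] = start + durations[task]
        return finish[task]

    for t in durations:
        earliest_finish(t)
    return finish, max(finish.values())
```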
In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior, so the parts of the program where the shared resource is accessed are protected; this protected section is the critical section.
In electronics, a crossbar switch (cross-point switch, matrix switch) is a collection of switches arranged in a matrix configuration.
CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.
Daniel Leonid Slotnick (1931–1985) was a mathematician and computer architect.
A data dependency in computer science is a situation in which a program statement (instruction) refers to the data of a preceding statement.
Data parallelism is parallelization across multiple processors in parallel computing environments.
A database is an organized collection of data, stored and accessed electronically.
Dataflow architecture is a computer architecture that directly contrasts the traditional von Neumann architecture or control flow architecture.
David Andrew Patterson (born November 16, 1947) is an American computer pioneer and academic who has held the position of Professor of Computer Science at the University of California, Berkeley since 1976.
In concurrent computing, a deadlock is a state in which each member of a group is waiting for some other member to take action, such as sending a message or more commonly releasing a lock.
A desktop computer is a personal computer designed for regular use at a single location on or near a desk or table due to its size and power requirements.
In computer programming, a directive or pragma (from "pragmatic") is a language construct that specifies how a compiler (or other translator) should process its input.
Distributed computing is a field of computer science that studies distributed systems.
In computer science, distributed memory refers to a multiprocessor computer system in which each processor has its own private memory.
In computer science, distributed shared memory (DSM) is a form of memory architecture where physically separated memories can be addressed as one logically shared address space.
A domain-specific language (DSL) is a computer language specialized to a particular application domain.
Donald Becker is an American computer programmer who wrote Ethernet drivers for the Linux operating system.
Dynamic programming is both a mathematical optimization method and a computer programming method.
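A minimal illustration of the computing side of the idea: caching overlapping subproblems turns the exponential naive Fibonacci recursion into a linear-time one.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each subproblem fib(k) is solved once and cached, so the
    overlapping recursive calls cost O(n) instead of O(2^n)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```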
In parallel computing, an embarrassingly parallel workload or problem (also called perfectly parallel or pleasingly parallel) is one where little or no effort is needed to separate the problem into a number of parallel tasks.
Ernest Ropiequet "Jack" Hilgard (July 25, 1904 – October 22, 2001) was an American psychologist and professor at Stanford University.
In information theory and coding theory with applications in computer science and telecommunication, error detection and correction or error control are techniques that enable reliable delivery of digital data over unreliable communication channels.
Ethernet is a family of computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN).
In computer engineering, an execution unit (also called a functional unit) is a part of the central processing unit (CPU) that performs the operations and calculations as instructed by the computer program.
In computer programming, explicit parallelism is the representation of concurrent computations by means of primitives in the form of special-purpose directives or function calls.
Fault-tolerant computer systems are systems designed around the concepts of fault tolerance.
In computer science, a fiber is a particularly lightweight thread of execution.
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing hence "field-programmable".
The finite element method (FEM) is a numerical method for solving problems of engineering and mathematical physics.
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation.
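In code, a state machine is often just a transition table; a sketch of the classic coin-operated turnstile with two states and two events:

```python
# Transition table for a coin-operated turnstile:
# state -> {event -> next state}
TRANSITIONS = {
    "locked":   {"coin": "unlocked", "push": "locked"},
    "unlocked": {"coin": "unlocked", "push": "locked"},
}

def run_fsm(start, events):
    """Feed a sequence of events through the machine; return the final state."""
    state = start
    for event in events:
        state = TRANSITIONS[state][event]
    return state
```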
In computing, floating-point arithmetic is arithmetic using formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision.
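A two-line illustration of the approximation in Python: neither 0.1 nor 0.2 is exactly representable in binary floating point, so equality tests on computed results should use a tolerance.

```python
a = 0.1 + 0.2
print(a == 0.3)              # False: both operands carry tiny rounding errors
print(abs(a - 0.3) < 1e-9)   # True: compare with a tolerance instead
```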
Flynn's taxonomy is a classification of computer architectures, proposed by Michael J. Flynn in 1966.
Folding@home (FAH or F@h) is a distributed computing project for disease research that simulates protein folding, computational drug design, and other types of molecular dynamics.
Fortran (formerly FORTRAN, derived from Formula Translation) is a general-purpose, compiled imperative programming language that is especially suited to numeric computation and scientific computing.
Freescale Semiconductor, Inc. was an American multinational corporation headquartered in Austin, Texas, with design, research and development, manufacturing and sales operations in more than 75 locations in 19 countries.
In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question.
In computer science, future, promise, delay, and deferred refer to constructs used for synchronizing program execution in some concurrent programming languages.
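A sketch with Python's standard `concurrent.futures`: `submit` returns immediately with a future (a placeholder for the eventual value), and `result()` synchronizes on it.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(slow_square, 7)   # returns at once with a placeholder
    # ... other work can proceed concurrently here ...
    result = future.result()               # blocks until the value is ready
```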
George Ivanovich Gurdjieff (31 March 1866 / 14 January 1872 / 28 November 1877 – 29 October 1949), commonly known as G. I. Gurdjieff, was a mystic, philosopher, spiritual teacher, and composer of Armenian and Greek descent, born in Alexandrapol (now Gyumri), Armenia.
In computer networking, Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second (1,000,000,000 bits per second), as defined by the IEEE 802.3-2008 standard.
This is a glossary of terms relating to computer hardware – physical computer hardware, architectural issues, and peripherals.
In computer science, graph traversal (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph.
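A breadth-first sketch using an explicit queue and a visited set; depth-first traversal differs only in using a stack (or recursion) in place of the queue.

```python
from collections import deque

def bfs_order(graph, start):
    """Visit every vertex reachable from start, breadth first.

    graph: {vertex: [neighbors]} adjacency mapping.
    """
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph.get(v, []):
            if w not in seen:       # mark on enqueue so each vertex is visited once
                seen.add(w)
                queue.append(w)
    return order
```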
A graphical model or probabilistic graphical model (PGM) or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables.
A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.
In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring over a field.
Grid computing is the collection of computer resources from multiple locations to reach a common goal.
In computer architecture, Gustafson's law (or Gustafson–Barsis's law) gives the theoretical speedup in latency of the execution of a task at fixed execution time that can be expected of a system whose resources are improved.
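A minimal sketch of the formula (the function name is illustrative): with a serial fraction ''s'' of the scaled workload on ''N'' processors, the scaled speedup is S = N + (1 − N)s, so unlike Amdahl's fixed-workload bound it grows with N.

```python
def gustafson_speedup(serial_fraction, n_processors):
    """Scaled speedup under Gustafson's law: S = N + (1 - N) * s."""
    return n_processors + (1 - n_processors) * serial_fraction
```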
Handel-C is a high-level programming language which targets low-level hardware, most commonly used in the programming of FPGAs.
In computer engineering, a hardware description language (HDL) is a specialized computer language used to describe the structure and behavior of electronic circuits, and most commonly, digital logic circuits.
Haskell is a standardized, general-purpose compiled purely functional programming language, with non-strict semantics and strong static typing.
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (i.e. hidden) states.
The Holy Grail is a vessel that serves as an important motif in Arthurian literature.
Honeywell International Inc. is an American multinational conglomerate company that produces a variety of commercial and consumer products, engineering services and aerospace systems for a wide variety of customers, from private consumers to major corporations and governments.
Hyper-threading (officially called Hyper-Threading Technology or HT Technology, and abbreviated as HTT or HT) is Intel's proprietary simultaneous multithreading (SMT) implementation used to improve parallelization of computations (doing multiple tasks at once) performed on x86 microprocessors.
In graph theory, the hypercube graph is the graph formed from the vertices and edges of an ''n''-dimensional hypercube.
HyperTransport (HT), formerly known as Lightning Data Transport (LDT), is a technology for interconnection of computer processors.
The International Business Machines Corporation (IBM) is an American multinational technology company headquartered in Armonk, New York, United States, with operations in over 170 countries.
The ILLIAC IV was the first massively parallel computer.
In computer science, implicit parallelism is a characteristic of a programming language that allows a compiler or interpreter to automatically exploit the parallelism inherent to the computations expressed by some of the language's constructs.
Impulse C is a subset of the C programming language combined with a C-compatible function library supporting parallel programming, in particular for programming of applications targeting FPGA devices.
InfiniBand (abbreviated IB) is a computer-networking communications standard used in high-performance computing that features very high throughput and very low latency.
Instruction pipelining is a technique for implementing instruction-level parallelism within a single processor.
An instruction set architecture (ISA) is an abstract model of a computer.
Instruction-level parallelism (ILP) is a measure of how many of the instructions in a computer program can be executed simultaneously.
In computer architecture, instructions per cycle (IPC) is one aspect of a processor's performance: the average number of instructions executed for each clock cycle.
An integer (from the Latin ''integer'' meaning "whole", literally "untouched", from ''in'' ("not") plus ''tangere'' ("to touch")) is a number that can be written without a fractional component.
Intel Corporation (stylized as intel) is an American multinational corporation and technology company headquartered in Santa Clara, California, in the Silicon Valley.
In computer science, inter-process communication or interprocess communication (IPC) refers specifically to the mechanisms an operating system provides to allow the processes to manage shared data.
The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide.
The Internet protocol suite is the conceptual model and set of communications protocols used on the Internet and similar computer networks.
In theoretical computer science, the π-calculus (or pi-calculus) is a process calculus.
John Cocke (May 30, 1925 – July 16, 2002) was an American computer scientist recognized for his large contribution to computer architecture and optimizing compiler design.
John Leroy Hennessy (born September 22, 1952) is an American computer scientist, academician, businessman and Chairman of Alphabet Inc.
Latency is the time interval between a stimulation and a response, or, from a more general point of view, a time delay between the cause and the effect of some physical change in the system being observed.
Lattice Boltzmann methods (LBM) (or thermal Lattice Boltzmann methods (TLBM)) is a class of computational fluid dynamics (CFD) methods for fluid simulation.
Lawrence Livermore National Laboratory (LLNL) is an American federal research facility in Livermore, California, United States, founded by the University of California, Berkeley in 1952.
Leslie B. Lamport (born February 7, 1941) is an American computer scientist.
In computer science, a library is a collection of non-volatile resources used by computer programs, often for software development.
Linear algebra is the branch of mathematics concerning linear equations, linear functions, and their representations through matrices and vector spaces.
In concurrent programming, an operation (or set of operations) is atomic, linearizable, indivisible or uninterruptible if it appears to the rest of the system to occur at once without being interrupted.
This article lists concurrent and parallel programming languages, categorizing them by a defining paradigm.
This is a selected list of international academic conferences in the fields of distributed computing, parallel computing, and concurrent computing.
This is a list of distributed computing and grid computing projects.
This is a list of important publications in concurrent, parallel, and distributed computing, organized by field.
In computing, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives.
A local area network (LAN) is a computer network that interconnects computers within a limited area such as a residence, school, laboratory, university campus or office building.
In computer science, a lock or mutex (from mutual exclusion) is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution.
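A sketch of a lock protecting a shared counter in Python; the `with lock:` block is the mutually exclusive section, so concurrent increments never lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread may execute this section at a time
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40000; without the lock, interleaved read-modify-write
# sequences could lose increments.
```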
Lockstep systems are fault-tolerant computer systems that run the same set of operations at the same time in parallel.
Luigi Federico Menabrea (4 September 1809 – 24 May 1896), later made 1st Count Menabrea and 1st Marquess of Valdora, was an Italian general, statesman and mathematician who served as the Prime Minister of Italy from 1867 to 1869.
Manycore processors are specialist multi-core processors designed for a high degree of parallel processing, containing a large number of simpler, independent processor cores (e.g. 10s, 100s, or 1,000s).
Marvin Lee Minsky (August 9, 1927 – January 24, 2016) was an American cognitive scientist concerned largely with research of artificial intelligence (AI), co-founder of the Massachusetts Institute of Technology's AI laboratory, and author of several texts concerning AI and philosophy.
In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel (simultaneously).
Mathematical finance, also known as quantitative finance, is a field of applied mathematics, concerned with mathematical modeling of financial markets.
In mathematics, a matrix (plural: matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns.
Mean time between failures (MTBF) is the predicted elapsed time between inherent failures of a mechanical or electronic system, during normal system operation.
In computing, memory latency is the time (the latency) between initiating a request for a byte or word in memory until it is retrieved by a processor.
In computer science, memory virtualization decouples volatile random access memory (RAM) resources from individual systems in the data center, and then aggregates those resources into a virtualized memory pool available to any computer in the cluster.
A mesh network is a local network topology in which the infrastructure nodes (i.e. bridges, switches and other infrastructure devices) connect directly, dynamically and non-hierarchically to as many other nodes as possible and cooperate with one another to efficiently route data from/to clients.
In computer science, message passing is a technique for invoking behavior (i.e., running a program) on a computer.
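A sketch of the pattern between two threads communicating through a queue (a shared-memory stand-in for the general technique; real message-passing systems may send over a network): the producer sends messages, and the consumer blocks on receive.

```python
import queue
import threading

mailbox = queue.Queue()
received = []

def producer():
    for i in range(3):
        mailbox.put("msg-%d" % i)   # send a message
    mailbox.put(None)               # sentinel: no more messages

def consumer():
    while True:
        msg = mailbox.get()         # receive (blocks until a message arrives)
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```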
Message Passing Interface (MPI) is a standardized and portable message-passing standard designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures.
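MPI itself is a C/Fortran library standard, but the message-passing style it embodies can be sketched in Python: below, a worker thread and the main thread communicate only by sending messages through queues, never by touching shared variables directly. This is an illustrative analogy, not the MPI API; all names here are hypothetical.

```python
import queue
import threading

def worker(inbox: "queue.Queue", outbox: "queue.Queue") -> None:
    """Receive numbers as messages; send back their squares."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: no more messages
            break
        outbox.put(msg * msg)

def run_exchange(values):
    """Exchange messages with a worker thread; state is communicated,
    not shared -- the essence of the message-passing model."""
    inbox, outbox = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(inbox, outbox))
    t.start()
    for v in values:
        inbox.put(v)             # "send"
    inbox.put(None)
    t.join()
    return [outbox.get() for _ in values]   # "receive"
```

In real MPI these sends and receives would be MPI_Send/MPI_Recv calls between separate processes, possibly on different machines.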
Meteorology is a branch of the atmospheric sciences which includes atmospheric chemistry and atmospheric physics, with a major focus on weather forecasting.
Michael S. Gazzaniga (born December 12, 1939) is a professor of psychology at the University of California, Santa Barbara, where he heads the new SAGE Center for the Study of the Mind.
Michael J. Flynn (born May 20, 1934) is an American professor emeritus at Stanford University.
Michio Kaku (born 24 January 1947) is an American theoretical physicist, futurist, and popularizer of science.
Middleware is computer software that provides services to software applications beyond those available from the operating system.
MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology formed by the 2003 merger of the Laboratory for Computer Science and the Artificial Intelligence Laboratory.
Mitrionics is a Swedish company manufacturing softcore reconfigurable processors.
Molecular dynamics (MD) is a computer simulation method for studying the physical movements of atoms and molecules.
Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.
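The idea of repeated random sampling can be shown with the classic example of estimating π: sample points uniformly in the unit square and count the fraction that fall inside the quarter circle. A minimal sketch (function name and seed are illustrative):

```python
import random

def estimate_pi(samples: int, seed: int = 0) -> float:
    """Estimate pi by Monte Carlo sampling: the fraction of random
    points in the unit square that land inside the quarter circle of
    radius 1 approaches pi/4 as the sample count grows."""
    rng = random.Random(seed)    # fixed seed for reproducibility
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples
```

Because each sample is independent, this is also an embarrassingly parallel workload: the samples can be split across processors and the counts combined at the end.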
Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years.
A multi-core processor is a single computing component with two or more independent processing units called cores, which read and execute program instructions.
Multics (Multiplexed Information and Computing Service) is an influential early time-sharing operating system, based around the concept of a single-level memory.
In telecommunications and computer networks, multiplexing (sometimes contracted to muxing) is a method by which multiple analog or digital signals are combined into one signal over a shared medium.
In computer science, mutual exclusion is a property of concurrency control, which is instituted for the purpose of preventing race conditions; it is the requirement that one thread of execution never enter its critical section at the same time that another concurrent thread of execution enters its own critical section.
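A standard way to enforce mutual exclusion is a lock around the critical section, as in this Python sketch: without the lock, concurrent read-modify-write increments could interleave and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n: int) -> None:
    """Increment the shared counter n times; the lock makes each
    read-modify-write a critical section only one thread enters at a time."""
    global counter
    for _ in range(n):
        with lock:               # acquire on entry, release on exit
            counter += 1

def run(threads: int = 4, per_thread: int = 10_000) -> int:
    """Run several incrementing threads and return the final count."""
    global counter
    counter = 0
    ts = [threading.Thread(target=add_many, args=(per_thread,))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter
```

With the lock in place the final count is exactly threads × per_thread, demonstrating that no two threads were in the critical section simultaneously.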
Myrinet, ANSI/VITA 26-1998, is a high-speed local area networking system designed by the company Myricom to be used as an interconnect between multiple machines to form computer clusters.
In physics, the n-body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally.
Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network.
In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread; for some operations, these algorithms provide a useful alternative to traditional blocking implementations.
Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor.
Nvidia Corporation (most commonly referred to as Nvidia, stylized as NVIDIA, or (due to their logo) nVIDIA) is an American technology company incorporated in Delaware and based in Santa Clara, California.
Nvidia Tesla is Nvidia's brand name for their products targeting stream processing or general-purpose GPU.
In computer science, an object can be a variable, a data structure, a function, or a method, and as such, is a value in memory referenced by an identifier.
OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators.
OpenHMPP (HMPP for Hybrid Multicore Parallel Programming) is a programming standard for heterogeneous computing.
OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most platforms, instruction set architectures and operating systems, including Solaris, AIX, HP-UX, Linux, macOS, and Windows.
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.
In computer engineering, out-of-order execution (or more formally dynamic execution) is a paradigm used in most high-performance central processing units to make use of instruction cycles that would otherwise be wasted.
In computer science, a parallel algorithm, as opposed to a traditional serial algorithm, is an algorithm which can be executed a piece at a time on many different processing devices, and then combined together again at the end to get the correct result.
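The split/compute/combine shape of many parallel algorithms can be sketched with a parallel sum: partition the input, sum each chunk on its own worker, then combine the partial results. (Python's thread pool is used here only to show the structure; a CPU-bound workload would use processes or another runtime.)

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers: int = 4) -> int:
    """Split the input into chunks, sum each chunk on its own worker,
    then combine the partial results at the end."""
    if not data:
        return 0
    chunk = -(-len(data) // workers)          # ceiling division
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, parts))  # compute step, in parallel
    return sum(partials)                       # combine step
```

The combine step works here because addition is associative; algorithms whose combining operator lacks this property are harder to parallelize.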
In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs.
Parallel slowdown is a phenomenon in parallel computing where parallelization of a parallel algorithm beyond a certain point causes the program to run slower (take more time to run to completion).
PeakStream was a parallel processing software company located in Redwood Shores, California founded by Matthew Papakipos and Asher Waldfogel in April 2005 and backed by Sequoia Capital and Kleiner Perkins.
Pentium 4 is a brand by Intel for an entire series of single-core CPUs for desktops, laptops and entry-level servers.
A Petri net, also known as a place/transition (PT) net, is one of several mathematical modeling languages for the description of distributed systems.
The PlayStation 3 (PS3) is a home video game console developed by Sony Computer Entertainment.
POSIX Threads, usually referred to as pthreads, is an execution model that exists independently from a language, as well as a parallel execution model.
In computing, a process is an instance of a computer program that is being executed.
In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems.
Propagation delay is a technical term that can have a different meaning depending on the context.
Protein folding is the physical process by which a protein chain acquires its native 3-dimensional structure, a conformation that is usually biologically functional, in an expeditious and reproducible manner.
A race condition or race hazard is the behavior of an electronics, software, or other system where the output is dependent on the sequence or timing of other uncontrollable events.
RapidMind Inc. was a privately held company founded and headquartered in Waterloo, Ontario, Canada, acquired by Intel in 2009.
Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high speed computing fabrics like field-programmable gate arrays (FPGAs).
A reduced instruction set computer, or RISC (pronounced 'risk'), is one whose instruction set architecture (ISA) allows it to have fewer cycles per instruction (CPI) than a complex instruction set computer (CISC).
In engineering, redundancy is the duplication of critical components or functions of a system with the intention of increasing reliability of the system, usually in the form of a backup or fail-safe, or to improve actual system performance, such as in the case of GNSS receivers, or multi-threaded computer processing.
In computer architecture, register renaming is a technique that eliminates the false data dependencies arising from the reuse of architectural registers by successive instructions that do not have any real data dependencies between them.
A regular grid is a tessellation of n-dimensional Euclidean space by congruent parallelotopes (e.g. bricks).
In computer programming, resource management refers to techniques for managing resources (components with limited availability).
MDGRAPE-3 is an ultra-high performance petascale supercomputer system developed by the RIKEN research institute in Japan.
A ring network is a network topology in which each node connects to exactly two other nodes, forming a single continuous pathway for signals through each node - a ring.
Robert "Rob" C. Pike (born 1956) is a Canadian programmer and author.
Robert Evan Ornstein (born 1942) is an American psychologist and author.
Routing is the process of selecting a path for traffic in a network, or between or across multiple networks.
Scoreboarding is a centralized method, used in the CDC 6600 computer, for dynamically scheduling a pipeline so that the instructions can execute out of order when there are no conflicts and the hardware is available.
In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multitasking operating system.
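A counting semaphore generalizes a lock by admitting up to N holders at once. This sketch caps concurrent access to a resource at two threads and records the peak concurrency observed (the bookkeeping names are illustrative):

```python
import threading

sem = threading.Semaphore(2)     # at most 2 threads past this gate at once
active = 0
peak = 0
state_lock = threading.Lock()    # protects the bookkeeping counters

def use_resource() -> None:
    """Enter the semaphore-guarded region and track peak concurrency."""
    global active, peak
    with sem:                    # classic wait()/signal() pair
        with state_lock:
            active += 1
            peak = max(peak, active)
        # ... work with the shared resource here ...
        with state_lock:
            active -= 1

def run(threads: int = 8) -> int:
    """Launch several threads contending for the resource; return the
    highest number that were ever inside the region simultaneously."""
    ts = [threading.Thread(target=use_resource) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return peak
```

However many threads contend, the semaphore guarantees the peak never exceeds its initial count of 2.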
In bioinformatics, sequence analysis is the process of subjecting a DNA, RNA or peptide sequence to any of a wide range of analytical methods to understand its features, function, structure, or evolution.
SequenceL is a general purpose functional programming language and auto-parallelizing compiler and tool set, whose primary design objectives are performance on multi-core processor hardware, ease of programming, platform portability/optimization, and code clarity and readability.
Sequential consistency is one of the consistency models used in the domain of concurrent computing (e.g. in distributed shared memory, distributed transactions, etc.). Leslie Lamport first defined it as the property that "the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program." In other words, the execution order within a single processor (or thread) matches program order, while the execution order between processors (or threads) is undefined.
A serial computer is a computer typified by bit-serial architecture — i.e., internally operating on one bit or digit for each clock cycle.
In concurrency control of databases, transaction processing (transaction management), and various transactional applications (e.g., transactional memory and software transactional memory), both centralized and distributed, a transaction schedule is serializable if its outcome (e.g., the resulting database state) is equal to the outcome of its transactions executed serially, i.e. without overlapping in time.
In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients".
SETI@home ("SETI at home") is an Internet-based public volunteer computing project employing the BOINC software platform created by the Berkeley SETI Research Center and is hosted by the Space Sciences Laboratory, at the University of California, Berkeley.
Seymour Aubrey Papert (February 29, 1928 – July 31, 2016) was a South African-born American mathematician, computer scientist, and educator, who spent most of his career teaching and researching at MIT.
In computer science, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies.
Signal processing concerns the analysis, synthesis, and modification of signals, which are broadly defined as functions conveying "information about the behavior or attributes of some phenomenon", such as sound, images, and biological measurements.
SISAL ("Streams and Iteration in a Single Assignment Language") is a general-purpose single assignment functional programming language with strict semantics, implicit parallelism, and efficient array handling.
The Society of Mind is both the title of a 1986 book and the name of a theory of natural intelligence as written and developed by Marvin Minsky.
Computer software, or simply software, is a generic term for a collection of data or computer instructions that tell the computer how to work, in contrast to the physical hardware from which the system is built and which actually performs the work.
A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.
In multiprocessor computer systems, software lockout is the issue of performance degradation due to the idle wait times spent by the CPUs in kernel-level critical sections.
In computer science, software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing.
Sony Corporation is a Japanese multinational conglomerate corporation headquartered in Kōnan, Minato, Tokyo.
In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order.
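Merge sort is a standard example of a sorting algorithm, and its divide-and-conquer structure (sort the two halves independently, then merge) is also a natural fit for parallel execution, since the recursive calls are independent. A minimal sequential sketch:

```python
def merge_sort(items):
    """Divide-and-conquer sort: split the list, sort each half
    recursively, then merge the two sorted halves."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail
```

A parallel variant would hand the two recursive calls to separate workers and keep only the merge step sequential.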
In computer architecture, speedup is a number that measures the relative performance of two systems processing the same problem.
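The definitions involved are simple ratios: speedup S = T_old / T_new for the same problem, and, in the parallel setting, efficiency is speedup divided by the number of processors. A small sketch:

```python
def speedup(t_old: float, t_new: float) -> float:
    """Speedup S = T_old / T_new: how many times faster the new
    system (or parallel version) solves the same problem."""
    return t_old / t_new

def efficiency(t_serial: float, t_parallel: float, processors: int) -> float:
    """Parallel efficiency: achieved speedup per processor.
    1.0 means ideal (linear) scaling; values below 1.0 reflect
    overheads such as communication and synchronization."""
    return speedup(t_serial, t_parallel) / processors
```

For example, a run that drops from 10 s to 2.5 s on 8 processors has a speedup of 4 but an efficiency of only 0.5.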
A Star network is one of the most common computer network topologies.
In computing, Streaming SIMD Extensions (SSE) is an SIMD instruction set extension to the x86 architecture, designed by Intel and introduced in 1999 in their Pentium III series of processors shortly after the appearance of AMD's 3DNow!.
A supercomputer is a computer with a high level of performance compared to a general-purpose computer.
A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor.
Symmetric multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to a single, shared main memory, have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes.
In computer science, synchronization refers to one of two distinct but related concepts: synchronization of processes, and synchronization of data.
A synchronous programming language is a computer programming language optimized for programming reactive systems.
System C Healthcare Limited is a British supplier of health information technology solutions and services, based in Maidstone, Kent, specialising in the health and social care sectors.
SystemC is a set of C++ classes and macros which provide an event-driven simulation interface (see also discrete event simulation).
In parallel computer architectures, a systolic array is a homogeneous network of tightly coupled data processing units (DPUs) called cells or nodes.
Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments.
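The contrast with data parallelism can be shown in a few lines: task parallelism runs different functions concurrently (here, two hypothetical analyses of the same text), whereas data parallelism would run one function over different pieces of the data.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text: str) -> int:
    """One task: count words."""
    return len(text.split())

def char_count(text: str) -> int:
    """A different task: count characters."""
    return len(text)

def analyze(text: str):
    """Task parallelism: submit two *different* tasks to run
    concurrently, then collect both results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        words = pool.submit(word_count, text)
        chars = pool.submit(char_count, text)
        return words.result(), chars.result()
```

Because the two tasks are independent, they can run on separate processors with no coordination beyond collecting the results.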
Tejas was a code name for Intel's microprocessor which was to be a successor to the latest Pentium 4 with the Prescott core.
Temporal logic of actions (TLA) is a logic developed by Leslie Lamport, which combines temporal logic with a logic of actions.
Temporal multithreading is one of the two main forms of multithreading that can be implemented on computer processor hardware, the other being simultaneous multithreading.
The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind is a popular science book by the futurist and physicist Michio Kaku.
Thomas Sterling is Professor of Computer Science at Indiana University, a Faculty Associate at California Institute of Technology, and a Distinguished Visiting Scientist at Oak Ridge National Laboratory.
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.
In computing, time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multitasking.
Tomasulo’s algorithm is a computer architecture hardware algorithm for dynamic scheduling of instructions that allows out-of-order execution and enables more efficient use of multiple execution units.
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world.
In mathematics and computer science, trace theory aims to provide a concrete mathematical underpinning for the study of concurrent computation and process calculi.
A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power.
The transputer is a series of pioneering microprocessors from the 1980s, featuring integrated memory and serial communication links, intended for parallel computing.
In mathematics, and more specifically in graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one path.
Uniform memory access (UMA) is a shared memory architecture used in parallel computers.
The United States Air Force (USAF) is the aerial and space warfare service branch of the United States Armed Forces.
An unstructured (or irregular) grid is a tessellation of a part of the Euclidean plane or Euclidean space by simple shapes, such as triangles or tetrahedra, in an irregular pattern.
In mathematics, especially in order theory, an upper bound of a subset S of some partially ordered set (K, ≤) is an element of K which is greater than or equal to every element of S. The term lower bound is defined dually as an element of K which is less than or equal to every element of S. A set with an upper bound is said to be bounded from above by that bound, a set with a lower bound is said to be bounded from below by that bound.
In computer programming, a variable or scalar is a storage location (identified by a memory address) paired with an associated symbolic name (an identifier), which contains some known or unknown quantity of information referred to as a value.
In computing, a vector processor or array processor is a central processing unit (CPU) that implements an instruction set containing instructions that operate on one-dimensional arrays of data called vectors, compared to scalar processors, whose instructions operate on single data items.
Verilog, standardized as IEEE 1364, is a hardware description language (HDL) used to model electronic systems.
Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining hundreds of thousands of transistors or devices into a single chip.
VHDL (VHSIC Hardware Description Language) is a hardware description language used in electronic design automation to describe digital and mixed-signal systems such as field-programmable gate arrays and integrated circuits.
Voltage, electric potential difference, electric pressure or electric tension (formally denoted or, but more often simply as V or U, for instance in the context of Ohm's or Kirchhoff's circuit laws) is the difference in electric potential between two points.
In computing, a word is the natural unit of data used by a particular processor design.
x86-64 (also known as x64, x86_64, AMD64 and Intel 64) is the 64-bit version of the x86 instruction set.
Yale Nance Patt is an American professor of electrical and computer engineering at The University of Texas at Austin.
16-bit microcomputers are computers in which 16-bit microprocessors were the norm.
A group of four bits is also called a nibble and has 2⁴ = 16 possible values.
In computer architecture, 64-bit computing is the use of processors that have datapath widths, integer size, and memory address widths of 64 bits (eight octets).
8-bit is also a generation of microcomputers in which 8-bit microprocessors were the norm.
Computer Parallelism, Concurrent (programming), Concurrent event, Concurrent language, Concurrent process, History of parallel computing, Message-driven parallel programming, Multicomputer, Multiple processing elements, Parallel Computing, Parallel Programming, Parallel architecture, Parallel code, Parallel computation, Parallel computer, Parallel computer hardware, Parallel computers, Parallel execution units, Parallel hardware, Parallel language, Parallel machine, Parallel processing (computing), Parallel processing computer, Parallel processor, Parallel program, Parallel programming, Parallel programming language, Parallelisation, Parallelism (computing), Parallelization, Parallelized, Parellel computing.