Parallel computing

Parallel computing is a type of computation in which many calculations or the execution of processes are carried out concurrently.[1]

280 relations: Actor model theory, Address space, Advanced Micro Devices, Algorithm, Algorithmic skeleton, AltiVec, Amdahl's law, Analytical Engine, Apple Inc., Application checkpointing, Application programming interface, Application-specific integrated circuit, Atomic commit, Bandwidth (computing), Barnes–Hut simulation, Barrier (computer science), Bayesian network, Beowulf cluster, Berkeley Open Infrastructure for Network Computing, Bioinformatics, Bit-level parallelism, Blue Gene, Branch and bound, BrookGPU, Brute-force attack, Burroughs Corporation, Bus (computing), Bus contention, Bus snooping, C (programming language), C to HDL, C.mmp, Cache coherence, Calculus of communicating systems, Capacitance, Carnegie Mellon University, Carry flag, Cell (microprocessor), Central processing unit, Charles Babbage, Combinational logic, Commercial off-the-shelf, Communicating sequential processes, Compiler, Compute kernel, Computer architecture, Computer cluster, Computer data storage, Computer graphics, Computer industry, ..., Computer multitasking, Computer network, Computer performance, Computerworld, Computing, Concurrency (computer science), Concurrent computing, Consistency model, Content Addressable Parallel Processor, Control unit, Cooley–Tukey FFT algorithm, Core dump, CPU cache, CPU socket, CPU-bound, Cray, Cray-1, Critical path method, Critical section, Crossbar switch, CUDA, Daniel Slotnick, Data dependency, Data parallelism, Database, Dataflow architecture, David Patterson (computer scientist), Deadlock, Desktop computer, Directive (programming), Distributed computing, Distributed memory, Distributed shared memory, Domain-specific language, Donald Becker, Dynamic programming, Embarrassingly parallel, Ernest Hilgard, Error detection and correction, Ethernet, Execution unit, Explicit parallelism, Fault-tolerant computer system, Fiber (computer science), Field-programmable gate array, Finite element method, Finite-state machine, Floating-point arithmetic, 
Flynn's taxonomy, Folding@home, Fortran, Freescale Semiconductor, Frequency scaling, Futures and promises, George Gurdjieff, Gigabit Ethernet, Glossary of computer hardware terms, Graph traversal, Graphical model, Graphics processing unit, Gröbner basis, Grid computing, Gustafson's law, Handel-C, Hardware description language, Haskell (programming language), Hidden Markov model, Holy Grail, Honeywell, Hyper-threading, Hypercube graph, HyperTransport, IBM, ILLIAC IV, Implicit parallelism, Impulse C, InfiniBand, Instruction pipelining, Instruction set architecture, Instruction-level parallelism, Instructions per cycle, Integer, Intel, Inter-process communication, Internet, Internet protocol suite, Π-calculus, John Cocke, John L. Hennessy, Latency (engineering), Lattice Boltzmann methods, Lawrence Livermore National Laboratory, Leslie Lamport, Library (computing), Linear algebra, Linearizability, List of concurrent and parallel programming languages, List of distributed computing conferences, List of distributed computing projects, List of important publications in concurrent, parallel, and distributed computing, Load balancing (computing), Local area network, Lock (computer science), Lockstep (computing), Luigi Federico Menabrea, Manycore processor, Marvin Minsky, Massively parallel, Mathematical finance, Matrix (mathematics), Mean time between failures, Memory latency, Memory virtualization, Mesh networking, Message passing, Message Passing Interface, Meteorology, Michael Gazzaniga, Michael J. 
Flynn, Michio Kaku, Middleware, MIT Computer Science and Artificial Intelligence Laboratory, Mitrionics, Molecular dynamics, Monte Carlo method, Moore's law, Multi-core processor, Multics, Multiplexing, Mutual exclusion, Myrinet, N-body problem, Network topology, Non-blocking algorithm, Non-uniform memory access, Nvidia, Nvidia Tesla, Object (computer science), OpenCL, OpenHMPP, OpenMP, Operating system, Out-of-order execution, Parallel algorithm, Parallel programming model, Parallel slowdown, PeakStream, Pentium 4, Petri net, PlayStation 3, POSIX Threads, Process (computing), Process calculus, Propagation delay, Protein folding, Race condition, RapidMind, Reconfigurable computing, Reduced instruction set computer, Redundancy (engineering), Register renaming, Regular grid, Resource management (computing), RIKEN MDGRAPE-3, Ring network, Rob Pike, Robert E. Ornstein, Routing, Scoreboarding, Semaphore (programming), Sequence analysis, SequenceL, Sequential consistency, Serial computer, Serializability, Server (computing), SETI@home, Seymour Papert, Shared memory, Signal processing, SISAL, Society of Mind, Software, Software bug, Software lockout, Software transactional memory, Sony, Sorting algorithm, Speedup, Star network, Streaming SIMD Extensions, Supercomputer, Superscalar processor, Symmetric multiprocessing, Synchronization (computer science), Synchronous programming language, System C, SystemC, Systolic array, Task parallelism, Tejas and Jayhawk, Temporal logic of actions, Temporal multithreading, The Future of the Mind, Thomas Sterling (computing), Thread (computing), Time-sharing, Tomasulo algorithm, TOP500, Trace theory, Transistor, Transputer, Tree (graph theory), Uniform memory access, United States Air Force, Unstructured grid, Upper and lower bounds, Variable (computer science), Vector processor, Verilog, Very-large-scale integration, VHDL, Voltage, Word (computer architecture), X86-64, Yale Patt, 16-bit, 4-bit, 64-bit computing, 8-bit. 

Actor model theory

In theoretical computer science, Actor model theory concerns theoretical issues for the Actor model.

Address space

In computing, an address space defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, a memory cell or other logical or physical entity.

Advanced Micro Devices

Advanced Micro Devices, Inc. (AMD) is an American multinational semiconductor company based in Santa Clara, California, that develops computer processors and related technologies for business and consumer markets.

Algorithm

In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems.

Algorithmic skeleton

In computing, algorithmic skeletons, or parallelism patterns, are a high-level parallel programming model for parallel and distributed computing.

AltiVec

AltiVec is a single-precision floating point and integer SIMD instruction set designed and owned by Apple, IBM, and Freescale Semiconductor (formerly Motorola's Semiconductor Products Sector) — the AIM alliance.

Amdahl's law

In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved.
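The law can be written as S = 1 / ((1 - p) + p / s), where p is the fraction of the work that benefits from the improvement and s is the speedup of that fraction. A minimal sketch in Python (function name is illustrative, not from the source):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up
    by a factor s (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / s)

# The serial part dominates: with 95% parallel work, the overall
# speedup can never exceed 1 / 0.05 = 20, however many processors
# are added.
```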

Analytical Engine

The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage.

Apple Inc.

Apple Inc. is an American multinational technology company headquartered in Cupertino, California, that designs, develops, and sells consumer electronics, computer software, and online services.

Application checkpointing

Checkpointing is a technique to add fault tolerance into computing systems.

Application programming interface

In computer programming, an application programming interface (API) is a set of subroutine definitions, protocols, and tools for building software.

Application-specific integrated circuit

An Application-Specific Integrated Circuit (ASIC), is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use.

Atomic commit

In the field of computer science, an atomic commit is an operation that applies a set of distinct changes as a single operation.

Bandwidth (computing)

In computing, bandwidth is the maximum rate of data transfer across a given path.

Barnes–Hut simulation

The Barnes–Hut simulation (named after Josh Barnes and Piet Hut) is an approximation algorithm for performing an n-body simulation.

Barrier (computer science)

In parallel computing, a barrier is a type of synchronization method.
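A small illustration using Python's threading.Barrier, where no thread proceeds past the barrier until all three have arrived:

```python
import threading

results = []
barrier = threading.Barrier(3)          # wait for 3 parties

def worker(i):
    results.append(("before", i))
    barrier.wait()                      # blocks until all 3 threads arrive
    results.append(("after", i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every "before" record precedes every "after" record.
```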

Bayesian network

A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG).

Beowulf cluster

A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them.

Berkeley Open Infrastructure for Network Computing

The Berkeley Open Infrastructure for Network Computing (BOINC, pronounced to rhyme with "oink"), an open-source middleware system, supports volunteer and grid computing.

Bioinformatics

Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data.

Bit-level parallelism

Bit-level parallelism is a form of parallel computing based on increasing processor word size.
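For example, an 8-bit processor must add two 16-bit integers in two steps (low bytes first, then high bytes plus the carry), while a 16-bit processor needs a single instruction. A sketch of the 8-bit sequence (illustrative, not from the source):

```python
def add16_on_8bit(a, b):
    """Add two 16-bit values using only 8-bit operations, as an
    8-bit processor would: low bytes first, then high bytes + carry."""
    lo = (a & 0xFF) + (b & 0xFF)
    carry = lo >> 8                      # carry out of the low-byte add
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)
```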

Blue Gene

Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the PFLOPS (petaFLOPS) range, with low power consumption.

Branch and bound

Branch and bound (BB, B&B, or BnB) is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization.

BrookGPU

The Brook programming language and its implementation BrookGPU were early and influential attempts to enable general-purpose computing on graphics processing units.

Brute-force attack

In cryptography, a brute-force attack consists of an attacker trying many passwords or passphrases with the hope of eventually guessing correctly.
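A toy exhaustive search over a three-letter alphabet (real attacks work the same way at vastly larger scale, and parallelize trivially because candidate guesses are independent):

```python
import hashlib
import itertools

def brute_force(target_hash, alphabet, max_len):
    """Try every candidate string up to max_len until one hashes
    to the target (a toy exhaustive-search attack)."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None
```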

Burroughs Corporation

The Burroughs Corporation was a major American manufacturer of business equipment.

Bus (computing)

In computer architecture, a bus (a contraction of the Latin omnibus) is a communication system that transfers data between components inside a computer, or between computers.

Bus contention

Bus contention, in computer design, is an undesirable state of the bus in which more than one device on the bus attempts to place values on the bus at the same time.

Bus snooping

Bus snooping or bus sniffing is a scheme by which a coherency controller (snooper) in a cache monitors the bus transactions in order to maintain cache coherency in distributed shared memory systems.

C (programming language)

C (as in the letter c) is a general-purpose, imperative computer programming language supporting structured programming, lexical variable scope, and recursion, with a static type system that prevents many unintended operations.

C to HDL

C to HDL tools convert C or C-like computer program code into a hardware description language (HDL) such as VHDL or Verilog.

C.mmp

The C.mmp was an early MIMD multiprocessor system developed at Carnegie Mellon University by William Wulf (1971).

Cache coherence

In computer architecture, cache coherence is the uniformity of shared resource data that ends up stored in multiple local caches.

Calculus of communicating systems

The calculus of communicating systems (CCS) is a process calculus introduced by Robin Milner around 1980 and the title of a book describing the calculus.

Capacitance

Capacitance is the ratio of the change in an electric charge in a system to the corresponding change in its electric potential.

Carnegie Mellon University

Carnegie Mellon University (commonly known as CMU) is a private research university in Pittsburgh, Pennsylvania.

Carry flag

In computer processors the carry flag (usually indicated as the C flag) is a single bit in a system status (flag) register used to indicate when an arithmetic carry or borrow has been generated out of the most significant ALU bit position.

Cell (microprocessor)

Cell is a multi-core microprocessor microarchitecture that combines a general-purpose Power Architecture core of modest performance with streamlined coprocessing elements which greatly accelerate multimedia and vector processing applications, as well as many other forms of dedicated computation.

Central processing unit

A central processing unit (CPU) is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions.

Charles Babbage

Charles Babbage (26 December 1791 – 18 October 1871) was an English polymath.

Combinational logic

In digital circuit theory, combinational logic (sometimes also referred to as time-independent logic) is a type of digital logic which is implemented by Boolean circuits, where the output is a pure function of the present input only.

Commercial off-the-shelf

Commercial off-the-shelf or commercially available off-the-shelf (COTS) products satisfy the needs of the purchasing organization without the need to commission custom-made, or bespoke, solutions.

Communicating sequential processes

In computer science, communicating sequential processes (CSP) is a formal language for describing patterns of interaction in concurrent systems.

Compiler

A compiler is computer software that transforms computer code written in one programming language (the source language) into another programming language (the target language).

Compute kernel

In computing, a compute kernel is a routine compiled for high throughput accelerators (such as GPUs, DSPs or FPGAs), separate from (but used by) a main program.

Computer architecture

In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems.

Computer cluster

A computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system.

Computer data storage

Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data.

Computer graphics

Computer graphics are pictures and films created using computers.

Computer industry

The computer industry, or information technology (IT) industry, is the range of businesses involved in designing computer hardware and computer networking infrastructures, developing computer software, manufacturing computer components, and providing information technology (IT) services.

Computer multitasking

In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time.

Computer network

A computer network, or data network, is a digital telecommunications network which allows nodes to share resources.

Computer performance

Computer performance is the amount of work accomplished by a computer system.

Computerworld

Computerworld is a publication website and digital magazine for information technology (IT) and business technology professionals.

Computing

Computing is any goal-oriented activity requiring, benefiting from, or creating computers.

Concurrency (computer science)

In computer science, concurrency refers to the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome.

Concurrent computing

Concurrent computing is a form of computing in which several computations are executed during overlapping time periods—concurrently—instead of sequentially (one completing before the next starts).

Consistency model

In computer science, consistency models are used in distributed systems such as distributed shared memory systems or distributed data stores (such as filesystems, databases, optimistic replication systems or Web caching).

Content Addressable Parallel Processor

A Content Addressable Parallel Processor (CAPP) is a type of parallel processor which uses content-addressing memory (CAM) principles.

Control unit

The control unit (CU) is a component of a computer's central processing unit (CPU) that directs the operation of the processor.

Cooley–Tukey FFT algorithm

The Cooley–Tukey algorithm, named after J. W. Cooley and John Tukey, is the most common fast Fourier transform (FFT) algorithm.
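A radix-2 decimation-in-time sketch in Python (input length must be a power of two); the butterfly operations in the final loop are also the natural unit of parallel work:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                 # DFT of even-indexed samples
    odd = fft(x[1::2])                  # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):             # combine with twiddle factors
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```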

Core dump

In computing, a core dump, crash dump, memory dump, or system dump consists of the recorded state of the working memory of a computer program at a specific time, generally when the program has crashed or otherwise terminated abnormally.

CPU cache

A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory.

CPU socket

In computer hardware, a CPU socket or CPU slot comprises one or more mechanical components providing mechanical and electrical connections between a microprocessor and a printed circuit board (PCB).

CPU-bound

In computer science, a computer is CPU-bound (or compute-bound) when the time for it to complete a task is determined principally by the speed of the central processor: processor utilization is high, perhaps at 100% usage for many seconds or minutes.

Cray

Cray Inc. is an American supercomputer manufacturer headquartered in Seattle, Washington.

Cray-1

The Cray-1 was a supercomputer designed, manufactured and marketed by Cray Research.

Critical path method

The critical path method (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities.
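At its core is a longest-path (earliest-finish) computation over the task graph, which can be sketched as follows (task names are hypothetical); in parallel computing, the critical path also lower-bounds how quickly a set of dependent tasks can finish:

```python
def critical_path_length(durations, prereqs):
    """Length of the critical (longest) path through a task DAG.
    durations: task -> time; prereqs: task -> list of prerequisite tasks."""
    memo = {}
    def earliest_finish(task):
        if task not in memo:
            memo[task] = durations[task] + max(
                (earliest_finish(p) for p in prereqs.get(task, [])), default=0)
        return memo[task]
    return max(earliest_finish(t) for t in durations)
```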

Critical section

In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior, so the parts of the program where the shared resource is accessed, known as critical sections, are protected.
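A typical guard in Python, where a lock makes the read-modify-write of a shared counter atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread at a time in the critical section
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40000; without the lock, updates could be lost
```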

Crossbar switch

In electronics, a crossbar switch (cross-point switch, matrix switch) is a collection of switches arranged in a matrix configuration.

CUDA

CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.

Daniel Slotnick

Daniel Leonid Slotnick (1931–1985) was a mathematician and computer architect.

Data dependency

A data dependency in computer science is a situation in which a program statement (instruction) refers to the data of a preceding statement.

Data parallelism

Data parallelism is parallelization across multiple processors in parallel computing environments.
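The same operation is applied to different chunks of the data on different workers; a sketch with a thread pool (names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def scale_chunk(chunk, factor):
    # identical operation on each worker, different slice of the data
    return [x * factor for x in chunk]

data = list(range(8))
chunks = [data[:4], data[4:]]           # one chunk per worker
with ThreadPoolExecutor(max_workers=2) as pool:
    parts = pool.map(scale_chunk, chunks, [2, 2])
result = [x for part in parts for x in part]
```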

Database

A database is an organized collection of data, stored and accessed electronically.

Dataflow architecture

Dataflow architecture is a computer architecture that directly contrasts the traditional von Neumann architecture or control flow architecture.

David Patterson (computer scientist)

David Andrew Patterson (born November 16, 1947) is an American computer pioneer and academic who has held the position of Professor of Computer Science at the University of California, Berkeley since 1976.

Deadlock

In concurrent computing, a deadlock is a state in which each member of a group is waiting for some other member to take action, such as sending a message or more commonly releasing a lock.
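The textbook case is two threads that each hold one lock while waiting for the other's. One standard prevention strategy, sketched below, is to acquire locks in a single global order so the circular wait can never form:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def transfer(name):
    # Both threads take the locks in the same order (a before b), so
    # neither can hold one lock while waiting for the other: no deadlock.
    with lock_a:
        with lock_b:
            done.append(name)

t1 = threading.Thread(target=transfer, args=("t1",))
t2 = threading.Thread(target=transfer, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
```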

Desktop computer

A desktop computer is a personal computer designed for regular use at a single location on or near a desk or table due to its size and power requirements.

Directive (programming)

In computer programming, a directive or pragma (from "pragmatic") is a language construct that specifies how a compiler (or other translator) should process its input.

Distributed computing

Distributed computing is a field of computer science that studies distributed systems.

Distributed memory

In computer science, distributed memory refers to a multiprocessor computer system in which each processor has its own private memory.

Distributed shared memory

In computer science, distributed shared memory (DSM) is a form of memory architecture where physically separated memories can be addressed as one logically shared address space.

Domain-specific language

A domain-specific language (DSL) is a computer language specialized to a particular application domain.

Donald Becker

Donald Becker is an American computer programmer who wrote Ethernet drivers for the Linux operating system.

Dynamic programming

Dynamic programming is both a mathematical optimization method and a computer programming method.
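The method stores solutions to overlapping subproblems so each is computed only once; the classic Fibonacci recurrence, computed bottom-up, is a minimal example:

```python
def fib(n):
    """Bottom-up dynamic programming: each value is computed once
    from previously stored subproblem results."""
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(n - 1):
        prev, cur = cur, prev + cur
    return cur
```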

Embarrassingly parallel

In parallel computing, an embarrassingly parallel workload or problem (also called perfectly parallel or pleasingly parallel) is one where little or no effort is needed to separate the problem into a number of parallel tasks.
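Mapping an independent function over the inputs is the canonical case; a sketch with a thread pool, where no task communicates with any other:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x        # no shared state, no inter-task communication

with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(square, range(10)))   # result order preserved
```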

Ernest Hilgard

Ernest Ropiequet "Jack" Hilgard (July 25, 1904 – October 22, 2001) was an American psychologist and professor at Stanford University.

Error detection and correction

In information theory and coding theory with applications in computer science and telecommunication, error detection and correction or error control are techniques that enable reliable delivery of digital data over unreliable communication channels.

Ethernet

Ethernet is a family of computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN).

Execution unit

In computer engineering, an execution unit (also called a functional unit) is a part of the central processing unit (CPU) that performs the operations and calculations as instructed by the computer program.

Explicit parallelism

In computer programming, explicit parallelism is the representation of concurrent computations by means of primitives in the form of special-purpose directives or function calls.

Fault-tolerant computer system

Fault-tolerant computer systems are systems designed around the concepts of fault tolerance.

Fiber (computer science)

In computer science, a fiber is a particularly lightweight thread of execution.

Field-programmable gate array

A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence the name "field-programmable".

Finite element method

The finite element method (FEM) is a numerical method for solving problems of engineering and mathematical physics.

Finite-state machine

A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation.
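A two-state machine that accepts binary strings containing an even number of 1s, as a small illustration (names are illustrative):

```python
def accepts(s):
    """FSM accepting binary strings with an even number of 1s.
    States: 'even' (start, accepting) and 'odd'."""
    transition = {("even", "0"): "even", ("even", "1"): "odd",
                  ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for ch in s:
        state = transition[(state, ch)]
    return state == "even"
```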

Floating-point arithmetic

In computing, floating-point arithmetic is arithmetic using formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision.
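The approximation is visible even in simple sums, since decimal fractions such as 0.1 have no exact binary representation:

```python
import math

total = 0.1 + 0.2
print(total)                       # 0.30000000000000004, not 0.3
assert total != 0.3                # exact comparison fails
assert math.isclose(total, 0.3)    # compare within a tolerance instead
```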

Flynn's taxonomy

Flynn's taxonomy is a classification of computer architectures, proposed by Michael J. Flynn in 1966.

Folding@home

Folding@home (FAH or F@h) is a distributed computing project for disease research that simulates protein folding, computational drug design, and other types of molecular dynamics.

Fortran

Fortran (formerly FORTRAN, derived from Formula Translation) is a general-purpose, compiled imperative programming language that is especially suited to numeric computation and scientific computing.

Freescale Semiconductor

Freescale Semiconductor, Inc. was an American multinational corporation headquartered in Austin, Texas, with design, research and development, manufacturing and sales operations in more than 75 locations in 19 countries.

Frequency scaling

In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question.

Futures and promises

In computer science, future, promise, delay, and deferred refer to constructs used for synchronizing program execution in some concurrent programming languages.
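A future is a placeholder for a value still being computed; a minimal example with Python's standard library:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_square, 6)   # computation starts in background
    # ... the caller is free to do other work here ...
    value = future.result()                # blocks until the value is ready
```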

George Gurdjieff

George Ivanovich Gurdjieff (31 March 1866 / 14 January 1872 / 28 November 1877 – 29 October 1949), commonly known as G. I. Gurdjieff, was a mystic, philosopher, spiritual teacher, and composer of Armenian and Greek descent, born in Alexandrapol (now Gyumri), Armenia.

Gigabit Ethernet

In computer networking, Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second (1,000,000,000 bits per second), as defined by the IEEE 802.3-2008 standard.

Glossary of computer hardware terms

This is a glossary of terms relating to computer hardware – physical computer hardware, architectural issues, and peripherals.

Graph traversal

In computer science, graph traversal (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph.
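A breadth-first traversal in Python (the graph shown is illustrative):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal; returns vertices in visit order."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph.get(v, []):
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
```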

Graphical model

A graphical model or probabilistic graphical model (PGM) or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables.

New!!: Parallel computing and Graphical model · See more »

Graphics processing unit

A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.

New!!: Parallel computing and Graphics processing unit · See more »

Gröbner basis

In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring over a field.

New!!: Parallel computing and Gröbner basis · See more »

Grid computing

Grid computing is the collection of computer resources from multiple locations to reach a common goal.

New!!: Parallel computing and Grid computing · See more »

Gustafson's law

In computer architecture, Gustafson's law (or Gustafson–Barsis's law) gives the theoretical speedup in latency of the execution of a task at fixed execution time that can be expected of a system whose resources are improved.
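In its common form the law reads S = s + (1 - s)·N, where s is the serial fraction of the fixed-time workload and N the processor count. A small sketch with hypothetical numbers:

```python
def gustafson_speedup(n_processors, serial_fraction):
    """Scaled speedup S = s + (1 - s) * N at fixed execution time."""
    s = serial_fraction
    return s + (1.0 - s) * n_processors

# hypothetical workload: 5% serial time, 64 processors
speedup_64 = gustafson_speedup(64, 0.05)   # about 60.85
```

Unlike Amdahl's law, the parallel portion is assumed to grow with the machine, so speedup scales nearly linearly in N.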

New!!: Parallel computing and Gustafson's law · See more »

Handel-C
Handel-C is a high-level programming language which targets low-level hardware, most commonly used in the programming of FPGAs.

New!!: Parallel computing and Handel-C · See more »

Hardware description language

In computer engineering, a hardware description language (HDL) is a specialized computer language used to describe the structure and behavior of electronic circuits, and most commonly, digital logic circuits.

New!!: Parallel computing and Hardware description language · See more »

Haskell (programming language)

Haskell is a standardized, general-purpose compiled purely functional programming language, with non-strict semantics and strong static typing.

New!!: Parallel computing and Haskell (programming language) · See more »

Hidden Markov model

Hidden Markov Model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (i.e. hidden) states.

New!!: Parallel computing and Hidden Markov model · See more »

Holy Grail

The Holy Grail is a vessel that serves as an important motif in Arthurian literature.

New!!: Parallel computing and Holy Grail · See more »

Honeywell
Honeywell International Inc. is an American multinational conglomerate company that produces a variety of commercial and consumer products, engineering services and aerospace systems for a wide variety of customers, from private consumers to major corporations and governments.

New!!: Parallel computing and Honeywell · See more »

Hyper-threading
Hyper-threading (officially called Hyper-Threading Technology or HT Technology, and abbreviated as HTT or HT) is Intel's proprietary simultaneous multithreading (SMT) implementation used to improve parallelization of computations (doing multiple tasks at once) performed on x86 microprocessors.

New!!: Parallel computing and Hyper-threading · See more »

Hypercube graph

In graph theory, the hypercube graph is the graph formed from the vertices and edges of an n-dimensional hypercube.
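Hypercube topologies are common processor interconnects because any two of the 2^n nodes are at most n hops apart. A small sketch (illustrative, not from the source) enumerates the edges: two vertices are adjacent iff their binary labels differ in exactly one bit.

```python
def hypercube_edges(n):
    """Edges of the n-dimensional hypercube graph."""
    edges = []
    for v in range(2 ** n):
        for bit in range(n):
            w = v ^ (1 << bit)   # flip one bit of the label
            if v < w:            # count each edge once
                edges.append((v, w))
    return edges
```

For n = 3 this yields the 12 edges of the ordinary cube.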

New!!: Parallel computing and Hypercube graph · See more »

HyperTransport
HyperTransport (HT), formerly known as Lightning Data Transport (LDT), is a technology for interconnection of computer processors.

New!!: Parallel computing and HyperTransport · See more »

IBM
The International Business Machines Corporation (IBM) is an American multinational technology company headquartered in Armonk, New York, United States, with operations in over 170 countries.

New!!: Parallel computing and IBM · See more »

ILLIAC IV
The ILLIAC IV was the first massively parallel computer.

New!!: Parallel computing and ILLIAC IV · See more »

Implicit parallelism

In computer science, implicit parallelism is a characteristic of a programming language that allows a compiler or interpreter to automatically exploit the parallelism inherent to the computations expressed by some of the language's constructs.

New!!: Parallel computing and Implicit parallelism · See more »

Impulse C

Impulse C is a subset of the C programming language combined with a C-compatible function library supporting parallel programming, in particular for programming of applications targeting FPGA devices.

New!!: Parallel computing and Impulse C · See more »

InfiniBand
InfiniBand (abbreviated IB) is a computer-networking communications standard used in high-performance computing that features very high throughput and very low latency.

New!!: Parallel computing and InfiniBand · See more »

Instruction pipelining

Instruction pipelining is a technique for implementing instruction-level parallelism within a single processor.

New!!: Parallel computing and Instruction pipelining · See more »

Instruction set architecture

An instruction set architecture (ISA) is an abstract model of a computer.

New!!: Parallel computing and Instruction set architecture · See more »

Instruction-level parallelism

Instruction-level parallelism (ILP) is a measure of how many of the instructions in a computer program can be executed simultaneously.

New!!: Parallel computing and Instruction-level parallelism · See more »

Instructions per cycle

In computer architecture, instructions per cycle (IPC) is one aspect of a processor's performance: the average number of instructions executed for each clock cycle.
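The metric itself is just a ratio; a one-line sketch with hypothetical counter readings:

```python
def instructions_per_cycle(instructions_retired, clock_cycles):
    """IPC = retired instructions / elapsed clock cycles."""
    return instructions_retired / clock_cycles

# hypothetical hardware-counter readings
ipc = instructions_per_cycle(8_000_000, 2_000_000)   # 4.0
```

An IPC above 1 indicates the processor is exploiting instruction-level parallelism, retiring several instructions per clock.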

New!!: Parallel computing and Instructions per cycle · See more »

Integer

An integer (from the Latin ''integer'', meaning "whole") is a number that can be written without a fractional component.

New!!: Parallel computing and Integer · See more »

Intel
Intel Corporation (stylized as intel) is an American multinational corporation and technology company headquartered in Santa Clara, California, in the Silicon Valley.

New!!: Parallel computing and Intel · See more »

Inter-process communication

In computer science, inter-process communication or interprocess communication (IPC) refers specifically to the mechanisms an operating system provides to allow the processes to manage shared data.

New!!: Parallel computing and Inter-process communication · See more »

Internet
The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide.

New!!: Parallel computing and Internet · See more »

Internet protocol suite

The Internet protocol suite is the conceptual model and set of communications protocols used on the Internet and similar computer networks.

New!!: Parallel computing and Internet protocol suite · See more »

Π-calculus

In theoretical computer science, the π-calculus (or pi-calculus) is a process calculus.

New!!: Parallel computing and Π-calculus · See more »

John Cocke

John Cocke (May 30, 1925 – July 16, 2002) was an American computer scientist recognized for his large contribution to computer architecture and optimizing compiler design.

New!!: Parallel computing and John Cocke · See more »

John L. Hennessy

John Leroy Hennessy (born September 22, 1952) is an American computer scientist, academician, businessman and Chairman of Alphabet Inc.

New!!: Parallel computing and John L. Hennessy · See more »

Latency (engineering)

Latency is a time interval between the stimulation and response, or, from a more general point of view, a time delay between the cause and the effect of some physical change in the system being observed.

New!!: Parallel computing and Latency (engineering) · See more »

Lattice Boltzmann methods

Lattice Boltzmann methods (LBM) (or thermal Lattice Boltzmann methods (TLBM)) is a class of computational fluid dynamics (CFD) methods for fluid simulation.

New!!: Parallel computing and Lattice Boltzmann methods · See more »

Lawrence Livermore National Laboratory

Lawrence Livermore National Laboratory (LLNL) is an American federal research facility in Livermore, California, United States, founded by the University of California, Berkeley in 1952.

New!!: Parallel computing and Lawrence Livermore National Laboratory · See more »

Leslie Lamport

Leslie B. Lamport (born February 7, 1941) is an American computer scientist.

New!!: Parallel computing and Leslie Lamport · See more »

Library (computing)

In computer science, a library is a collection of non-volatile resources used by computer programs, often for software development.

New!!: Parallel computing and Library (computing) · See more »

Linear algebra

Linear algebra is the branch of mathematics concerning linear equations and linear functions, and their representations through matrices and vector spaces.

New!!: Parallel computing and Linear algebra · See more »

Linearizability
In concurrent programming, an operation (or set of operations) is atomic, linearizable, indivisible or uninterruptible if it appears to the rest of the system to occur at once without being interrupted.

New!!: Parallel computing and Linearizability · See more »

List of concurrent and parallel programming languages

This article lists concurrent and parallel programming languages, categorizing them by a defining paradigm.

New!!: Parallel computing and List of concurrent and parallel programming languages · See more »

List of distributed computing conferences

This is a selected list of international academic conferences in the fields of distributed computing, parallel computing, and concurrent computing.

New!!: Parallel computing and List of distributed computing conferences · See more »

List of distributed computing projects

This is a list of distributed computing and grid computing projects.

New!!: Parallel computing and List of distributed computing projects · See more »

List of important publications in concurrent, parallel, and distributed computing

This is a list of important publications in concurrent, parallel, and distributed computing, organized by field.

New!!: Parallel computing and List of important publications in concurrent, parallel, and distributed computing · See more »

Load balancing (computing)

In computing, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives.
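One of the simplest balancing policies is static round-robin assignment; this sketch (illustrative, not from the source) distributes tasks over workers in rotation.

```python
from itertools import cycle

def round_robin(tasks, workers):
    """Assign tasks to workers in rotation, a minimal static balancer."""
    assignment = {w: [] for w in workers}
    for task, worker in zip(tasks, cycle(workers)):
        assignment[worker].append(task)
    return assignment

plan = round_robin(range(7), ['w0', 'w1', 'w2'])
```

Real balancers also weigh current load and task cost, but the rotation above already bounds the imbalance to one task.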

New!!: Parallel computing and Load balancing (computing) · See more »

Local area network

A local area network (LAN) is a computer network that interconnects computers within a limited area such as a residence, school, laboratory, university campus or office building.

New!!: Parallel computing and Local area network · See more »

Lock (computer science)

In computer science, a lock or mutex (from mutual exclusion) is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution.
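A minimal sketch (not from the source) using Python's `threading.Lock`: four threads increment a shared counter, and the lock makes each increment atomic so no updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread may hold the lock at a time
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, concurrent read-modify-write sequences could interleave and some increments would be lost.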

New!!: Parallel computing and Lock (computer science) · See more »

Lockstep (computing)

Lockstep systems are fault-tolerant computer systems that run the same set of operations at the same time in parallel.

New!!: Parallel computing and Lockstep (computing) · See more »

Luigi Federico Menabrea

Luigi Federico Menabrea (4 September 1809 – 24 May 1896), later made 1st Count Menabrea and 1st Marquess of Valdora, was an Italian general, statesman and mathematician who served as the Prime Minister of Italy from 1867 to 1869.

New!!: Parallel computing and Luigi Federico Menabrea · See more »

Manycore processor

Manycore processors are specialist multi-core processors designed for a high degree of parallel processing, containing a large number of simpler, independent processor cores (e.g. 10s, 100s, or 1,000s).

New!!: Parallel computing and Manycore processor · See more »

Marvin Minsky

Marvin Lee Minsky (August 9, 1927 – January 24, 2016) was an American cognitive scientist concerned largely with research of artificial intelligence (AI), co-founder of the Massachusetts Institute of Technology's AI laboratory, and author of several texts concerning AI and philosophy.

New!!: Parallel computing and Marvin Minsky · See more »

Massively parallel

In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel (simultaneously).

New!!: Parallel computing and Massively parallel · See more »

Mathematical finance

Mathematical finance, also known as quantitative finance, is a field of applied mathematics, concerned with mathematical modeling of financial markets.

New!!: Parallel computing and Mathematical finance · See more »

Matrix (mathematics)

In mathematics, a matrix (plural: matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns.

New!!: Parallel computing and Matrix (mathematics) · See more »

Mean time between failures

Mean time between failures (MTBF) is the predicted elapsed time between inherent failures of a mechanical or electronic system, during normal system operation.
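The basic estimate is total operating time divided by the number of failures observed; a one-line sketch with hypothetical figures:

```python
def mtbf(total_operating_hours, failure_count):
    """Estimated mean time between failures."""
    return total_operating_hours / failure_count

# hypothetical fleet data: 10,000 device-hours, 4 failures observed
estimate = mtbf(10_000, 4)   # 2500.0 hours
```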

New!!: Parallel computing and Mean time between failures · See more »

Memory latency

In computing, memory latency is the time (the latency) between initiating a request for a byte or word in memory until it is retrieved by a processor.

New!!: Parallel computing and Memory latency · See more »

Memory virtualization

In computer science, memory virtualization decouples volatile random access memory (RAM) resources from individual systems in the data center, and then aggregates those resources into a virtualized memory pool available to any computer in the cluster.

New!!: Parallel computing and Memory virtualization · See more »

Mesh networking

A mesh network is a local network topology in which the infrastructure nodes (i.e. bridges, switches and other infrastructure devices) connect directly, dynamically and non-hierarchically to as many other nodes as possible and cooperate with one another to efficiently route data from/to clients.

New!!: Parallel computing and Mesh networking · See more »

Message passing

In computer science, message passing is a technique for invoking behavior (i.e., running a program) on a computer.
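In a message-passing style, threads or processes share nothing and communicate only through a channel. A minimal sketch (illustrative, not from the source) using a thread-safe queue as the channel:

```python
import queue
import threading

mailbox = queue.Queue()      # the communication channel
results = []

def worker():
    while True:
        msg = mailbox.get()  # blocks until a message arrives
        if msg is None:      # sentinel value: stop
            break
        results.append(msg * 2)

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    mailbox.put(i)           # send a message
mailbox.put(None)            # tell the worker to shut down
t.join()
```

The same send/receive pattern, scaled across machines, is what MPI standardizes.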

New!!: Parallel computing and Message passing · See more »

Message Passing Interface

Message Passing Interface (MPI) is a standardized and portable message-passing standard designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures.

New!!: Parallel computing and Message Passing Interface · See more »

Meteorology
Meteorology is a branch of the atmospheric sciences which includes atmospheric chemistry and atmospheric physics, with a major focus on weather forecasting.

New!!: Parallel computing and Meteorology · See more »

Michael Gazzaniga

Michael S. Gazzaniga (born December 12, 1939) is a professor of psychology at the University of California, Santa Barbara, where he heads the new SAGE Center for the Study of the Mind.

New!!: Parallel computing and Michael Gazzaniga · See more »

Michael J. Flynn

Michael J. Flynn (born May 20, 1934) is an American professor emeritus at Stanford University.

New!!: Parallel computing and Michael J. Flynn · See more »

Michio Kaku

Michio Kaku (born 24 January 1947) is an American theoretical physicist, futurist, and popularizer of science.

New!!: Parallel computing and Michio Kaku · See more »

Middleware
Middleware is computer software that provides services to software applications beyond those available from the operating system.

New!!: Parallel computing and Middleware · See more »

MIT Computer Science and Artificial Intelligence Laboratory

MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology formed by the 2003 merger of the Laboratory for Computer Science and the Artificial Intelligence Laboratory.

New!!: Parallel computing and MIT Computer Science and Artificial Intelligence Laboratory · See more »

Mitrionics
Mitrionics is a Swedish company manufacturing softcore reconfigurable processors.

New!!: Parallel computing and Mitrionics · See more »

Molecular dynamics

Molecular dynamics (MD) is a computer simulation method for studying the physical movements of atoms and molecules.

New!!: Parallel computing and Molecular dynamics · See more »

Monte Carlo method

Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.
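Monte Carlo methods parallelize trivially because the random samples are independent. A classic single-threaded sketch (illustrative, not from the source) estimates π by sampling points in the unit square:

```python
import random

def estimate_pi(samples, seed=0):
    """Fraction of random points inside the quarter circle, times 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

pi_hat = estimate_pi(100_000)
```

To parallelize, each worker runs the loop with its own seed and the per-worker counts are summed at the end.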

New!!: Parallel computing and Monte Carlo method · See more »

Moore's law

Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years.

New!!: Parallel computing and Moore's law · See more »

Multi-core processor

A multi-core processor is a single computing component with two or more independent processing units called cores, which read and execute program instructions.

New!!: Parallel computing and Multi-core processor · See more »

Multics
Multics (Multiplexed Information and Computing Service) is an influential early time-sharing operating system, based around the concept of a single-level memory.

New!!: Parallel computing and Multics · See more »

Multiplexing
In telecommunications and computer networks, multiplexing (sometimes contracted to muxing) is a method by which multiple analog or digital signals are combined into one signal over a shared medium.

New!!: Parallel computing and Multiplexing · See more »

Mutual exclusion

In computer science, mutual exclusion is a property of concurrency control, which is instituted for the purpose of preventing race conditions; it is the requirement that one thread of execution never enter its critical section at the same time that another concurrent thread of execution enters its own critical section.

New!!: Parallel computing and Mutual exclusion · See more »

Myrinet
Myrinet, ANSI/VITA 26-1998, is a high-speed local area networking system designed by the company Myricom to be used as an interconnect between multiple machines to form computer clusters.

New!!: Parallel computing and Myrinet · See more »

N-body problem

In physics, the n-body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally.
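The core computation is the O(n²) pairwise force sum, which is why n-body codes are a staple parallel workload. A toy 1-D sketch (illustrative only, with g = 1 and no softening):

```python
def gravity_accelerations(positions, masses, g=1.0):
    """Pairwise gravitational acceleration in one dimension (toy model)."""
    n = len(positions)
    acc = [0.0] * n
    for i in range(n):            # the outer loop parallelizes naturally
        for j in range(n):
            if i != j:
                dx = positions[j] - positions[i]
                acc[i] += g * masses[j] * dx / (abs(dx) ** 3)
    return acc

a = gravity_accelerations([0.0, 1.0], [1.0, 1.0])
```

Each body's acceleration depends only on read-only positions, so the outer loop can be split across processors; methods like Barnes-Hut then cut the pair count below O(n²).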

New!!: Parallel computing and N-body problem · See more »

Network topology

Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network.

New!!: Parallel computing and Network topology · See more »

Non-blocking algorithm

In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread; for some operations, these algorithms provide a useful alternative to traditional blocking implementations.

New!!: Parallel computing and Non-blocking algorithm · See more »

Non-uniform memory access

Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor.

New!!: Parallel computing and Non-uniform memory access · See more »

Nvidia
Nvidia Corporation (most commonly referred to as Nvidia, stylized as NVIDIA, or (due to their logo) nVIDIA) is an American technology company incorporated in Delaware and based in Santa Clara, California.

New!!: Parallel computing and Nvidia · See more »

Nvidia Tesla

Nvidia Tesla is Nvidia's brand name for their products targeting stream processing or general-purpose GPU.

New!!: Parallel computing and Nvidia Tesla · See more »

Object (computer science)

In computer science, an object can be a variable, a data structure, a function, or a method, and as such, is a value in memory referenced by an identifier.

New!!: Parallel computing and Object (computer science) · See more »

OpenCL
OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators.

New!!: Parallel computing and OpenCL · See more »

OpenHMPP

OpenHMPP (HMPP for Hybrid Multicore Parallel Programming) is a programming standard for heterogeneous computing.

New!!: Parallel computing and OpenHMPP · See more »

OpenMP
OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most platforms, instruction set architectures and operating systems, including Solaris, AIX, HP-UX, Linux, macOS, and Windows.

New!!: Parallel computing and OpenMP · See more »

Operating system

An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.

New!!: Parallel computing and Operating system · See more »

Out-of-order execution

In computer engineering, out-of-order execution (or more formally dynamic execution) is a paradigm used in most high-performance central processing units to make use of instruction cycles that would otherwise be wasted.

New!!: Parallel computing and Out-of-order execution · See more »

Parallel algorithm

In computer science, a parallel algorithm, as opposed to a traditional serial algorithm, is an algorithm which can be executed a piece at a time on many different processing devices, and then combined together again at the end to get the correct result.

New!!: Parallel computing and Parallel algorithm · See more »

Parallel programming model

In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs.

New!!: Parallel computing and Parallel programming model · See more »

Parallel slowdown

Parallel slowdown is a phenomenon in parallel computing where parallelization of a parallel algorithm beyond a certain point causes the program to run slower (take more time to run to completion).

New!!: Parallel computing and Parallel slowdown · See more »

PeakStream
PeakStream was a parallel processing software company located in Redwood Shores, California founded by Matthew Papakipos and Asher Waldfogel in April 2005 and backed by Sequoia Capital and Kleiner Perkins.

New!!: Parallel computing and PeakStream · See more »

Pentium 4

Pentium 4 is a brand by Intel for an entire series of single-core CPUs for desktops, laptops and entry-level servers.

New!!: Parallel computing and Pentium 4 · See more »

Petri net

A Petri net, also known as a place/transition (PT) net, is one of several mathematical modeling languages for the description of distributed systems.

New!!: Parallel computing and Petri net · See more »

PlayStation 3

The PlayStation 3 (PS3) is a home video game console developed by Sony Computer Entertainment.

New!!: Parallel computing and PlayStation 3 · See more »

POSIX Threads

POSIX Threads, usually referred to as pthreads, is an execution model that exists independently from a language, as well as a parallel execution model.

New!!: Parallel computing and POSIX Threads · See more »

Process (computing)

In computing, a process is an instance of a computer program that is being executed.

New!!: Parallel computing and Process (computing) · See more »

Process calculus

In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems.

New!!: Parallel computing and Process calculus · See more »

Propagation delay

Propagation delay is a technical term that can have a different meaning depending on the context.

New!!: Parallel computing and Propagation delay · See more »

Protein folding

Protein folding is the physical process by which a protein chain acquires its native 3-dimensional structure, a conformation that is usually biologically functional, in an expeditious and reproducible manner.

New!!: Parallel computing and Protein folding · See more »

Race condition

A race condition or race hazard is the behavior of an electronics, software, or other system where the output is dependent on the sequence or timing of other uncontrollable events.

New!!: Parallel computing and Race condition · See more »

RapidMind
RapidMind Inc. was a privately held company founded and headquartered in Waterloo, Ontario, Canada, acquired by Intel in 2009.

New!!: Parallel computing and RapidMind · See more »

Reconfigurable computing

Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high speed computing fabrics like field-programmable gate arrays (FPGAs).

New!!: Parallel computing and Reconfigurable computing · See more »

Reduced instruction set computer

A reduced instruction set computer, or RISC (pronounced 'risk'), is one whose instruction set architecture (ISA) allows it to have fewer cycles per instruction (CPI) than a complex instruction set computer (CISC).

New!!: Parallel computing and Reduced instruction set computer · See more »

Redundancy (engineering)

In engineering, redundancy is the duplication of critical components or functions of a system with the intention of increasing reliability of the system, usually in the form of a backup or fail-safe, or to improve actual system performance, such as in the case of GNSS receivers, or multi-threaded computer processing.

New!!: Parallel computing and Redundancy (engineering) · See more »

Register renaming

In computer architecture, register renaming is a technique that eliminates the false data dependencies arising from the reuse of architectural registers by successive instructions that do not have any real data dependencies between them.

New!!: Parallel computing and Register renaming · See more »

Regular grid

A regular grid is a tessellation of n-dimensional Euclidean space by congruent parallelotopes (e.g. bricks).

New!!: Parallel computing and Regular grid · See more »

Resource management (computing)

In computer programming, resource management refers to techniques for managing resources (components with limited availability).

New!!: Parallel computing and Resource management (computing) · See more »

RIKEN MDGRAPE-3
MDGRAPE-3 is an ultra-high performance petascale supercomputer system developed by the RIKEN research institute in Japan.

New!!: Parallel computing and RIKEN MDGRAPE-3 · See more »

Ring network

A ring network is a network topology in which each node connects to exactly two other nodes, forming a single continuous pathway for signals through each node - a ring.

New!!: Parallel computing and Ring network · See more »

Rob Pike

Robert "Rob" C. Pike (born 1956) is a Canadian programmer and author.

New!!: Parallel computing and Rob Pike · See more »

Robert E. Ornstein

Robert Evan Ornstein (born 1942) is an American psychologist and author.

New!!: Parallel computing and Robert E. Ornstein · See more »

Routing
Routing is the process of selecting a path for traffic in a network, or between or across multiple networks.

New!!: Parallel computing and Routing · See more »

Scoreboarding
Scoreboarding is a centralized method, used in the CDC 6600 computer, for dynamically scheduling a pipeline so that the instructions can execute out of order when there are no conflicts and the hardware is available.

New!!: Parallel computing and Scoreboarding · See more »

Semaphore (programming)

In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multitasking operating system.
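Where a lock admits one holder, a counting semaphore admits up to k. A sketch (illustrative, not from the source) in which eight threads contend for two resource slots; the recorded peak concurrency never exceeds the semaphore's count.

```python
import threading

pool_slots = threading.Semaphore(2)   # at most two threads inside at once
guard = threading.Lock()
active = 0
peak = 0

def use_resource():
    global active, peak
    with pool_slots:                  # acquire one of the two slots
        with guard:
            active += 1
            peak = max(peak, active)  # record peak concurrency
        with guard:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```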

New!!: Parallel computing and Semaphore (programming) · See more »

Sequence analysis

In bioinformatics, sequence analysis is the process of subjecting a DNA, RNA or peptide sequence to any of a wide range of analytical methods to understand its features, function, structure, or evolution.

New!!: Parallel computing and Sequence analysis · See more »

SequenceL
SequenceL is a general purpose functional programming language and auto-parallelizing (Parallel computing) compiler and tool set, whose primary design objectives are performance on multi-core processor hardware, ease of programming, platform portability/optimization, and code clarity and readability.

New!!: Parallel computing and SequenceL · See more »

Sequential consistency

Sequential consistency is one of the consistency models used in the domain of concurrent computing (e.g. in distributed shared memory, distributed transactions, etc.). It was first defined by Leslie Lamport as the property that "the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program". In other words, the execution order within a single processor (or thread) matches the program order, while the execution order between processors (or threads) is undefined.

New!!: Parallel computing and Sequential consistency · See more »

Serial computer

A serial computer is a computer typified by bit-serial architecture — i.e., internally operating on one bit or digit for each clock cycle.

New!!: Parallel computing and Serial computer · See more »

Serializability

In concurrency control of databases, transaction processing (transaction management), and various transactional applications (e.g., transactional memory and software transactional memory), both centralized and distributed, a transaction schedule is serializable if its outcome (e.g., the resulting database state) is equal to the outcome of its transactions executed serially, i.e. without overlapping in time.

New!!: Parallel computing and Serializability · See more »

Server (computing)

In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients".

New!!: Parallel computing and Server (computing) · See more »

SETI@home
SETI@home ("SETI at home") is an Internet-based public volunteer computing project employing the BOINC software platform created by the Berkeley SETI Research Center and is hosted by the Space Sciences Laboratory, at the University of California, Berkeley.

New!!: Parallel computing and SETI@home · See more »

Seymour Papert

Seymour Aubrey Papert (February 29, 1928 – July 31, 2016) was a South African-born American mathematician, computer scientist, and educator, who spent most of his career teaching and researching at MIT.

New!!: Parallel computing and Seymour Papert · See more »

Shared memory

In computer science, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies.

New!!: Parallel computing and Shared memory · See more »

Signal processing

Signal processing concerns the analysis, synthesis, and modification of signals, which are broadly defined as functions conveying "information about the behavior or attributes of some phenomenon", such as sound, images, and biological measurements.

New!!: Parallel computing and Signal processing · See more »

SISAL
SISAL ("Streams and Iteration in a Single Assignment Language") is a general-purpose single assignment functional programming language with strict semantics, implicit parallelism, and efficient array handling.

New!!: Parallel computing and SISAL · See more »

Society of Mind

The Society of Mind is both the title of a 1986 book and the name of a theory of natural intelligence as written and developed by Marvin Minsky.

New!!: Parallel computing and Society of Mind · See more »

Software
Computer software, or simply software, is a generic term that refers to a collection of data or computer instructions that tell the computer how to work, in contrast to the physical hardware from which the system is built, that actually performs the work.

New!!: Parallel computing and Software · See more »

Software bug

A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.

New!!: Parallel computing and Software bug · See more »

Software lockout

In multiprocessor computer systems, software lockout is the issue of performance degradation due to the idle wait times spent by the CPUs in kernel-level critical sections.

New!!: Parallel computing and Software lockout · See more »

Software transactional memory

In computer science, software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing.

New!!: Parallel computing and Software transactional memory · See more »

Sony

Sony is a Japanese multinational conglomerate corporation headquartered in Kōnan, Minato, Tokyo.

New!!: Parallel computing and Sony · See more »

Sorting algorithm

In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order.

New!!: Parallel computing and Sorting algorithm · See more »
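
As an illustrative sketch, merge sort is a classic sorting algorithm whose divide-and-conquer structure sorts the two halves independently, which is also what makes it a natural candidate for parallel execution:

```python
# Minimal merge sort: split, sort each half, merge the sorted halves.
def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])   # one side is exhausted
    return out

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

The two recursive calls touch disjoint data, so a parallel variant could hand each half to a separate worker and only synchronize at the merge.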

Speedup

In computer architecture, speedup is a number that measures the relative performance of two systems processing the same problem.

New!!: Parallel computing and Speedup · See more »
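
A small worked sketch: speedup is the ratio of the old running time to the new one, and Amdahl's law (listed elsewhere in this index) bounds it when only a fraction p of the work can be parallelized:

```python
# speedup = T_old / T_new; Amdahl's law bounds it for n processors
# when fraction p of the work is parallelizable.
def speedup(t_old, t_new):
    return t_old / t_new

def amdahl(p, n):
    # Serial fraction (1 - p) runs as-is; parallel fraction shrinks by n.
    return 1.0 / ((1.0 - p) + p / n)

print(speedup(10.0, 2.5))        # 4.0
print(round(amdahl(0.95, 8), 2)) # well below the ideal speedup of 8
```

Note that even with unlimited processors, amdahl(0.95, n) can never exceed 1 / 0.05 = 20.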

Star network

A star network is one of the most common computer network topologies.

New!!: Parallel computing and Star network · See more »

Streaming SIMD Extensions

In computing, Streaming SIMD Extensions (SSE) is an SIMD instruction set extension to the x86 architecture, designed by Intel and introduced in 1999 in their Pentium III series of processors shortly after the appearance of AMD's 3DNow!.

New!!: Parallel computing and Streaming SIMD Extensions · See more »

Supercomputer

A supercomputer is a computer with a high level of performance compared to a general-purpose computer.

New!!: Parallel computing and Supercomputer · See more »

Superscalar processor

A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor.

New!!: Parallel computing and Superscalar processor · See more »

Symmetric multiprocessing

Symmetric multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to a single, shared main memory, have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes.

New!!: Parallel computing and Symmetric multiprocessing · See more »

Synchronization (computer science)

In computer science, synchronization refers to one of two distinct but related concepts: synchronization of processes and synchronization of data.

New!!: Parallel computing and Synchronization (computer science) · See more »
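
As a minimal illustrative sketch (names hypothetical), process synchronization is typically enforced with a mutual-exclusion lock around the critical section; without it, the read-modify-write on the counter could interleave and lose updates:

```python
# A lock serializes the read-modify-write so no increment is lost.
import threading

count = 0
lock = threading.Lock()

def add(n):
    global count
    for _ in range(n):
        with lock:           # critical section: at most one thread inside
            count += 1

def locked_count(workers=4, per_worker=10000):
    global count
    count = 0
    ts = [threading.Thread(target=add, args=(per_worker,))
          for _ in range(workers)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return count

print(locked_count())  # always workers * per_worker with the lock held
```
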

Synchronous programming language

A synchronous programming language is a computer programming language optimized for programming reactive systems.

New!!: Parallel computing and Synchronous programming language · See more »

System C

System C Healthcare Limited is a British supplier of health information technology solutions and services, based in Maidstone, Kent, specialising in the health and social care sectors.

New!!: Parallel computing and System C · See more »

SystemC

SystemC is a set of C++ classes and macros which provide an event-driven simulation interface (see also discrete event simulation).

New!!: Parallel computing and SystemC · See more »

Systolic array

In parallel computer architectures, a systolic array is a homogeneous network of tightly coupled data processing units (DPUs) called cells or nodes.

New!!: Parallel computing and Systolic array · See more »

Task parallelism

Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments.

New!!: Parallel computing and Task parallelism · See more »
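
To illustrate the distinction (a sketch with hypothetical names): task parallelism runs *different* tasks concurrently, in contrast to data parallelism, which runs the same task on different pieces of data. The standard-library thread pool makes this concrete:

```python
# Two different functions run as concurrent tasks over the same input.
from concurrent.futures import ThreadPoolExecutor

def summarize(xs):
    return sum(xs)

def extremes(xs):
    return min(xs), max(xs)

def analyze(xs):
    with ThreadPoolExecutor() as pool:
        total = pool.submit(summarize, xs)   # task 1
        lo_hi = pool.submit(extremes, xs)    # task 2: a different function
        return total.result(), lo_hi.result()

print(analyze([3, 1, 4, 1, 5]))  # (14, (1, 5))
```
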

Tejas and Jayhawk

Tejas was the code name for an Intel microprocessor that was to succeed the Prescott-core Pentium 4.

New!!: Parallel computing and Tejas and Jayhawk · See more »

Temporal logic of actions

Temporal logic of actions (TLA) is a logic developed by Leslie Lamport, which combines temporal logic with a logic of actions.

New!!: Parallel computing and Temporal logic of actions · See more »

Temporal multithreading

Temporal multithreading is one of the two main forms of multithreading that can be implemented on computer processor hardware, the other being simultaneous multithreading.

New!!: Parallel computing and Temporal multithreading · See more »

The Future of the Mind

The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind is a popular science book by the futurist and physicist Michio Kaku.

New!!: Parallel computing and The Future of the Mind · See more »

Thomas Sterling (computing)

Thomas Sterling is Professor of Computer Science at Indiana University, a Faculty Associate at California Institute of Technology, and a Distinguished Visiting Scientist at Oak Ridge National Laboratory.

New!!: Parallel computing and Thomas Sterling (computing) · See more »

Thread (computing)

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.

New!!: Parallel computing and Thread (computing) · See more »
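
As a minimal sketch of the definition above: each thread is a sequence of instructions the scheduler manages independently, and `join()` blocks until a thread's work is done (all names below are illustrative):

```python
# Spawn one thread per item; the scheduler interleaves them freely.
import threading

def spawn(names):
    results = []                      # threads share this list
    def worker(n):
        results.append(n)
    threads = [threading.Thread(target=worker, args=(n,)) for n in names]
    for t in threads:
        t.start()                     # hand the thread to the scheduler
    for t in threads:
        t.join()                      # wait for it to finish
    return sorted(results)            # completion order is not guaranteed

print(spawn(["a", "b", "c"]))  # ['a', 'b', 'c']
```

The `sorted()` call is the point: the operating system decides the interleaving, so only the set of results, not their order, is deterministic.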

Time-sharing

In computing, time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multi-tasking at the same time.

New!!: Parallel computing and Time-sharing · See more »

Tomasulo algorithm

Tomasulo’s algorithm is a computer architecture hardware algorithm for dynamic scheduling of instructions that allows out-of-order execution and enables more efficient use of multiple execution units.

New!!: Parallel computing and Tomasulo algorithm · See more »

TOP500

The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world.

New!!: Parallel computing and TOP500 · See more »

Trace theory

In mathematics and computer science, trace theory aims to provide a concrete mathematical underpinning for the study of concurrent computation and process calculi.

New!!: Parallel computing and Trace theory · See more »

Transistor

A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power.

New!!: Parallel computing and Transistor · See more »

Transputer

The transputer is a series of pioneering microprocessors from the 1980s, featuring integrated memory and serial communication links, intended for parallel computing.

New!!: Parallel computing and Transputer · See more »

Tree (graph theory)

In mathematics, and more specifically in graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one path.

New!!: Parallel computing and Tree (graph theory) · See more »
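
The definition above is equivalent to saying a tree on n vertices is connected and has exactly n − 1 edges; a small illustrative check (hypothetical helper name) verifies both conditions with a breadth-first search:

```python
# A graph is a tree iff it has n - 1 edges and is connected
# (together these rule out cycles).
from collections import deque

def is_tree(n, edges):
    if len(edges) != n - 1:
        return False
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, q = {0}, deque([0])
    while q:                      # BFS from vertex 0
        v = q.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return len(seen) == n         # connected + n-1 edges => tree

print(is_tree(4, [(0, 1), (1, 2), (1, 3)]))  # True
print(is_tree(4, [(0, 1), (1, 2), (2, 0)]))  # False: cycle, vertex 3 isolated
```
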

Uniform memory access

Uniform memory access (UMA) is a shared memory architecture used in parallel computers.

New!!: Parallel computing and Uniform memory access · See more »

United States Air Force

The United States Air Force (USAF) is the aerial and space warfare service branch of the United States Armed Forces.

New!!: Parallel computing and United States Air Force · See more »

Unstructured grid

An unstructured (or irregular) grid is a tessellation of a part of the Euclidean plane or Euclidean space by simple shapes, such as triangles or tetrahedra, in an irregular pattern.

New!!: Parallel computing and Unstructured grid · See more »

Upper and lower bounds

In mathematics, especially in order theory, an upper bound of a subset S of some partially ordered set (K, ≤) is an element of K that is greater than or equal to every element of S; the term lower bound is defined dually as an element of K that is less than or equal to every element of S. A set with an upper bound is said to be bounded from above by that bound; a set with a lower bound is said to be bounded from below by that bound.

New!!: Parallel computing and Upper and lower bounds · See more »
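
A tiny worked example of the definition, taking the poset to be the integers under ≤ (helper names are illustrative):

```python
# k is an upper bound of S iff k >= s for every s in S; dually for lower.
def is_upper_bound(k, S):
    return all(k >= s for s in S)

def is_lower_bound(k, S):
    return all(k <= s for s in S)

S = {3, 7, 5}
print(is_upper_bound(7, S), is_upper_bound(6, S))  # True False
print(is_lower_bound(3, S))                        # True
```

Here 7, 8, 9, … are all upper bounds of S; 7 = max(S) is the least of them.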

Variable (computer science)

In computer programming, a variable or scalar is a storage location (identified by a memory address) paired with an associated symbolic name (an identifier), which contains some known or unknown quantity of information referred to as a value.

New!!: Parallel computing and Variable (computer science) · See more »

Vector processor

In computing, a vector processor or array processor is a central processing unit (CPU) that implements an instruction set containing instructions that operate on one-dimensional arrays of data called vectors, compared to scalar processors, whose instructions operate on single data items.

New!!: Parallel computing and Vector processor · See more »
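
To make the scalar/vector contrast concrete (a sketch; the loop below is exactly what a single vector instruction replaces in hardware), the standard-library array module gives a compact typed one-dimensional array:

```python
# Elementwise addition over whole vectors; a scalar CPU issues one add
# per element, a vector CPU issues one instruction for the whole array.
from array import array

def vec_add(a, b):
    return array(a.typecode, (x + y for x, y in zip(a, b)))

a = array('d', [1.0, 2.0, 3.0])
b = array('d', [10.0, 20.0, 30.0])
print(list(vec_add(a, b)))  # [11.0, 22.0, 33.0]
```
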

Verilog

Verilog, standardized as IEEE 1364, is a hardware description language (HDL) used to model electronic systems.

New!!: Parallel computing and Verilog · See more »

Very-large-scale integration

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining hundreds of thousands of transistors or devices into a single chip.

New!!: Parallel computing and Very-large-scale integration · See more »

VHDL

VHDL (VHSIC Hardware Description Language) is a hardware description language used in electronic design automation to describe digital and mixed-signal systems such as field-programmable gate arrays and integrated circuits.

New!!: Parallel computing and VHDL · See more »

Voltage

Voltage, electric potential difference, electric pressure or electric tension (formally denoted ∆V or ∆U, but more often simply as V or U, for instance in the context of Ohm's or Kirchhoff's circuit laws) is the difference in electric potential between two points.

New!!: Parallel computing and Voltage · See more »

Word (computer architecture)

In computing, a word is the natural unit of data used by a particular processor design.

New!!: Parallel computing and Word (computer architecture) · See more »

x86-64

x86-64 (also known as x64, x86_64, AMD64 and Intel 64) is the 64-bit version of the x86 instruction set.

New!!: Parallel computing and X86-64 · See more »

Yale Patt

Yale Nance Patt is an American professor of electrical and computer engineering at The University of Texas at Austin.

New!!: Parallel computing and Yale Patt · See more »

16-bit

16-bit microcomputers are computers in which 16-bit microprocessors were the norm.

New!!: Parallel computing and 16-bit · See more »

4-bit

A group of four bits is also called a nibble and has 2⁴ = 16 possible values.

New!!: Parallel computing and 4-bit · See more »
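
A one-line check of the count above, plus the standard observation that one hexadecimal digit encodes exactly one nibble (helper name illustrative):

```python
# A nibble is 4 bits: 2**4 = 16 values (0..15), i.e. one hex digit.
NIBBLE_VALUES = 2 ** 4

def nibbles(byte):
    # Split an 8-bit value into its high and low nibbles.
    return byte >> 4, byte & 0x0F

print(NIBBLE_VALUES)  # 16
print(nibbles(0xAB))  # (10, 11): hex digits A and B
```
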

64-bit computing

In computer architecture, 64-bit computing is the use of processors that have datapath widths, integer size, and memory address widths of 64 bits (eight octets).

New!!: Parallel computing and 64-bit computing · See more »

8-bit

8-bit also refers to a generation of microcomputers in which 8-bit microprocessors were the norm.

New!!: Parallel computing and 8-bit · See more »

Redirects here:

Computer Parallelism, Concurrent (programming), Concurrent event, Concurrent language, Concurrent process, History of parallel computing, Message-driven parallel programming, Multicomputer, Multiple processing elements, Parallel Computing, Parallel Programming, Parallel architecture, Parallel code, Parallel computation, Parallel computer, Parallel computer hardware, Parallel computers, Parallel execution units, Parallel hardware, Parallel language, Parallel machine, Parallel processing (computing), Parallel processing computer, Parallel processor, Parallel program, Parallel programming, Parallel programming language, Parallelisation, Parallelism (computing), Parallelization, Parallelized, Parellel computing.


[1] https://en.wikipedia.org/wiki/Parallel_computing
