168 relations: A-law algorithm, Absolute threshold of hearing, Algorithm, Algorithmic information theory, Apple Lossless, Arithmetic coding, Audicom, Audio coding format, Audio editing software, Audio file format, Audio Lossless Coding, Audio signal processing, Auditory masking, Auditory system, Bandwidth (computing), Bell Labs, Bit, Bit rate, Blu-ray, Broadcast automation, Burrows–Wheeler transform, Cabinet (file format), Carnegie Mellon University, Claude Shannon, Codec, Coding theory, Compact disc, Compression artifact, Compression of Genomic Re-Sequencing Data, Computational resource, Context-adaptive binary arithmetic coding, Convolution, Data compression, Data differencing, Data file, Data transmission, David A. Huffman, Deblocking filter, Decorrelation, DEFLATE, Delta encoding, Digital camera, Digital container format, Discrete cosine transform, Discrete wavelet transform, Dolby TrueHD, DVD, DVD-Audio, Dynamic range compression, Electronic hardware, ..., Entropy (information theory), Entropy encoding, Equal-loudness contour, Exabyte, Feature (machine learning), Final Fantasy XII, Finite-state machine, FLAC, Forward error correction, Fractal compression, Frequency domain, Gary Sullivan (engineer), Generation loss, GIF, Grammar-based code, Gzip, H.261, H.263, H.264/MPEG-4 AVC, Hadamard transform, HD DVD, Hearing, High Efficiency Video Coding, High fidelity, HTTP compression, Huffman coding, IBM Personal Computer, Image compression, Information, Information theory, Institute of Electrical and Electronics Engineers, Inter frame, International HapMap Project, International Organization for Standardization, Internet, Intra-frame coding, ITU-T, JPEG, K. R. Rao, Kolmogorov complexity, Kullback–Leibler divergence, Latency (engineering), Lempel–Ziv–Markov chain algorithm, Lempel–Ziv–Welch, Line code, Linear prediction, Linear predictive coding, Lossless compression, Lossy compression, Luminance, LZ77 and LZ78, Machine learning, Matching pursuit, Meridian Lossless Packing, Minimum description length, Modified discrete cosine transform, Modulo-N code, Monkey's Audio, Motion compensation, Motion vector, MP3, MPEG-2, MPEG-4, MPEG-4 SLS, N. Ahmed, OptimFROG, Pattern recognition, Pixel, PKZIP, Portable Network Graphics, Posterior probability, Prediction by partial matching, Probability distribution, Proceedings of the IEEE, Psychoacoustics, Pulse-code modulation, Quantization (image processing), Quantization (signal processing), Randomized algorithm, Range encoding, Rate–distortion theory, Redundancy (information theory), Residual frame, Run-length encoding, Self-information, Sequitur algorithm, Shannon–Fano coding, Shorten (file format), Signal processing, Software, Sound quality, Space–time tradeoff, Speech coding, Statistical inference, Statistical model, Sub-band coding, Super Audio CD, Terry Welch, Thomas Wiegand, Time domain, Trade-off, TTA (codec), Uncompressed video, Universal code (data compression), University of Buenos Aires, Variable bitrate, Vector quantization, Video codec, Video coding format, Video quality, Voice over IP, Vorbis, Waveform, Wavelet transform, WavPack, White noise, Windows Media Audio, Zstandard.
An A-law algorithm is a standard companding algorithm, used in European 8-bit PCM digital communications systems to optimize, i.e. modify, the dynamic range of an analog signal for digitizing.
The absolute threshold of hearing (ATH) is the minimum sound level of a pure tone that an average human ear with normal hearing can hear with no other sound present.
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems.
Algorithmic information theory is a subfield of information theory and computer science that concerns itself with the relationship between computation and information.
Apple Lossless, also known as Apple Lossless Audio Codec (ALAC), or Apple Lossless Encoder (ALE), is an audio coding format, and its reference audio codec implementation, developed by Apple Inc. for lossless data compression of digital music.
Arithmetic coding is a form of entropy encoding used in lossless data compression.
Audicom (from “Audio en computadora”, Spanish for “Audio in computer”), released in 1989, was the world's first PC-based broadcast automation system to use audio data compression technology based on psychoacoustics.
An audio coding format (or sometimes audio compression format) is a content representation format for storage or transmission of digital audio (such as in digital television, digital radio and in audio and video files).
Audio editing software is software which allows editing and generating of audio data.
An audio file format is a file format for storing digital audio data on a computer system.
MPEG-4 Audio Lossless Coding, also known as MPEG-4 ALS, is an extension to the MPEG-4 Part 3 audio standard to allow lossless audio compression.
Audio signal processing or audio processing is the intentional alteration of audio signals often through an audio effect or effects unit.
Auditory masking occurs when the perception of one sound is affected by the presence of another sound.
The auditory system is the sensory system for the sense of hearing.
In computing, bandwidth is the maximum rate of data transfer across a given path.
Nokia Bell Labs (formerly named AT&T Bell Laboratories, Bell Telephone Laboratories and Bell Labs) is an American research and scientific development company, owned by Finnish company Nokia.
The bit (a portmanteau of binary digit) is a basic unit of information used in computing and digital communications.
In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time.
Blu-ray or Blu-ray Disc (BD) is a digital optical disc data storage format.
Broadcast automation incorporates the use of broadcast programming technology to automate broadcasting operations.
The Burrows–Wheeler transform (BWT, also called block-sorting compression) rearranges a character string into runs of similar characters.
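As a rough illustration of the idea (not an efficient implementation, which would use suffix arrays), a naive Python sketch can sort all rotations of the input and keep the last column; the `bwt` function and the sentinel character below are illustrative choices, not part of any standard library:

```python
# Naive Burrows-Wheeler transform: sort every rotation of the input and
# take the last column, which tends to group identical characters into runs.
def bwt(s: str, sentinel: str = "\0") -> str:
    s = s + sentinel                                    # unique end-of-string marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

print(repr(bwt("banana")))  # 'annb\x00aa' -- repeated characters end up adjacent
```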
Cabinet (or CAB) is an archive-file format for Microsoft Windows that supports lossless data compression and embedded digital certificates used for maintaining archive integrity.
Carnegie Mellon University (commonly known as CMU) is a private research university in Pittsburgh, Pennsylvania.
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory".
A codec is a device or computer program for encoding or decoding a digital data stream or signal.
Coding theory is the study of the properties of codes and their respective fitness for specific applications.
Compact disc (CD) is a digital optical disc data storage format that was co-developed by Philips and Sony and released in 1982.
A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression.
High-throughput sequencing technologies have led to a dramatic decline of genome sequencing costs and to an astonishingly rapid accumulation of genomic data.
In computational complexity theory, a computational resource is a resource used by some computational models in the solution of computational problems.
Context-adaptive binary arithmetic coding (CABAC) is a form of entropy encoding used in the H.264/MPEG-4 AVC and High Efficiency Video Coding (HEVC) standards.
In mathematics (and, in particular, functional analysis), convolution is a mathematical operation on two functions f and g that produces a third function, typically viewed as a modified version of one of the original functions; it gives the integral of the pointwise multiplication of the two functions as a function of the amount by which one of them is translated.
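For two integrable functions f and g on the real line, the operation described above is conventionally written as the following integral (a standard definition, stated here for reference):

```latex
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau
```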
In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation.
In computer science and information theory, data differencing or differential compression is producing a technical description of the difference between two sets of data – a source and a target.
A data file is a computer file which stores data to be used by a computer application or system.
Data transmission (also data communication or digital communications) is the transfer of data (a digital bitstream or a digitized analog signal) over a point-to-point or point-to-multipoint communication channel.
David Albert Huffman (August 9, 1925 – October 7, 1999) was a pioneer in computer science, known for his Huffman coding.
A deblocking filter is a video filter applied to decoded compressed video to improve visual quality and prediction performance by smoothing the sharp edges which can form between macroblocks when block coding techniques are used.
Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal.
In computing, Deflate is a lossless data compression algorithm and associated file format that uses a combination of the LZ77 algorithm and Huffman coding.
Delta encoding is a way of storing or transmitting data in the form of differences (deltas) between sequential data rather than complete files; more generally this is known as data differencing.
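A minimal sketch of the idea in Python (the function names are illustrative, not a standard API): the first value is stored as-is and every later value as its difference from the previous one, which a cumulative sum undoes exactly:

```python
def delta_encode(values):
    """Keep the first value, then store each successive difference."""
    if not values:
        return []
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    """Rebuild the original sequence with a running (cumulative) sum."""
    out, total = [], 0
    for d in deltas:
        total += d
        out.append(total)
    return out

samples = [100, 101, 103, 103, 106]
assert delta_encode(samples) == [100, 1, 2, 0, 3]
assert delta_decode(delta_encode(samples)) == samples
```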
A digital camera or digicam is a camera that captures photographs in digital memory.
A container or wrapper format is a metafile format whose specification describes how different elements of data and metadata coexist in a computer file.
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies.
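In its most common form (the DCT-II, used for example in JPEG), the transform of a sequence x_0, ..., x_{N-1} can be written as:

```latex
X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[ \frac{\pi}{N} \left( n + \tfrac{1}{2} \right) k \right],
\qquad k = 0, \ldots, N-1
```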
In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled.
Dolby TrueHD is a lossless multi-channel audio codec developed by Dolby Laboratories which is used in home-entertainment equipment such as Blu-ray Disc players and A/V receivers.
DVD (an abbreviation of "digital video disc" or "digital versatile disc") is a digital optical disc storage format invented and developed by Philips and Sony in 1995.
DVD-Audio (commonly abbreviated as DVD-A) is a digital format for delivering high-fidelity audio content on a DVD.
Dynamic range compression (DRC) or simply compression is an audio signal processing operation that reduces the volume of loud sounds or amplifies quiet sounds thus reducing or compressing an audio signal's dynamic range.
Electronic hardware consists of interconnected electronic components which perform analog or logic operations on received and locally stored information in order to produce new information as output, to store it, or to provide control signals for output actuator mechanisms.
Information entropy is the average rate at which information is produced by a stochastic source of data.
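For a discrete source emitting symbols x_i with probabilities p(x_i), this average information rate is given by Shannon's entropy formula, which also sets the lower bound on the average number of bits per symbol achievable by lossless compression:

```latex
H(X) = -\sum_{i} p(x_i) \log_2 p(x_i) \quad \text{bits per symbol}
```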
In information theory an entropy encoding is a lossless data compression scheme that is independent of the specific characteristics of the medium.
An equal-loudness contour is a measure of sound pressure (dB SPL), over the frequency spectrum, for which a listener perceives a constant loudness when presented with pure steady tones.
The exabyte is a multiple of the unit byte for digital information.
In machine learning and pattern recognition, a feature is an individual measurable property or characteristic of a phenomenon being observed.
Final Fantasy XII is a fantasy role-playing video game developed and published by Square Enix for the PlayStation 2 home video game console.
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation.
FLAC (Free Lossless Audio Codec) is an audio coding format for lossless compression of digital audio, and is also the name of the free software project producing the FLAC tools, the reference software package that includes a codec implementation.
In telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.
Fractal compression is a lossy compression method for digital images, based on fractals.
In electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time.
Gary Joseph Sullivan (born 1960) is an American electrical engineer who led the development of the H.264/MPEG-4 AVC and HEVC video coding standards and created the DirectX Video Acceleration (DXVA) API/DDI video decoding feature of the Microsoft Windows operating system.
Generation loss is the loss of quality between subsequent copies or transcodes of data.
The Graphics Interchange Format, better known by its acronym GIF, is a bitmap image format that was developed by a team at the bulletin board service (BBS) provider CompuServe led by American computer scientist Steve Wilhite on June 15, 1987.
Grammar-based codes or Grammar-based compression are compression algorithms based on the idea of constructing a context-free grammar (CFG) for the string to be compressed.
gzip is a file format and a software application used for file compression and decompression.
H.261 is an ITU-T video compression standard, first ratified in November 1988.
H.263 is a video compression standard originally designed as a low-bit-rate compressed format for videoconferencing.
H.264 or MPEG-4 Part 10, Advanced Video Coding (MPEG-4 AVC) is a block-oriented motion-compensation-based video compression standard.
The Hadamard transform (also known as the Walsh–Hadamard transform, Hadamard–Rademacher–Walsh transform, Walsh transform, or Walsh–Fourier transform) is an example of a generalized class of Fourier transforms.
HD DVD (short for High Definition Digital Versatile Disc) is a discontinued high-density optical disc format for storing data and playback of high-definition video.
Hearing, or auditory perception, is the ability to perceive sounds by detecting vibrations, changes in the pressure of the surrounding medium through time, through an organ such as the ear.
High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard, one of several potential successors to the widely used AVC (H.264 or MPEG-4 Part 10).
High fidelity (often shortened to hi-fi or hifi) is a term used by listeners, audiophiles and home audio enthusiasts to refer to high-quality reproduction of sound.
HTTP compression is a capability that can be built into web servers and web clients to improve transfer speed and bandwidth utilization.
In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression.
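As an informal sketch (not a reference implementation), the standard greedy construction repeatedly merges the two least-frequent symbols; the small Python example below builds a code table from symbol frequencies:

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Greedy Huffman construction: repeatedly merge the two lightest nodes.
    Frequent symbols end up with shorter bit strings."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

print(huffman_codes("abracadabra"))  # e.g. {'a': '0', 'r': '10', 'b': '110', ...}
```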
The IBM Personal Computer, commonly known as the IBM PC, is the original version and progenitor of the IBM PC compatible hardware platform.
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission.
Information is any entity or form that provides the answer to a question of some kind or resolves uncertainty.
Information theory studies the quantification, storage, and communication of information.
The Institute of Electrical and Electronics Engineers (IEEE) is a professional association with its corporate office in New York City and its operations center in Piscataway, New Jersey.
An inter frame is a frame in a video compression stream which is expressed in terms of one or more neighboring frames.
The International HapMap Project was an organization that aimed to develop a haplotype map (HapMap) of the human genome, to describe the common patterns of human genetic variation.
The International Organization for Standardization (ISO) is an international standard-setting body composed of representatives from various national standards organizations.
The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide.
Intra-frame coding is used in video coding (compression); it codes each frame using only information contained within that frame, without reference to neighboring frames.
The ITU Telecommunication Standardization Sector (ITU-T) is one of the three sectors (divisions or units) of the International Telecommunication Union (ITU); it coordinates standards for telecommunications.
JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography.
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of the shortest computer program (in a predetermined programming language) that produces the object as output.
In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy) is a measure of how one probability distribution diverges from a second, expected probability distribution.
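For discrete distributions P and Q over the same sample space, it is defined as follows (a standard formula, shown for reference); it is non-negative and equals zero only when the two distributions coincide:

```latex
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}
```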
Latency is a time interval between the stimulation and response, or, from a more general point of view, a time delay between the cause and the effect of some physical change in the system being observed.
The Lempel–Ziv–Markov chain algorithm (LZMA) is an algorithm used to perform lossless data compression.
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch.
In telecommunications, a line code is a pattern of voltage, current, or light used to represent digital data on a transmission line; some signals are more prone to error than others, as the physics of the communication or storage medium constrains the repertoire of signals that can be used reliably.
Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.
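With p previous samples and prediction coefficients a_1, ..., a_p, the estimate takes the form below; in predictive compression schemes such as LPC or FLAC, only the (typically small) residual x(n) − x̂(n) needs to be stored:

```latex
\hat{x}(n) = \sum_{i=1}^{p} a_i \, x(n - i)
```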
Linear predictive coding (LPC) is a tool used mostly in audio signal processing and speech processing for representing the spectral envelope of a digital signal of speech in compressed form, using the information of a linear predictive model.
Lossless compression is a class of data compression algorithms that allows the original data to be perfectly reconstructed from the compressed data.
In information technology, lossy compression or irreversible compression is the class of data encoding methods that uses inexact approximations and partial data discarding to represent the content.
Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction.
LZ77 and LZ78 are the two lossless data compression algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978.
Machine learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to "learn" (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.
Matching pursuit (MP) is a sparse approximation algorithm which involves finding the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary D. The basic idea is to approximately represent a signal f from Hilbert space H as a weighted sum of finitely many functions g_{\gamma_n} (called atoms) taken from D. An approximation with N atoms has the form f \approx \sum_{n=1}^{N} a_n g_{\gamma_n}, where a_n is the scalar weighting factor (amplitude) for the atom g_{\gamma_n} \in D. Normally, not every atom in D will be used in this sum.
Meridian Lossless Packing, also known as Packed PCM (PPCM), is a lossless compression technique for compressing PCM audio data developed by Meridian Audio, Ltd.
The minimum description length (MDL) principle is a formalization of Occam's razor in which the best hypothesis (a model and its parameters) for a given set of data is the one that leads to the best compression of the data.
The modified discrete cosine transform (MDCT) is a lapped transform based on the type-IV discrete cosine transform (DCT-IV), with the additional property of being lapped: it is designed to be performed on consecutive blocks of a larger dataset, where subsequent blocks are overlapped so that the last half of one block coincides with the first half of the next block.
Modulo-N code is a lossy compression algorithm used to compress correlated data sources using modulo arithmetic.
Monkey's Audio is an algorithm and file format for lossless audio data compression.
Motion compensation is an algorithmic technique used to predict a frame in a video, given the previous and/or future frames by accounting for motion of the camera and/or objects in the video.
In video compression, a motion vector is the key element in the motion estimation process: it represents the displacement of a block in the current frame relative to its best-matching position in a reference frame.
MP3 (formally MPEG-1 Audio Layer III or MPEG-2 Audio Layer III) is an audio coding format for digital audio.
MPEG-2 (a.k.a. H.222/H.262 as defined by the ITU) is a standard for "the generic coding of moving pictures and associated audio information".
MPEG-4 is a method of defining compression of audio and visual (AV) digital data.
MPEG-4 SLS, or MPEG-4 Scalable to Lossless as per ISO/IEC 14496-3:2005/Amd 3:2006 (Scalable Lossless Coding), is an extension to the MPEG-4 Part 3 (MPEG-4 Audio) standard to allow lossless audio compression scalable to lossy MPEG-4 General Audio coding methods (e.g., variations of AAC).
Nasir Ahmed (born 1940 in Bangalore, India) is a Professor Emeritus of Electrical and Computer Engineering at the University of New Mexico (UNM).
OptimFROG is a proprietary lossless audio data compression codec developed by Florin Ghido.
Pattern recognition is a branch of machine learning that focuses on the recognition of patterns and regularities in data, although it is in some cases considered to be nearly synonymous with machine learning.
In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the smallest addressable element in an all-points-addressable display device; it is therefore the smallest controllable element of a picture represented on the screen.
PKZIP is a file archiving computer program, notable for introducing the popular ZIP file format.
Portable Network Graphics (PNG) is a raster graphics file format that supports lossless data compression.
In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence or background is taken into account.
Prediction by partial matching (PPM) is an adaptive statistical data compression technique based on context modeling and prediction.
In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.
The Proceedings of the IEEE is a monthly peer-reviewed scientific journal published by the Institute of Electrical and Electronics Engineers (IEEE).
Psychoacoustics is the scientific study of sound perception and audiology.
Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals.
Quantization, involved in image processing, is a lossy compression technique achieved by compressing a range of values to a single quantum value.
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set.
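A minimal illustration in Python of a uniform quantizer; the function name and step size are arbitrary choices for the example, not a standard API:

```python
def quantize(value: float, step: float) -> float:
    """Map a continuous value to the nearest multiple of `step`.
    The set of outputs is countable, so information is irreversibly discarded."""
    return step * round(value / step)

samples = [0.10, 0.49, 0.80, 1.13]
print([quantize(s, step=0.25) for s in samples])  # [0.0, 0.5, 0.75, 1.25]
```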
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic.
Range encoding is an entropy coding method defined by G. Nigel N. Martin in a paper presented at the Video & Data Recording Conference, Southampton, UK, July 24–27, 1979.
Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding a given distortion D.
In information theory, redundancy measures the fractional difference between the entropy of an ensemble and its maximum possible value \log(|\mathcal{X}|).
In video compression algorithms a residual frame is formed by subtracting the reference frame from the desired frame.
Run-length encoding (RLE) is a very simple form of lossless data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run.
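A minimal Python sketch of the idea (illustrative helper names, not a library API): each run becomes a (symbol, count) pair, and decoding simply repeats each symbol count times:

```python
from itertools import groupby

def rle_encode(data: str):
    """Collapse each run of identical symbols into a (symbol, count) pair."""
    return [(sym, len(list(run))) for sym, run in groupby(data)]

def rle_decode(pairs) -> str:
    """Expand each (symbol, count) pair back into a run."""
    return "".join(sym * count for sym, count in pairs)

encoded = rle_encode("WWWWWBWWW")
print(encoded)                        # [('W', 5), ('B', 1), ('W', 3)]
assert rle_decode(encoded) == "WWWWWBWWW"
```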
In information theory, the self-information or surprisal of an event is a measure of the surprise, or information content, associated with observing that event; less probable outcomes carry more self-information.
Sequitur (or Nevill-Manning algorithm) is a recursive algorithm developed by Craig Nevill-Manning and Ian H. Witten in 1997 that infers a hierarchical structure (context-free grammar) from a sequence of discrete symbols.
In the field of data compression, Shannon–Fano coding, named after Claude Shannon and Robert Fano, is a technique for constructing a prefix code based on a set of symbols and their probabilities (estimated or measured).
Shorten (SHN) is a file format used for compressing audio data.
Signal processing concerns the analysis, synthesis, and modification of signals, which are broadly defined as functions conveying "information about the behavior or attributes of some phenomenon", such as sound, images, and biological measurements.
Computer software, or simply software, is a generic term that refers to a collection of data or computer instructions that tell the computer how to work, in contrast to the physical hardware from which the system is built and which actually performs the work.
Sound quality is typically an assessment of the accuracy, enjoyability, or intelligibility of audio output from an electronic device.
A space–time or time–memory trade-off in computer science is a case where an algorithm or program trades increased space usage with decreased time.
Speech coding is an application of data compression of digital audio signals containing speech.
Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution.
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of some sample data and similar data from a larger population.
In signal processing, sub-band coding (SBC) is any form of transform coding that breaks a signal into a number of different frequency bands, typically by using a bank of band-pass filters, and encodes each band independently.
Super Audio CD (SACD) is a read-only optical disc for audio storage, introduced in 1999.
Terry Archer Welch was an American computer scientist.
Thomas Wiegand (born 6 May 1970 in Wismar) is a German electrical engineer who substantially contributed to the creation of the H.264/MPEG-4 AVC and H.265/MPEG-H HEVC video coding standards.
Time domain is the analysis of mathematical functions, physical signals or time series of economic or environmental data, with respect to time.
A trade-off (or tradeoff) is a situational decision that involves diminishing or losing one quality, quantity or property of a set or design in return for gains in other aspects.
True Audio (TTA) is a lossless compressor for multichannel 8-, 16- and 24-bit audio data.
Uncompressed video is digital video that either has never been compressed or was generated by decompressing previously compressed digital video.
In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned.
The University of Buenos Aires (Universidad de Buenos Aires, UBA) is the largest university in Argentina and the second largest university by enrollment in Latin America.
Variable bitrate (VBR) is a term used in telecommunications and computing that relates to the bitrate used in sound or video encoding.
Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors.
A video codec is an electronic circuit or software that compresses or decompresses digital video.
A video coding format (or sometimes video compression format) is a content representation format for storage or transmission of digital video content (such as in a data file or bitstream).
Video quality is a characteristic of a video passed through a video transmission/processing system, a formal or informal measure of perceived video degradation (typically, compared to the original video).
Voice over Internet Protocol (also voice over IP, VoIP or IP telephony) is a methodology and group of technologies for the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such as the Internet.
Vorbis is a free and open-source software project headed by the Xiph.Org Foundation that produces an audio coding format and reference codec for lossy audio compression.
A waveform is the shape and form of a signal such as a wave moving in a physical medium or an abstract representation.
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet.
WavPack is a free and open-source lossless audio compression format.
In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density.
Windows Media Audio (WMA) is the name of a series of audio codecs and their corresponding audio coding formats developed by Microsoft.
Zstandard (or Zstd) is a lossless data compression algorithm developed by Yann Collet at Facebook.
Audio compression (data), Audio data compression, Bit-rate reduction, Block compression, Coding techniques, Compressed data, Compressed digital video, Compressed video, Compression algorithm, Compression algorithms, Compression program, Compression software, Compression utility, Data Compression, Data compression algorithm, Data compression/multimedia compression, Data decompression, Datacompression, Digital audio compression, Digital video compression, File compressing, File compression, Genetic compression algorithm, Intelligent Compression, Lossless Audio, Lossless audio, Lossless audio compression, Lossy audio compression, Multimedia compression, Negabytes, Sound compression, Source Coding, Source coding, Spatial compression, Temporal compression, Text compression, Transparent decompression, Uncompression, Video Compression, Video coding, Video compression, Video data compression, Video encoding.