168 relations: A-law algorithm, Absolute threshold of hearing, Algorithm, Algorithmic information theory, Apple Lossless, Arithmetic coding, Audicom, Audio coding format, Audio editing software, Audio file format, Audio Lossless Coding, Audio signal processing, Auditory masking, Auditory system, Bandwidth (computing), Bell Labs, Bit, Bit rate, Blu-ray, Broadcast automation, Burrows–Wheeler transform, Cabinet (file format), Carnegie Mellon University, Claude Shannon, Codec, Coding theory, Compact disc, Compression artifact, Compression of Genomic Re-Sequencing Data, Computational resource, Context-adaptive binary arithmetic coding, Convolution, Data compression, Data differencing, Data file, Data transmission, David A. Huffman, Deblocking filter, Decorrelation, DEFLATE, Delta encoding, Digital camera, Digital container format, Discrete cosine transform, Discrete wavelet transform, Dolby TrueHD, DVD, DVD-Audio, Dynamic range compression, Electronic hardware, ..., Entropy (information theory), Entropy encoding, Equal-loudness contour, Exabyte, Feature (machine learning), Final Fantasy XII, Finite-state machine, FLAC, Forward error correction, Fractal compression, Frequency domain, Gary Sullivan (engineer), Generation loss, GIF, Grammar-based code, Gzip, H.261, H.263, H.264/MPEG-4 AVC, Hadamard transform, HD DVD, Hearing, High Efficiency Video Coding, High fidelity, HTTP compression, Huffman coding, IBM Personal Computer, Image compression, Information, Information theory, Institute of Electrical and Electronics Engineers, Inter frame, International HapMap Project, International Organization for Standardization, Internet, Intra-frame coding, ITU-T, JPEG, K. R. Rao, Kolmogorov complexity, Kullback–Leibler divergence, Latency (engineering), Lempel–Ziv–Markov chain algorithm, Lempel–Ziv–Welch, Line code, Linear prediction, Linear predictive coding, Lossless compression, Lossy compression, Luminance, LZ77 and LZ78, Machine learning, Matching pursuit, Meridian Lossless Packing, Minimum description length, Modified discrete cosine transform, Modulo-N code, Monkey's Audio, Motion compensation, Motion vector, MP3, MPEG-2, MPEG-4, MPEG-4 SLS, N. Ahmed, OptimFROG, Pattern recognition, Pixel, PKZIP, Portable Network Graphics, Posterior probability, Prediction by partial matching, Probability distribution, Proceedings of the IEEE, Psychoacoustics, Pulse-code modulation, Quantization (image processing), Quantization (signal processing), Randomized algorithm, Range encoding, Rate–distortion theory, Redundancy (information theory), Residual frame, Run-length encoding, Self-information, Sequitur algorithm, Shannon–Fano coding, Shorten (file format), Signal processing, Software, Sound quality, Space–time tradeoff, Speech coding, Statistical inference, Statistical model, Sub-band coding, Super Audio CD, Terry Welch, Thomas Wiegand, Time domain, Trade-off, TTA (codec), Uncompressed video, Universal code (data compression), University of Buenos Aires, Variable bitrate, Vector quantization, Video codec, Video coding format, Video quality, Voice over IP, Vorbis, Waveform, Wavelet transform, WavPack, White noise, Windows Media Audio, Zstandard.
An A-law algorithm is a standard companding algorithm, used in European 8-bit PCM digital communications systems to optimize, i.e. modify, the dynamic range of an analog signal for digitizing.
New!!: Data compression and A-law algorithm · See more »
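The companding curve itself is compact enough to sketch. Below is a minimal, illustrative Python version of the A-law characteristic (A = 87.6 is the standard parameter; the function names are mine, and this omits the 8-bit quantization a real G.711 codec performs):

```python
import math

A = 87.6  # standard A-law parameter for European PCM systems

def a_law_compress(x: float) -> float:
    """Map a normalized sample x in [-1, 1] through the A-law curve."""
    ax = abs(x)
    if ax < 1 / A:
        y = A * ax / (1 + math.log(A))
    else:
        y = (1 + math.log(A * ax)) / (1 + math.log(A))
    return math.copysign(y, x)

def a_law_expand(y: float) -> float:
    """Inverse of a_law_compress."""
    ay = abs(y)
    if ay < 1 / (1 + math.log(A)):
        x = ay * (1 + math.log(A)) / A
    else:
        x = math.exp(ay * (1 + math.log(A)) - 1) / A
    return math.copysign(x, y)

# Quiet samples are boosted before quantization; loud ones are attenuated.
print(a_law_compress(0.01), a_law_compress(0.9))
```

Quiet inputs are pushed toward larger code values before quantization, which is what preserves their fidelity in only 8 bits.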
Absolute threshold of hearing
The absolute threshold of hearing (ATH) is the minimum sound level of a pure tone that an average human ear with normal hearing can hear with no other sound present.
New!!: Data compression and Absolute threshold of hearing · See more »
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems.
New!!: Data compression and Algorithm · See more »
Algorithmic information theory
Algorithmic information theory is a subfield of information theory and computer science that concerns itself with the relationship between computation and information.
New!!: Data compression and Algorithmic information theory · See more »
Apple Lossless, also known as Apple Lossless Audio Codec (ALAC), or Apple Lossless Encoder (ALE), is an audio coding format, and its reference audio codec implementation, developed by Apple Inc. for lossless data compression of digital music.
New!!: Data compression and Apple Lossless · See more »
Arithmetic coding is a form of entropy encoding used in lossless data compression.
New!!: Data compression and Arithmetic coding · See more »
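To make the interval-narrowing idea concrete, here is a toy arithmetic coder over a fixed three-symbol model (the probabilities are invented for illustration, and exact fractions sidestep the finite-precision renormalization a real coder needs):

```python
from fractions import Fraction

# Toy fixed model: cumulative probability range per symbol (an assumption).
model = {"a": (Fraction(0), Fraction(1, 2)),
         "b": (Fraction(1, 2), Fraction(3, 4)),
         "c": (Fraction(3, 4), Fraction(1))}

def encode(message):
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        lo, hi = model[sym]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2  # any number in [low, high) identifies the message

def decode(code, length):
    out, low, high = [], Fraction(0), Fraction(1)
    for _ in range(length):
        span = high - low
        for sym, (lo, hi) in model.items():
            if low + span * lo <= code < low + span * hi:
                out.append(sym)
                low, high = low + span * lo, low + span * hi
                break
    return "".join(out)

code = encode("abac")
print(code, decode(code, 4))
```

More probable symbols narrow the interval less, so they cost fewer bits in the final number's binary expansion.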
Audicom (from “Audio en computadora”, Spanish for “Audio in computer”), released in 1989, was the world's first PC-based broadcast automation system to use audio data compression technology based on psychoacoustics.
New!!: Data compression and Audicom · See more »
Audio coding format
An audio coding format (or sometimes audio compression format) is a content representation format for storage or transmission of digital audio (such as in digital television, digital radio and in audio and video files).
New!!: Data compression and Audio coding format · See more »
Audio editing software
Audio editing software is software which allows editing and generating of audio data.
New!!: Data compression and Audio editing software · See more »
Audio file format
An audio file format is a file format for storing digital audio data on a computer system.
New!!: Data compression and Audio file format · See more »
Audio Lossless Coding
MPEG-4 Audio Lossless Coding, also known as MPEG-4 ALS, is an extension to the MPEG-4 Part 3 audio standard to allow lossless audio compression.
New!!: Data compression and Audio Lossless Coding · See more »
Audio signal processing
Audio signal processing or audio processing is the intentional alteration of audio signals often through an audio effect or effects unit.
New!!: Data compression and Audio signal processing · See more »
Auditory masking occurs when the perception of one sound is affected by the presence of another sound.
New!!: Data compression and Auditory masking · See more »
The auditory system is the sensory system for the sense of hearing.
New!!: Data compression and Auditory system · See more »
In computing, bandwidth is the maximum rate of data transfer across a given path.
New!!: Data compression and Bandwidth (computing) · See more »
Nokia Bell Labs (formerly named AT&T Bell Laboratories, Bell Telephone Laboratories and Bell Labs) is an American research and scientific development company, owned by Finnish company Nokia.
New!!: Data compression and Bell Labs · See more »
The bit (a portmanteau of binary digit) is a basic unit of information used in computing and digital communications.
New!!: Data compression and Bit · See more »
In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time.
New!!: Data compression and Bit rate · See more »
Blu-ray or Blu-ray Disc (BD) is a digital optical disc data storage format.
New!!: Data compression and Blu-ray · See more »
Broadcast automation incorporates the use of broadcast programming technology to automate broadcasting operations.
New!!: Data compression and Broadcast automation · See more »
The Burrows–Wheeler transform (BWT, also called block-sorting compression) rearranges a character string into runs of similar characters.
New!!: Data compression and Burrows–Wheeler transform · See more »
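A naive version of the transform fits in a few lines (real implementations build a suffix array rather than materializing every rotation):

```python
def bwt(s: str) -> str:
    """Naive Burrows-Wheeler transform: sort all rotations, take the last column."""
    s += "$"  # unique end-of-string sentinel, assumed absent from the input
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))  # → annb$aa
```

The output is a permutation of the input, but with similar characters clustered into runs, which move-to-front and run-length coding (as in bzip2) then exploit.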
Cabinet (file format)
Cabinet (or CAB) is an archive-file format for Microsoft Windows that supports lossless data compression and embedded digital certificates used for maintaining archive integrity.
New!!: Data compression and Cabinet (file format) · See more »
Carnegie Mellon University
Carnegie Mellon University (commonly known as CMU) is a private research university in Pittsburgh, Pennsylvania.
New!!: Data compression and Carnegie Mellon University · See more »
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory".
New!!: Data compression and Claude Shannon · See more »
A codec is a device or computer program for encoding or decoding a digital data stream or signal.
New!!: Data compression and Codec · See more »
Coding theory is the study of the properties of codes and their respective fitness for specific applications.
New!!: Data compression and Coding theory · See more »
Compact disc (CD) is a digital optical disc data storage format that was co-developed by Philips and Sony and released in 1982.
New!!: Data compression and Compact disc · See more »
A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression.
New!!: Data compression and Compression artifact · See more »
Compression of Genomic Re-Sequencing Data
High-throughput sequencing technologies have led to a dramatic decline of genome sequencing costs and to an astonishingly rapid accumulation of genomic data.
New!!: Data compression and Compression of Genomic Re-Sequencing Data · See more »
In computational complexity theory, a computational resource is a resource used by some computational models in the solution of computational problems.
New!!: Data compression and Computational resource · See more »
Context-adaptive binary arithmetic coding
Context-adaptive binary arithmetic coding (CABAC) is a form of entropy encoding used in the H.264/MPEG-4 AVC and High Efficiency Video Coding (HEVC) standards.
New!!: Data compression and Context-adaptive binary arithmetic coding · See more »
In mathematics (and, in particular, functional analysis) convolution is a mathematical operation on two functions (f and g) to produce a third function, that is typically viewed as a modified version of one of the original functions, giving the integral of the pointwise multiplication of the two functions as a function of the amount that one of the original functions is translated.
New!!: Data compression and Convolution · See more »
In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation.
New!!: Data compression and Data compression · See more »
In computer science and information theory, data differencing or differential compression is producing a technical description of the difference between two sets of data – a source and a target.
New!!: Data compression and Data differencing · See more »
A data file is a computer file which stores data to be used by a computer application or system.
New!!: Data compression and Data file · See more »
Data transmission (also data communication or digital communications) is the transfer of data (a digital bitstream or a digitized analog signal) over a point-to-point or point-to-multipoint communication channel.
New!!: Data compression and Data transmission · See more »
David A. Huffman
David Albert Huffman (August 9, 1925 – October 7, 1999) was a pioneer in computer science, known for his Huffman coding.
New!!: Data compression and David A. Huffman · See more »
A deblocking filter is a video filter applied to decoded compressed video to improve visual quality and prediction performance by smoothing the sharp edges which can form between macroblocks when block coding techniques are used.
New!!: Data compression and Deblocking filter · See more »
Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal.
New!!: Data compression and Decorrelation · See more »
In computing, Deflate is a lossless data compression algorithm and associated file format that uses a combination of the LZ77 algorithm and Huffman coding.
New!!: Data compression and DEFLATE · See more »
Delta encoding is a way of storing or transmitting data in the form of differences (deltas) between sequential data rather than complete files; more generally this is known as data differencing.
New!!: Data compression and Delta encoding · See more »
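For slowly varying sequences the scheme is trivial to sketch (a minimal illustration, not any particular codec's on-disk format):

```python
from itertools import accumulate

def delta_encode(values):
    """Store the first value, then successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    """Running sum of the deltas recovers the original sequence."""
    return list(accumulate(deltas))

samples = [100, 101, 103, 103, 100]
deltas = delta_encode(samples)  # [100, 1, 2, 0, -3]: small values, cheap to entropy-code
assert delta_decode(deltas) == samples
```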
A digital camera or digicam is a camera that captures photographs in digital memory.
New!!: Data compression and Digital camera · See more »
Digital container format
A container or wrapper format is a metafile format whose specification describes how different elements of data and metadata coexist in a computer file.
New!!: Data compression and Digital container format · See more »
Discrete cosine transform
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies.
New!!: Data compression and Discrete cosine transform · See more »
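A direct, unoptimized DCT-II computed from the definition shows the energy-compaction property that makes the transform useful for compression (production codecs use fast O(N log N) factorizations instead):

```python
import math

def dct_ii(x):
    """DCT-II from its definition: X_k = sum_n x_n * cos(pi/N * (n + 1/2) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A smooth (here constant) signal compacts into the first coefficient:
print(dct_ii([1.0, 1.0, 1.0, 1.0]))  # ≈ [4.0, 0.0, 0.0, 0.0]
```

After the transform, a codec quantizes the small high-frequency coefficients heavily or drops them outright.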
Discrete wavelet transform
In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled.
New!!: Data compression and Discrete wavelet transform · See more »
Dolby TrueHD is a lossless multi-channel audio codec developed by Dolby Laboratories which is used in home-entertainment equipment such as Blu-ray Disc players and A/V receivers.
New!!: Data compression and Dolby TrueHD · See more »
DVD (an abbreviation of "digital video disc" or "digital versatile disc") is a digital optical disc storage format invented and developed by Philips and Sony in 1995.
New!!: Data compression and DVD · See more »
DVD-Audio (commonly abbreviated as DVD-A) is a digital format for delivering high-fidelity audio content on a DVD.
New!!: Data compression and DVD-Audio · See more »
Dynamic range compression
Dynamic range compression (DRC) or simply compression is an audio signal processing operation that reduces the volume of loud sounds or amplifies quiet sounds thus reducing or compressing an audio signal's dynamic range.
New!!: Data compression and Dynamic range compression · See more »
Electronic hardware consists of interconnected electronic components which perform analog or logic operations on received and locally stored information to produce new information as output, to store it, or to provide control for output actuator mechanisms.
New!!: Data compression and Electronic hardware · See more »
Entropy (information theory)
Information entropy is the average rate at which information is produced by a stochastic source of data.
New!!: Data compression and Entropy (information theory) · See more »
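Shannon's formula is short enough to state in code; it is the lower bound on the average number of bits per symbol that any lossless coder can achieve for an i.i.d. source:

```python
import math

def shannon_entropy(probs):
    """Average information in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit per flip
print(shannon_entropy([0.9, 0.1]))  # biased coin: ~0.47 bits, hence compressible
```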
In information theory an entropy encoding is a lossless data compression scheme that is independent of the specific characteristics of the medium.
New!!: Data compression and Entropy encoding · See more »
An equal-loudness contour is a measure of sound pressure (dB SPL), over the frequency spectrum, for which a listener perceives a constant loudness when presented with pure steady tones.
New!!: Data compression and Equal-loudness contour · See more »
The exabyte is a multiple of the unit byte for digital information.
New!!: Data compression and Exabyte · See more »
Feature (machine learning)
In machine learning and pattern recognition, a feature is an individual measurable property or characteristic of a phenomenon being observed.
New!!: Data compression and Feature (machine learning) · See more »
Final Fantasy XII
Final Fantasy XII is a fantasy role-playing video game developed and published by Square Enix for the PlayStation 2 home video game console.
New!!: Data compression and Final Fantasy XII · See more »
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation.
New!!: Data compression and Finite-state machine · See more »
FLAC (Free Lossless Audio Codec) is an audio coding format for lossless compression of digital audio, and is also the name of the free software project producing the FLAC tools, the reference software package that includes a codec implementation.
New!!: Data compression and FLAC · See more »
Forward error correction
In telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.
New!!: Data compression and Forward error correction · See more »
Fractal compression is a lossy compression method for digital images, based on fractals.
New!!: Data compression and Fractal compression · See more »
In electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time.
New!!: Data compression and Frequency domain · See more »
Gary Sullivan (engineer)
Gary Joseph Sullivan (born 1960) is an American electrical engineer who led the development of the H.264/MPEG-4 AVC and HEVC video coding standards and created the DirectX Video Acceleration (DXVA) API/DDI video decoding feature of the Microsoft Windows operating system.
New!!: Data compression and Gary Sullivan (engineer) · See more »
Generation loss is the loss of quality between subsequent copies or transcodes of data.
New!!: Data compression and Generation loss · See more »
The Graphics Interchange Format, better known by its acronym GIF, is a bitmap image format that was developed by a team at the bulletin board service (BBS) provider CompuServe led by American computer scientist Steve Wilhite on June 15, 1987.
New!!: Data compression and GIF · See more »
Grammar-based codes or Grammar-based compression are compression algorithms based on the idea of constructing a context-free grammar (CFG) for the string to be compressed.
New!!: Data compression and Grammar-based code · See more »
gzip is a file format and a software application used for file compression and decompression.
New!!: Data compression and Gzip · See more »
H.261 is an ITU-T video compression standard, first ratified in November 1988.
New!!: Data compression and H.261 · See more »
H.263 is a video compression standard originally designed as a low-bit-rate compressed format for videoconferencing.
New!!: Data compression and H.263 · See more »
H.264 or MPEG-4 Part 10, Advanced Video Coding (MPEG-4 AVC) is a block-oriented motion-compensation-based video compression standard.
New!!: Data compression and H.264/MPEG-4 AVC · See more »
The Hadamard transform (also known as the Walsh–Hadamard transform, Hadamard–Rademacher–Walsh transform, Walsh transform, or Walsh–Fourier transform) is an example of a generalized class of Fourier transforms.
New!!: Data compression and Hadamard transform · See more »
HD DVD (short for High Definition Digital Versatile Disc) is a discontinued high-density optical disc format for storing data and playback of high-definition video.
New!!: Data compression and HD DVD · See more »
Hearing, or auditory perception, is the ability to perceive sounds by detecting vibrations (changes in the pressure of the surrounding medium through time) through an organ such as the ear.
New!!: Data compression and Hearing · See more »
High Efficiency Video Coding
High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard, one of several potential successors to the widely used AVC (H.264 or MPEG-4 Part 10).
New!!: Data compression and High Efficiency Video Coding · See more »
High fidelity (often shortened to hi-fi or hifi) is a term used by listeners, audiophiles and home audio enthusiasts to refer to high-quality reproduction of sound.
New!!: Data compression and High fidelity · See more »
HTTP compression is a capability that can be built into web servers and web clients to improve transfer speed and bandwidth utilization.
New!!: Data compression and HTTP compression · See more »
In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression.
New!!: Data compression and Huffman coding · See more »
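A compact way to build such a code greedily merges the two lightest subtrees using a min-heap (a sketch: the example frequencies are the classic textbook ones, and tie-breaking details vary between implementations):

```python
import heapq

def huffman_codes(freqs):
    """Build a prefix code from symbol frequencies via a min-heap of subtrees."""
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique counter so tuples never compare the dicts
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
print(codes)  # the most frequent symbol gets the shortest codeword
```

For these frequencies "a" receives a 1-bit code and "f" a 4-bit code, and the codeword lengths satisfy the Kraft equality, as they must for an optimal prefix code.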
IBM Personal Computer
The IBM Personal Computer, commonly known as the IBM PC, is the original version and progenitor of the IBM PC compatible hardware platform.
New!!: Data compression and IBM Personal Computer · See more »
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission.
New!!: Data compression and Image compression · See more »
Information is any entity or form that provides the answer to a question of some kind or resolves uncertainty.
New!!: Data compression and Information · See more »
Information theory studies the quantification, storage, and communication of information.
New!!: Data compression and Information theory · See more »
Institute of Electrical and Electronics Engineers
The Institute of Electrical and Electronics Engineers (IEEE) is a professional association with its corporate office in New York City and its operations center in Piscataway, New Jersey.
New!!: Data compression and Institute of Electrical and Electronics Engineers · See more »
An inter frame is a frame in a video compression stream which is expressed in terms of one or more neighboring frames.
New!!: Data compression and Inter frame · See more »
International HapMap Project
The International HapMap Project was an organization that aimed to develop a haplotype map (HapMap) of the human genome, to describe the common patterns of human genetic variation.
New!!: Data compression and International HapMap Project · See more »
International Organization for Standardization
The International Organization for Standardization (ISO) is an international standard-setting body composed of representatives from various national standards organizations.
New!!: Data compression and International Organization for Standardization · See more »
The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide.
New!!: Data compression and Internet · See more »
Intra-frame coding is used in video coding (compression).
New!!: Data compression and Intra-frame coding · See more »
The ITU Telecommunication Standardization Sector (ITU-T) is one of the three sectors (divisions or units) of the International Telecommunication Union (ITU); it coordinates standards for telecommunications.
New!!: Data compression and ITU-T · See more »
JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography.
New!!: Data compression and JPEG · See more »
K. R. Rao
Kamisetty Ramamohan Rao was an Indian-American electrical engineer at the University of Texas at Arlington who, together with Nasir Ahmed and T. Natarajan, introduced the discrete cosine transform (DCT) in 1974.
New!!: Data compression and K. R. Rao · See more »
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of the shortest computer program (in a predetermined programming language) that produces the object as output.
New!!: Data compression and Kolmogorov complexity · See more »
In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy) is a measure of how one probability distribution diverges from a second, expected probability distribution.
New!!: Data compression and Kullback–Leibler divergence · See more »
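In compression terms, D_KL(P || Q) is the expected number of extra bits per symbol paid for coding data distributed as P with a code optimized for Q:

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Extra bits per flip paid for modeling a fair coin as heavily biased:
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))
```

The divergence is zero exactly when the model matches the source, and positive otherwise, which is why it appears in rate penalties throughout lossless coding.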
Latency is a time interval between the stimulation and response, or, from a more general point of view, a time delay between the cause and the effect of some physical change in the system being observed.
New!!: Data compression and Latency (engineering) · See more »
Lempel–Ziv–Markov chain algorithm
The Lempel–Ziv–Markov chain algorithm (LZMA) is an algorithm used to perform lossless data compression.
New!!: Data compression and Lempel–Ziv–Markov chain algorithm · See more »
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch.
New!!: Data compression and Lempel–Ziv–Welch · See more »
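The compressor side can be sketched in a dozen lines (decompression mirrors it; this sketch assumes byte-oriented input and an unbounded dictionary for simplicity):

```python
def lzw_compress(data: str):
    """Emit dictionary indices, growing the dictionary as the input is scanned."""
    table = {chr(i): i for i in range(256)}  # start with all single bytes
    current, output = "", []
    for ch in data:
        if current + ch in table:
            current += ch                    # extend the current match
        else:
            output.append(table[current])    # emit the longest known string
            table[current + ch] = len(table) # learn a new, longer string
            current = ch
    if current:
        output.append(table[current])
    return output

codes = lzw_compress("TOBEORNOTTOBEORTOBEORNOT")
print(len(codes), "codes for 24 input symbols")
```

Repeated phrases such as "TOBEOR" end up as single dictionary indices, so the output shrinks as the input's repetition grows.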
In telecommunications, a line code is a pattern chosen to represent digital data for transmission over a communication channel; some signals are more prone to error than others, as the physics of the communication or storage medium constrains the repertoire of signals that can be used reliably.
New!!: Data compression and Line code · See more »
Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.
New!!: Data compression and Linear prediction · See more »
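Lossless audio coders such as FLAC exploit this by transmitting only the prediction residuals. A sketch with a fixed (not adapted) predictor, where the coefficient choice is illustrative:

```python
def predict_residuals(samples, coeffs):
    """Residuals of a fixed linear predictor: e[n] = x[n] - sum(a_k * x[n-k])."""
    order = len(coeffs)
    res = list(samples[:order])  # warm-up samples are stored verbatim
    for n in range(order, len(samples)):
        pred = sum(a * samples[n - k - 1] for k, a in enumerate(coeffs))
        res.append(samples[n] - pred)
    return res

# The second-order predictor 2*x[n-1] - x[n-2] is exact on linear ramps,
# so a ramp's residuals are all zero: ideal input for an entropy coder.
print(predict_residuals([10, 12, 14, 16, 18], [2, -1]))  # [10, 12, 0, 0, 0]
```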
Linear predictive coding
Linear predictive coding (LPC) is a tool used mostly in audio signal processing and speech processing for representing the spectral envelope of a digital signal of speech in compressed form, using the information of a linear predictive model.
New!!: Data compression and Linear predictive coding · See more »
Lossless compression is a class of data compression algorithms that allows the original data to be perfectly reconstructed from the compressed data.
New!!: Data compression and Lossless compression · See more »
In information technology, lossy compression or irreversible compression is the class of data encoding methods that uses inexact approximations and partial data discarding to represent the content.
New!!: Data compression and Lossy compression · See more »
Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction.
New!!: Data compression and Luminance · See more »
LZ77 and LZ78
LZ77 and LZ78 are the two lossless data compression algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978.
New!!: Data compression and LZ77 and LZ78 · See more »
Machine learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to "learn" (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.
New!!: Data compression and Machine learning · See more »
Matching pursuit (MP) is a sparse approximation algorithm which involves finding the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary D. The basic idea is to approximately represent a signal f from Hilbert space H as a weighted sum of finitely many functions g_{\gamma_n} (called atoms) taken from D. An approximation with N atoms has the form f \approx \sum_{n=1}^{N} a_n g_{\gamma_n}, where a_n is the scalar weighting factor (amplitude) for the atom g_{\gamma_n} \in D. Normally, not every atom in D will be used in this sum.
New!!: Data compression and Matching pursuit · See more »
Meridian Lossless Packing
Meridian Lossless Packing, also known as Packed PCM (PPCM), is a lossless compression technique for compressing PCM audio data developed by Meridian Audio Ltd.
New!!: Data compression and Meridian Lossless Packing · See more »
Minimum description length
The minimum description length (MDL) principle is a formalization of Occam's razor in which the best hypothesis (a model and its parameters) for a given set of data is the one that leads to the best compression of the data.
New!!: Data compression and Minimum description length · See more »
Modified discrete cosine transform
The modified discrete cosine transform (MDCT) is a lapped transform based on the type-IV discrete cosine transform (DCT-IV), with the additional property of being lapped: it is designed to be performed on consecutive blocks of a larger dataset, where subsequent blocks are overlapped so that the last half of one block coincides with the first half of the next block.
New!!: Data compression and Modified discrete cosine transform · See more »
Modulo-N code is a lossy compression algorithm used to compress correlated data sources using modulo arithmetic.
New!!: Data compression and Modulo-N code · See more »
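The trick is that if the decoder already holds a reference value known to lie within N/2 of the true value, only the remainder modulo N needs to be transmitted. A hedged sketch (function names and the sensor scenario are illustrative):

```python
def modulo_encode(value: int, n: int) -> int:
    """Transmit only the remainder, not the full value."""
    return value % n

def modulo_decode(remainder: int, reference: int, n: int) -> int:
    """Pick the value with the given remainder that lies closest to the reference."""
    base = reference - (reference % n) + remainder
    candidates = (base - n, base, base + n)
    return min(candidates, key=lambda c: abs(c - reference))

# Two correlated sensors: the decoder knows one reading and recovers the other.
sent = modulo_encode(1002, 16)       # 10 -- only 4 bits on the wire
print(modulo_decode(sent, 998, 16))  # → 1002
```

Compression is lossy only when the correlation assumption fails, i.e. when the true value drifts more than N/2 from the reference.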
Monkey's Audio is an algorithm and file format for lossless audio data compression.
New!!: Data compression and Monkey's Audio · See more »
Motion compensation is an algorithmic technique used to predict a frame in a video, given the previous and/or future frames by accounting for motion of the camera and/or objects in the video.
New!!: Data compression and Motion compensation · See more »
In video compression, a motion vector is the key element in the motion estimation process.
New!!: Data compression and Motion vector · See more »
MP3 (formally MPEG-1 Audio Layer III or MPEG-2 Audio Layer III) is an audio coding format for digital audio.
New!!: Data compression and MP3 · See more »
MPEG-2 (a.k.a. H.222/H.262 as defined by the ITU) is a standard for "the generic coding of moving pictures and associated audio information".
New!!: Data compression and MPEG-2 · See more »
MPEG-4 is a method of defining compression of audio and visual (AV) digital data.
New!!: Data compression and MPEG-4 · See more »
MPEG-4 SLS, or MPEG-4 Scalable to Lossless as per ISO/IEC 14496-3:2005/Amd 3:2006 (Scalable Lossless Coding), is an extension to the MPEG-4 Part 3 (MPEG-4 Audio) standard to allow lossless audio compression scalable to lossy MPEG-4 General Audio coding methods (e.g., variations of AAC).
New!!: Data compression and MPEG-4 SLS · See more »
Nasir Ahmed (born 1940 in Bangalore, India) is a Professor Emeritus of Electrical and Computer Engineering at the University of New Mexico (UNM).
New!!: Data compression and N. Ahmed · See more »
OptimFROG is a proprietary lossless audio data compression codec developed by Florin Ghido.
New!!: Data compression and OptimFROG · See more »
Pattern recognition is a branch of machine learning that focuses on the recognition of patterns and regularities in data, although it is in some cases considered to be nearly synonymous with machine learning.
New!!: Data compression and Pattern recognition · See more »
In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the smallest addressable element in an all-points-addressable display device; it is therefore the smallest controllable element of a picture represented on the screen.
New!!: Data compression and Pixel · See more »
PKZIP is a file archiving computer program, notable for introducing the popular ZIP file format.
New!!: Data compression and PKZIP · See more »
Portable Network Graphics
Portable Network Graphics (PNG, pronounced as the initialism or as "ping") is a raster graphics file format that supports lossless data compression.
New!!: Data compression and Portable Network Graphics · See more »
In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence or background is taken into account.
New!!: Data compression and Posterior probability · See more »
Prediction by partial matching
Prediction by partial matching (PPM) is an adaptive statistical data compression technique based on context modeling and prediction.
New!!: Data compression and Prediction by partial matching · See more »
In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.
New!!: Data compression and Probability distribution · See more »
Proceedings of the IEEE
The Proceedings of the IEEE is a monthly peer-reviewed scientific journal published by the Institute of Electrical and Electronics Engineers (IEEE).
New!!: Data compression and Proceedings of the IEEE · See more »
Psychoacoustics is the scientific study of sound perception and audiology.
New!!: Data compression and Psychoacoustics · See more »
Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals.
New!!: Data compression and Pulse-code modulation · See more »
Quantization (image processing)
Quantization, involved in image processing, is a lossy compression technique achieved by compressing a range of values to a single quantum value.
New!!: Data compression and Quantization (image processing) · See more »
Quantization (signal processing)
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set.
New!!: Data compression and Quantization (signal processing) · See more »
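A minimal sketch of such a many-to-few mapping, assuming a uniform quantizer with a hypothetical step size:

```python
def quantize(x, step):
    """Uniform quantizer: map a real value onto the nearest multiple
    of `step`, i.e. from a continuous set onto a countable smaller set."""
    return step * round(x / step)

# Precision below the step size is discarded, which is why quantization
# is the lossy step at the heart of many codecs.
quantize(3.14159, 0.5)  # -> 3.0
```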
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic.
New!!: Data compression and Randomized algorithm · See more »
Range encoding is an entropy coding method defined by G. Nigel N. Martin in a 1979 paper presented at the Video & Data Recording Conference, Southampton, UK, July 24–27, 1979.
New!!: Data compression and Range encoding · See more »
Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding a given distortion D.
New!!: Data compression and Rate–distortion theory · See more »
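For a concrete instance of the rate R versus distortion D trade-off: the rate–distortion function of a memoryless Gaussian source under squared-error distortion has the well-known closed form R(D) = ½ log2(σ²/D), floored at zero. A sketch:

```python
import math

def gaussian_rd(variance, distortion):
    """R(D) for a memoryless Gaussian source with squared-error
    distortion: 0.5 * log2(sigma^2 / D) bits per symbol, or 0 when
    the allowed distortion already exceeds the source variance."""
    return max(0.0, 0.5 * math.log2(variance / distortion))

# Halving the allowed distortion below the variance costs bits:
gaussian_rd(1.0, 0.25)  # -> 1.0 bit per symbol
```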
Redundancy (information theory)
In information theory, redundancy measures the fractional difference between the entropy of an ensemble and its maximum possible value \log(|\mathcal{X}|).
New!!: Data compression and Redundancy (information theory) · See more »
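A small illustration of that definition, assuming the fractional form 1 − H(p)/log2|X|:

```python
import math

def redundancy(probs):
    """Fractional redundancy 1 - H(p) / log2(|X|) of a distribution
    given as a list of outcome probabilities."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return 1 - h / math.log2(len(probs))

redundancy([0.5, 0.5])  # uniform distribution: zero redundancy
redundancy([0.9, 0.1])  # skewed distribution: positive redundancy
```

The second distribution's positive redundancy is exactly the slack a compressor can exploit.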
In video compression algorithms, a residual frame is formed by subtracting the reference frame from the desired frame.
New!!: Data compression and Residual frame · See more »
Run-length encoding (RLE) is a very simple form of lossless data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run.
New!!: Data compression and Run-length encoding · See more »
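The run-and-count scheme described above fits in a few lines of Python; the pair-list representation here is illustrative, not a standard on-disk format:

```python
def rle_encode(data):
    """Collapse consecutive repeats into (value, run length) pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

rle_encode("AAAABBBCC")  # -> [('A', 4), ('B', 3), ('C', 2)]
```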
In information theory, the self-information (or surprisal) of an outcome quantifies how surprising that outcome is when a random variable is sampled: the rarer the outcome, the more information it carries.
New!!: Data compression and Self-information · See more »
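A two-line illustration: surprisal is −log2 p bits, so improbable outcomes carry more information:

```python
import math

def self_information(p):
    """Surprisal -log2(p), in bits, of an outcome with probability p."""
    return -math.log2(p)

self_information(0.5)    # a fair coin flip carries 1.0 bit
self_information(1 / 8)  # a 1-in-8 outcome carries 3.0 bits
```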
Sequitur (or Nevill-Manning algorithm) is a recursive algorithm developed by Craig Nevill-Manning and Ian H. Witten in 1997 that infers a hierarchical structure (context-free grammar) from a sequence of discrete symbols.
New!!: Data compression and Sequitur algorithm · See more »
In the field of data compression, Shannon–Fano coding, named after Claude Shannon and Robert Fano, is a technique for constructing a prefix code based on a set of symbols and their probabilities (estimated or measured).
New!!: Data compression and Shannon–Fano coding · See more »
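A compact sketch of the top-down construction: sort symbols by descending probability, split into two groups with totals as equal as possible, prefix one group with 0 and the other with 1, and recurse. This is an illustrative encoder, not a production one:

```python
def shannon_fano(probs):
    """Build a Shannon-Fano prefix code from a {symbol: probability} dict."""
    codes = {s: "" for s in probs}

    def split(group):
        if len(group) < 2:
            return
        # Choose the cut point that best balances the two groups' totals.
        cut = min(range(1, len(group)),
                  key=lambda i: abs(sum(probs[s] for s in group[:i])
                                    - sum(probs[s] for s in group[i:])))
        for s in group[:cut]:
            codes[s] += "0"
        for s in group[cut:]:
            codes[s] += "1"
        split(group[:cut])
        split(group[cut:])

    split(sorted(probs, key=probs.get, reverse=True))
    return codes

shannon_fano({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1})
# -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Note the defining property: more probable symbols receive shorter codewords, and no codeword is a prefix of another.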
Shorten (file format)
Shorten (SHN) is a file format used for compressing audio data.
New!!: Data compression and Shorten (file format) · See more »
Signal processing concerns the analysis, synthesis, and modification of signals, which are broadly defined as functions conveying "information about the behavior or attributes of some phenomenon", such as sound, images, and biological measurements.
New!!: Data compression and Signal processing · See more »
Computer software, or simply software, is a generic term for the collection of data or computer instructions that tell the computer how to work, in contrast to the physical hardware from which the system is built and which actually performs the work.
New!!: Data compression and Software · See more »
Sound quality is typically an assessment of the accuracy, enjoyability, or intelligibility of audio output from an electronic device.
New!!: Data compression and Sound quality · See more »
A space–time or time–memory trade-off in computer science is a case where an algorithm or program trades increased space usage for decreased running time.
New!!: Data compression and Space–time tradeoff · See more »
Speech coding is an application of data compression of digital audio signals containing speech.
New!!: Data compression and Speech coding · See more »
Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution.
New!!: Data compression and Statistical inference · See more »
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of some sample data and similar data from a larger population.
New!!: Data compression and Statistical model · See more »
In signal processing, sub-band coding (SBC) is any form of transform coding that breaks a signal into a number of different frequency bands, typically by using a fast Fourier transform, and encodes each one independently.
New!!: Data compression and Sub-band coding · See more »
Super Audio CD
Super Audio CD (SACD) is a read-only optical disc for audio storage, introduced in 1999.
New!!: Data compression and Super Audio CD · See more »
Terry Archer Welch was an American computer scientist.
New!!: Data compression and Terry Welch · See more »
Thomas Wiegand (born 6 May 1970 in Wismar) is a German electrical engineer who substantially contributed to the creation of the H.264/MPEG-4 AVC and H.265/MPEG-H HEVC video coding standards.
New!!: Data compression and Thomas Wiegand · See more »
Time domain is the analysis of mathematical functions, physical signals or time series of economic or environmental data, with respect to time.
New!!: Data compression and Time domain · See more »
A trade-off (or tradeoff) is a situational decision that involves diminishing or losing one quality, quantity or property of a set or design in return for gains in other aspects.
New!!: Data compression and Trade-off · See more »
True Audio (TTA) is a lossless compressor for multichannel 8-, 16- and 24-bit audio data.
New!!: Data compression and TTA (codec) · See more »
Uncompressed video is digital video that either has never been compressed or was generated by decompressing previously compressed digital video.
New!!: Data compression and Uncompressed video · See more »
Universal code (data compression)
In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned.
New!!: Data compression and Universal code (data compression) · See more »
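The Elias gamma code is a standard example of such a universal code; it writes the binary form of n preceded by one zero bit per binary digit after the first, so smaller integers get shorter codewords:

```python
def elias_gamma(n):
    """Elias gamma code for a positive integer: a unary length prefix
    (len-1 zero bits) followed by the binary representation of n."""
    assert n >= 1, "gamma codes are defined for positive integers"
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

[elias_gamma(n) for n in (1, 2, 3, 4)]  # -> ['1', '010', '011', '00100']
```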
University of Buenos Aires
The University of Buenos Aires (Universidad de Buenos Aires, UBA) is the largest university in Argentina and the second largest university by enrollment in Latin America.
New!!: Data compression and University of Buenos Aires · See more »
Variable bitrate (VBR), in telecommunications and computing, is an encoding mode in which the bitrate of sound or video encoding varies over time rather than staying constant.
New!!: Data compression and Variable bitrate · See more »
Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors.
New!!: Data compression and Vector quantization · See more »
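A minimal sketch of the nearest-prototype mapping at the heart of VQ, using a small hypothetical 2-D codebook; only the winning index need be stored or transmitted:

```python
def vq_nearest(vector, codebook):
    """Return the index of the codebook prototype nearest to `vector`
    under squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(vector, codebook[i]))

# Hypothetical codebook of four 2-D prototype vectors.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vq_nearest((0.9, 0.2), codebook)  # -> 1, i.e. prototype (1.0, 0.0)
```

In a real codec the codebook itself is trained on representative data (e.g. with the LBG/k-means algorithm); the fixed codebook above is only for illustration.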
A video codec is an electronic circuit or software that compresses or decompresses digital video.
New!!: Data compression and Video codec · See more »
Video coding format
A video coding format (or sometimes video compression format) is a content representation format for storage or transmission of digital video content (such as in a data file or bitstream).
New!!: Data compression and Video coding format · See more »
Video quality is a characteristic of a video passed through a video transmission/processing system, a formal or informal measure of perceived video degradation (typically, compared to the original video).
New!!: Data compression and Video quality · See more »
Voice over IP
Voice over Internet Protocol (also voice over IP, VoIP or IP telephony) is a methodology and group of technologies for the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such as the Internet.
New!!: Data compression and Voice over IP · See more »
Vorbis is a free and open-source software project headed by the Xiph.Org Foundation.
New!!: Data compression and Vorbis · See more »
A waveform is the shape and form of a signal such as a wave moving in a physical medium or an abstract representation.
New!!: Data compression and Waveform · See more »
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet.
New!!: Data compression and Wavelet transform · See more »
WavPack is a free and open-source lossless audio compression format.
New!!: Data compression and WavPack · See more »
In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density.
New!!: Data compression and White noise · See more »
Windows Media Audio
Windows Media Audio (WMA) is the name of a series of audio codecs and their corresponding audio coding formats developed by Microsoft.
New!!: Data compression and Windows Media Audio · See more »
Zstandard (or Zstd) is a lossless data compression algorithm developed by Yann Collet at Facebook.
New!!: Data compression and Zstandard · See more »
Audio compression (data), Audio data compression, Bit-rate reduction, Block compression, Coding techniques, Compressed data, Compressed digital video, Compressed video, Compression algorithm, Compression algorithms, Compression program, Compression software, Compression utility, Data Compression, Data compression algorithm, Data compression/multimedia compression, Data decompression, Datacompression, Digital audio compression, Digital video compression, File compressing, File compression, Genetic compression algorithm, Intelligent Compression, Lossless Audio, Lossless audio, Lossless audio compression, Lossy audio compression, Multimedia compression, Negabytes, Sound compression, Source Coding, Source coding, Spatial compression, Temporal compression, Text compression, Transparent decompression, Uncompression, Video Compression, Video coding, Video compression, Video data compression, Video encoding.