74 relations: Aleksandr Gorban, Autoencoder, Backpropagation, Charles-Augustin de Coulomb, Concentration of measure, Coulomb's law, Curse of dimensionality, Degree matrix, Diffusion map, Dimension, Dimensionality reduction, Distance, Distance (graph theory), Distance matrix, Dynamic time warping, Dynamical system, Eigendecomposition of a matrix, Elastic map, Embedding, Euclidean distance, Expectation–maximization algorithm, Factor analysis, Feature extraction, Feature learning, Floyd–Warshall algorithm, Fourier series, Gaussian process, Generative topographic map, Geodesic, Global cascades model, Graduated optimization, Growing self-organizing map, Hamming space, Independent component analysis, Isomap, Journal of Machine Learning Research, K-nearest neighbors algorithm, Kernel method, Kernel principal component analysis, Klein bottle, Laplace–Beltrami operator, Latent variable model, Linear discriminant analysis, Local tangent space alignment, Machine learning, Manifold, Manifold alignment, Markov chain, MIT Press, Multidimensional scaling, Neural network, Nonlinear dimensionality reduction, Ohio State University, Partha Niyogi, Principal component analysis, Projective space, Radial basis function network, Random walk, Reproducing kernel Hilbert space, Restricted Boltzmann machine, Sammon mapping, Self-organizing map, Semidefinite embedding, Similarity measure, Singular-value decomposition, Sparse matrix, Sphere, Stress majorization, Swiss roll, T-distributed stochastic neighbor embedding, Torus, Trevor Hastie, University of Chicago, Yale University.

## Aleksandr Gorban

Alexander Nikolaevich Gorban (Александр Николаевич Горба́нь) is a Soviet-born scientist working in the United Kingdom.

## Autoencoder

An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner.

## Backpropagation

Backpropagation is a method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network.

## Charles-Augustin de Coulomb

Charles-Augustin de Coulomb (14 June 1736 – 23 August 1806) was a French military engineer and physicist.

## Concentration of measure

In mathematics, concentration of measure (about a median) is a principle that is applied in measure theory, probability and combinatorics, and has consequences for other fields such as Banach space theory.

## Coulomb's law

Coulomb's law, or Coulomb's inverse-square law, is a law of physics for quantifying the amount of force with which stationary electrically charged particles repel or attract each other.
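
The inverse-square form can be written F = k·|q₁q₂|/r², with k the Coulomb constant. A minimal sketch in Python; the constant is rounded, and `coulomb_force` is an illustrative name, not a standard library function:

```python
# Coulomb constant, approximately, in N*m^2/C^2.
K = 8.99e9

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force (newtons) between two point
    charges q1 and q2 (coulombs) separated by distance r (metres)."""
    return K * abs(q1 * q2) / r ** 2
```

Doubling the separation quarters the force, as the inverse-square form requires.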

## Curse of dimensionality

The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces (often with hundreds or thousands of dimensions) that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience.

## Degree matrix

In the mathematical field of graph theory the degree matrix is a diagonal matrix which contains information about the degree of each vertex—that is, the number of edges attached to each vertex.
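
As a sketch in NumPy on a small hypothetical graph; the graph Laplacian L = D − A built from the degree matrix is what spectral embedding methods operate on:

```python
import numpy as np

# Adjacency matrix of a small undirected graph: edges 0-1, 0-2, 1-2, 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# The degree matrix is diagonal, holding each vertex's edge count (row sums).
D = np.diag(A.sum(axis=1))

# The (unnormalized) graph Laplacian, used by spectral embedding methods.
L = D - A
```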

## Diffusion map

Diffusion maps is a dimensionality reduction or feature extraction algorithm introduced by Coifman and Lafon which computes a family of embeddings of a data set into Euclidean space (often low-dimensional) whose coordinates can be computed from the eigenvectors and eigenvalues of a diffusion operator on the data.

## Dimension

In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it.

## Dimensionality reduction

In statistics, machine learning, and information theory, dimensionality reduction or dimension reduction is the process of reducing the number of random variables under consideration by obtaining a set of principal variables.

## Distance

Distance is a numerical measurement of how far apart objects are.

## Distance (graph theory)

In the mathematical field of graph theory, the distance between two vertices in a graph is the number of edges in a shortest path (also called a graph geodesic) connecting them.

## Distance matrix

In mathematics, computer science and especially graph theory, a distance matrix is a square matrix (two-dimensional array) containing the distances, taken pairwise, between the elements of a set.
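
For Euclidean data, the distance matrix can be computed in a few lines of NumPy via broadcasting; the points below are a hypothetical example:

```python
import numpy as np

# Five hypothetical points in the plane.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 4.0], [1.0, 1.0]])

# Pairwise Euclidean distance matrix: D[i, j] = ||X[i] - X[j]||.
diff = X[:, None, :] - X[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))
```

The result is symmetric with a zero diagonal, as any distance matrix must be.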

## Dynamic time warping

In time series analysis, dynamic time warping (DTW) is one of the algorithms for measuring similarity between two temporal sequences, which may vary in speed.
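
The standard dynamic-programming formulation can be sketched as follows, using |x − y| as the local cost (one common choice among several):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between sequences a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of the three admissible steps.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]
```

Because the alignment may stretch either sequence, a sequence and its slowed-down copy get distance zero.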

## Dynamical system

In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space.

## Eigendecomposition of a matrix

In linear algebra, eigendecomposition or sometimes spectral decomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors.
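
A quick sketch in NumPy on a small symmetric matrix (chosen symmetric so the decomposition is guaranteed real and orthogonal):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Factor A = Q diag(w) Q^T: eigenvalues in w, eigenvectors in Q's columns.
w, Q = np.linalg.eigh(A)

# Multiplying the factors back together recovers the original matrix.
A_back = Q @ np.diag(w) @ Q.T
```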

## Elastic map

Elastic maps provide a tool for nonlinear dimensionality reduction.

## Embedding

In mathematics, an embedding (or imbedding) is one instance of some mathematical structure contained within another instance, such as a group that is a subgroup.

## Euclidean distance

In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" straight-line distance between two points in Euclidean space.

## Expectation–maximization algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables.

## Factor analysis

Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors.

## Feature extraction

In machine learning, pattern recognition and in image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations.

## Feature learning

In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data.

## Floyd–Warshall algorithm

In computer science, the Floyd–Warshall algorithm is an algorithm for finding shortest paths in a weighted graph with positive or negative edge weights (but with no negative cycles).
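
This is the all-pairs shortest-path routine that, for instance, Isomap can use to approximate geodesic distances on a neighborhood graph. A minimal sketch, with `np.inf` marking absent edges:

```python
import numpy as np

def floyd_warshall(W):
    """All-pairs shortest-path distances; W[i, j] is the weight of the
    edge from i to j, or np.inf if there is no edge."""
    D = W.copy()
    n = D.shape[0]
    for k in range(n):            # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if D[i, k] + D[k, j] < D[i, j]:
                    D[i, j] = D[i, k] + D[k, j]
    return D

INF = np.inf
W = np.array([[0.0, 1.0, INF],
              [1.0, 0.0, 2.0],
              [INF, 2.0, 0.0]])
D = floyd_warshall(W)
```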

## Fourier series

In mathematics, a Fourier series is a way to represent a periodic function as the (possibly infinite) sum of simple sine and cosine waves.

## Gaussian process

In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution, i.e. every finite linear combination of them is normally distributed.

## Generative topographic map

Generative topographic map (GTM) is a machine learning method that is a probabilistic counterpart of the self-organizing map (SOM), is probably convergent and does not require a shrinking neighborhood or a decreasing step size.

## Geodesic

In differential geometry, a geodesic is a generalization of the notion of a "straight line" to "curved spaces".

## Global cascades model

Global cascades models are a class of models aiming to model large and rare cascades that are triggered by exogenous perturbations which are relatively small compared with the size of the system.

## Graduated optimization

Graduated optimization is a global optimization technique that attempts to solve a difficult optimization problem by initially solving a greatly simplified problem, and progressively transforming that problem (while optimizing) until it is equivalent to the difficult optimization problem.

## Growing self-organizing map

A growing self-organizing map (GSOM) is a growing variant of a self-organizing map (SOM).

## Hamming space

In statistics and coding theory, a Hamming space is usually the set of all 2^N binary strings of length N. It is used in the theory of coding signals and transmission.
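
The natural metric on a Hamming space counts disagreeing positions:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("strings must have equal length")
    return sum(x != y for x, y in zip(a, b))
```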

## Independent component analysis

In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents.

## Isomap

Isomap is a nonlinear dimensionality reduction method.

## Journal of Machine Learning Research

The Journal of Machine Learning Research is a peer-reviewed open access scientific journal covering machine learning.

## K-nearest neighbors algorithm

In pattern recognition, the k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression.
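
For classification, the method is a majority vote among the nearest training points; a minimal NumPy sketch on hypothetical two-cluster data:

```python
import numpy as np

def knn_classify(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training points
    (Euclidean distance); ties fall to the first winner of np.argmax."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Two well-separated hypothetical clusters, labelled 0 and 1.
X_train = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
```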

## Kernel method

In machine learning, kernel methods are a class of algorithms for pattern analysis, whose best known member is the support vector machine (SVM).

## Kernel principal component analysis

In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods.

## Klein bottle

In topology, a branch of mathematics, the Klein bottle is an example of a non-orientable surface; it is a two-dimensional manifold against which a system for determining a normal vector cannot be consistently defined.

## Laplace–Beltrami operator

In differential geometry, the Laplace operator, named after Pierre-Simon Laplace, can be generalized to operate on functions defined on surfaces in Euclidean space and, more generally, on Riemannian and pseudo-Riemannian manifolds.

## Latent variable model

A latent variable model is a statistical model that relates a set of observable variables (so-called manifest variables) to a set of latent variables.

## Linear discriminant analysis

Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events.

## Local tangent space alignment

Local tangent space alignment (LTSA) is a method for manifold learning, which can efficiently learn a nonlinear embedding into low-dimensional coordinates from high-dimensional data, and can also reconstruct high-dimensional coordinates from embedding coordinates.

## Machine learning

Machine learning is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to "learn" (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.

## Manifold

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

## Manifold alignment

Manifold alignment is a class of machine learning algorithms that produce projections between sets of data, given that the original data sets lie on a common manifold.

## Markov chain

A Markov chain is "a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event".
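
The defining property is that the next distribution depends only on the current one, via a transition matrix; a small NumPy sketch with hypothetical transition probabilities:

```python
import numpy as np

# Row-stochastic transition matrix of a two-state chain (hypothetical values).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Propagate a distribution: pi_{t+1} = pi_t P. Only the current
# distribution matters, which is the Markov property.
pi = np.array([1.0, 0.0])
for _ in range(200):
    pi = pi @ P
# pi has converged to the stationary distribution, the fixed point of pi = pi P.
```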

## MIT Press

The MIT Press is a university press affiliated with the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts (United States).

## Multidimensional scaling

Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset.
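
Classical (Torgerson) MDS, one standard variant, recovers coordinates from a Euclidean distance matrix by double centering and an eigendecomposition; a minimal NumPy sketch:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed points in `dims` dimensions from a matrix D of pairwise
    Euclidean distances (classical/Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dims]           # largest eigenvalues first
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Distances between three points on a line (hypothetical example).
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, dims=1)
```

The embedding is unique only up to rotation and reflection, so one checks the recovered distances rather than the coordinates themselves.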

## Neural network

The term neural network was traditionally used to refer to a network or circuit of neurons.

## Nonlinear dimensionality reduction

High-dimensional data, meaning data that requires more than two or three dimensions to represent, can be difficult to interpret.

## Ohio State University

The Ohio State University, commonly referred to as Ohio State or OSU, is a large, primarily residential, public university in Columbus, Ohio.

## Partha Niyogi

Partha Niyogi (July 31, 1967 – October 1, 2010) was the Louis Block Professor in Computer Science and Statistics at the University of Chicago.

## Principal component analysis

Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.
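
One common way to compute it is via the SVD of the centered data matrix; a sketch on hypothetical correlated data:

```python
import numpy as np

def pca(X, n_components=1):
    """Principal components via SVD of the centered data matrix;
    returns the projected scores and the component directions."""
    Xc = X - X.mean(axis=0)                    # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]             # orthonormal rows
    return Xc @ components.T, components

# Hypothetical correlated data: the second variable is about twice the first.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([x, 2 * x + 0.01 * rng.normal(size=200)])
scores, components = pca(X, n_components=1)
```

The leading component should point (up to sign) along the direction (1, 2)/√5 in which the data actually varies.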

## Projective space

In mathematics, a projective space can be thought of as the set of lines through the origin of a vector space V. The cases V = R^2 and V = R^3 give the real projective line and the real projective plane, respectively, where R denotes the field of real numbers, R^2 the set of ordered pairs of real numbers, and R^3 the set of ordered triples of real numbers.

## Radial basis function network

In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions.

## Random walk

A random walk is a mathematical object, known as a stochastic or random process, that describes a path that consists of a succession of random steps on some mathematical space such as the integers.
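
The simplest example, a symmetric walk on the integers, is just a running sum of equiprobable ±1 steps:

```python
import numpy as np

# Each step is +1 or -1 with equal probability; the path is the
# cumulative sum of the steps.
rng = np.random.default_rng(42)
steps = rng.choice([-1, 1], size=1000)
path = np.cumsum(steps)
```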

## Reproducing kernel Hilbert space

In functional analysis (a branch of mathematics), a reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions in which point evaluation is a continuous linear functional.

## Restricted Boltzmann machine

A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.

## Sammon mapping

Sammon mapping or Sammon projection is an algorithm that maps a high-dimensional space to a space of lower dimensionality (see multidimensional scaling) by trying to preserve the structure of inter-point distances in high-dimensional space in the lower-dimension projection.

## Self-organizing map

A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map, and is therefore a method to do dimensionality reduction.

## Semidefinite embedding

Semidefinite embedding (SDE) or maximum variance unfolding (MVU) is an algorithm in computer science that uses semidefinite programming to perform non-linear dimensionality reduction of high-dimensional vectorial input data.

## Similarity measure

In statistics and related fields, a similarity measure or similarity function is a real-valued function that quantifies the similarity between two objects.
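
Cosine similarity is one widely used example, the cosine of the angle between two vectors (1 for the same direction, 0 for orthogonal, -1 for opposite):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)
```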

## Singular-value decomposition

In linear algebra, the singular-value decomposition (SVD) is a factorization of a real or complex matrix.
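
A sketch in NumPy, including the rank-1 truncation that makes the SVD useful for dimensionality reduction:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# A = U diag(s) Vt, with orthonormal U, Vt and singular values s >= 0
# returned in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_back = (U * s) @ Vt

# Keeping only the largest singular value gives the closest rank-1 matrix
# in the least-squares sense (Eckart-Young theorem).
A_rank1 = s[0] * np.outer(U[:, 0], Vt[0])
```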

## Sparse matrix

In numerical analysis and computer science, a sparse matrix or sparse array is a matrix in which most of the elements are zero.

## Sphere

A sphere (from Greek σφαῖρα — sphaira, "globe, ball") is a perfectly round geometrical object in three-dimensional space that is the surface of a completely round ball (viz., analogous to the circular objects in two dimensions, where a "circle" circumscribes its "disk").

## Stress majorization

Stress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of n m-dimensional data items, a configuration X of n points in r-dimensional space is sought that minimizes the so-called stress function \sigma(X). Usually r is 2 or 3, i.e. the (n × r) matrix X lists points in 2- or 3-dimensional Euclidean space so that the result may be visualised (an MDS plot). The function \sigma is a cost or loss function that measures the squared differences between ideal (m-dimensional) distances and actual distances in r-dimensional space. It is defined as

\sigma(X) = \sum_{i < j \le n} w_{ij} (d_{ij}(X) - \delta_{ij})^2

where w_{ij} \ge 0 is a weight for the measurement between a pair of points (i, j), d_{ij}(X) is the Euclidean distance between points i and j in the configuration X, and \delta_{ij} is the ideal distance between the points (their separation) in the m-dimensional data space. Note that w_{ij} can be used to specify a degree of confidence in the similarity between points (e.g. w_{ij} = 0 can be specified if there is no information for a particular pair).

A configuration X which minimizes \sigma(X) gives a plot in which points that are close together correspond to points that are also close together in the original m-dimensional data space. There are many ways \sigma(X) could be minimized; for example, Kruskal recommended an iterative steepest-descent approach. However, a method with significantly better guarantees on, and rate of, convergence was introduced by Jan de Leeuw. De Leeuw's iterative majorization method at each step minimizes a simple convex function which both bounds \sigma from above and touches the surface of \sigma at a point Z, called the supporting point. In convex analysis such a function is called a majorizing function. This iterative majorization process is also referred to as the SMACOF algorithm ("Scaling by MAjorizing a COmplicated Function").
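
As a concrete illustration, in the common unweighted case (all weights equal to 1) the majorization update has a closed form, the Guttman transform. The following NumPy sketch applies it under that assumption and is not a full SMACOF implementation:

```python
import numpy as np

def pairwise_distances(X):
    """Euclidean distance matrix of the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def smacof(delta, dims=2, n_iter=300, seed=0):
    """Minimize MDS stress with uniform weights by repeatedly applying
    the Guttman transform (the majorization update)."""
    n = delta.shape[0]
    X = np.random.default_rng(seed).normal(size=(n, dims))
    for _ in range(n_iter):
        d = pairwise_distances(X)
        # delta/d where d > 0, and 0 where points coincide.
        ratio = np.where(d > 0, delta / np.where(d > 0, d, 1.0), 0.0)
        B = -ratio
        B[np.arange(n), np.arange(n)] = ratio.sum(axis=1)
        X = B @ X / n              # Guttman transform; stress never increases
    return X

# Target distances taken from the corners of a unit square (test data).
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
delta = pairwise_distances(P)
X = smacof(delta)
```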

## Swiss roll

A Swiss roll, jelly roll, or cream roll is a type of sponge cake roll filled with whipped cream, jam, or icing.
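
In the nonlinear dimensionality reduction literature, the name is borrowed for a standard synthetic benchmark: a 2-D sheet rolled up in 3-D, which linear methods cannot flatten. A sketch generating one common parametrization (the constants are conventional choices, not canonical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = 1.5 * np.pi * (1 + 2 * rng.random(n))    # angle along the roll
height = 21.0 * rng.random(n)                # position across the sheet
# Embed the flat (t, height) sheet in 3-D by rolling it up around the y-axis.
X = np.column_stack([t * np.cos(t), height, t * np.sin(t)])
```

An NLDR method succeeds on this data if it recovers coordinates equivalent to (t, height).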

## T-distributed stochastic neighbor embedding

T-distributed Stochastic Neighbor Embedding (t-SNE) is a machine learning algorithm for visualization developed by Laurens van der Maaten and Geoffrey Hinton.

## Torus

In geometry, a torus (plural tori) is a surface of revolution generated by revolving a circle in three-dimensional space about an axis coplanar with the circle.

## Trevor Hastie

Trevor John Hastie (born 27 June 1953) is a South African and American statistician and computer scientist.

## University of Chicago

The University of Chicago (UChicago, U of C, or Chicago) is a private, non-profit research university in Chicago, Illinois.

## Yale University

Yale University is an American private Ivy League research university in New Haven, Connecticut.

## Redirects here:

Local linear embedding, Locally Linear Embedding, Locally linear embedding, Locally linear embeddings, Manifold learning, Non-linear dimensionality reduction.
