MOTIVATION
Unsupervised learning of useful features, or representations, is one of the most basic challenges of machine learning. Too often the success of a data science project depends on the choice of features used. Machine learning has made great progress in training classification, regression, and recognition systems when "good" representations, or features, of input data are available. However, much human effort is spent on designing good features, which are usually knowledge-based and engineered by domain experts over years of trial and error. A natural question to ask is: "Can we automate the learning of useful features from raw data?"
Unsupervised representation learning techniques capitalise on unlabelled data, which is often cheap, abundant, and sometimes virtually unlimited. The goal of these techniques is to learn a representation that reveals intrinsic low-dimensional structure in data, disentangles underlying factors of variation by incorporating universal AI priors such as smoothness and sparsity, and is useful across multiple tasks and domains.
TENTATIVE TOPICS
- Subspace learning: PCA, sparse PCA, robust PCA, independent component analysis (ICA), etc.
- Manifold learning: kernel K-means, kernel PCA, IsoMap, Locally Linear Embedding (LLE), etc.
- Deep learning: Restricted Boltzmann Machine, autoencoders, etc.
- Multi-view learning: partial least squares, canonical correlation analysis (CCA), kernel CCA, etc.
- Spectral learning: spectral methods, spectral clustering, etc.
- Representation Learning: A Review and New Perspectives. Yoshua Bengio, Aaron Courville, and Pascal Vincent.
- https://en.wikipedia.org/wiki/Feature_learning
- ...
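As a small taste of the subspace-learning topics above, here is a minimal PCA sketch in Python using only NumPy. The data is synthetic (an assumption for illustration): points in 5 dimensions whose variance lies mostly in a 2-dimensional subspace, which PCA recovers via the SVD of the centered data matrix.

```python
import numpy as np

# Synthetic data (illustrative assumption): 200 points in 5-D that lie
# near a 2-D subspace, plus a small amount of noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) \
    + 0.05 * rng.normal(size=(200, 5))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the top-2 principal components (rows of Vt)
Z = Xc @ Vt[:2].T

# Fraction of total variance captured by the top-2 components
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(Z.shape, explained)
```

Because the data is nearly rank-2, the top two components capture almost all of the variance; this "low-dimensional structure hiding in high-dimensional data" is exactly what the subspace and manifold methods in the list generalise.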