17. Embeddings
Topic: Embeddings
Course: GMLC
Date: 16 April 2019
Professor: Not specified
- 
https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture
 - 
https://developers.google.com/machine-learning/crash-course/embeddings/categorical-input-data
 - 
https://developers.google.com/machine-learning/crash-course/embeddings/obtaining-embeddings
 - 
https://developers.google.com/machine-learning/crash-course/embeddings/programming-exercise
 
- Collaborative filtering
  - The task of making predictions about the interests of one user based on the interests of many other users
 
- Categorical data
  - Refers to input features that represent one or more items drawn from a finite set of choices
 
- Embeddings
  - Used to translate large sparse vectors into a lower-dimensional space while preserving semantic relationships
 
- The problems caused by sparse input data (huge vectors that are mostly zeros) can be mitigated by translating the data into a lower-dimensional space
- Shrinking the network
  - The aim is an embedding space with enough dimensions to capture rich semantic relationships, but not so many that model training slows down
 
- Embeddings as lookup tables
  - To get a dense vector for an input made up of multiple items, we retrieve each item's individual embedding from the matrix and add them together
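A minimal NumPy sketch of this lookup-and-sum; the embedding matrix, vocabulary size, and item indices are all made up for illustration:

```python
import numpy as np

# Hypothetical embedding matrix: a vocabulary of 6 items, each mapped
# to a 3-dimensional dense vector (values made up for illustration).
vocab_size, embed_dim = 6, 3
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, embed_dim))

# A sparse input containing multiple items (e.g. movies a user has
# watched), represented by the indices of those items.
item_indices = [1, 4]

# Lookup: retrieve each item's embedding row, then add the rows
# together to get one dense vector for the whole input.
dense_vector = embedding_matrix[item_indices].sum(axis=0)
print(dense_vector.shape)  # (3,)
```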
 
- Embedding lookup as matrix multiplication
  - The lookup-and-sum above is equivalent to multiplying a sparse multi-hot input vector by the embedding matrix
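A short sketch of that equivalence, reusing the same made-up sizes and indices as above:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 6, 3
embedding_matrix = rng.normal(size=(vocab_size, embed_dim))
item_indices = [1, 4]

# Multi-hot encoding of the same sparse input: a 1 at each item's index.
multi_hot = np.zeros(vocab_size)
multi_hot[item_indices] = 1.0

# Multiplying the sparse vector by the embedding matrix yields exactly
# the same dense vector as looking up the rows and summing them.
dense_via_matmul = multi_hot @ embedding_matrix
dense_via_lookup = embedding_matrix[item_indices].sum(axis=0)
assert np.allclose(dense_via_matmul, dense_via_lookup)
```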
 
- Obtaining embeddings
  - Principal component analysis (PCA)
    - Has been used to create word embeddings
    - Finds highly correlated dimensions that can be collapsed into a single dimension
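A minimal sketch of the PCA approach using scikit-learn; the high-dimensional word vectors (e.g. rows of a word-by-context co-occurrence matrix) and the component count are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical high-dimensional word vectors, e.g. rows of a
# word-by-context co-occurrence matrix (values made up for illustration).
rng = np.random.default_rng(0)
word_vectors = rng.poisson(2.0, size=(1000, 50)).astype(float)

# PCA finds correlated dimensions and collapses them, projecting each
# word onto a small number of principal components.
pca = PCA(n_components=8)
word_embeddings = pca.fit_transform(word_vectors)
print(word_embeddings.shape)  # (1000, 8)
```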
 
  - Word2Vec
    - An algorithm invented at Google for training word embeddings
    - Relies on the distributional hypothesis to map semantically similar words to geometrically close embedding vectors
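A minimal sketch using the gensim library (assuming gensim >= 4.0 is installed); the toy corpus and every hyperparameter are made up for illustration:

```python
from gensim.models import Word2Vec

# Toy tokenized corpus, made up for illustration.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "make", "good", "pets"],
]

# Skip-gram (sg=1) training: words that appear in similar contexts end
# up with geometrically close embedding vectors.
model = Word2Vec(sentences=corpus, vector_size=16, window=2,
                 min_count=1, sg=1, epochs=50, seed=0)

print(model.wv["cat"])               # the learned 16-d vector for "cat"
print(model.wv.most_similar("cat"))  # nearest neighbors by cosine similarity
```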
 
  - Training an embedding as part of a model
    - The embedding can be learned as a layer in a neural network, trained jointly with the rest of the model
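A minimal Keras sketch of an embedding layer trained jointly with a tiny binary text classifier; the vocabulary size, dimensions, and architecture are illustrative assumptions:

```python
import tensorflow as tf

# Illustrative sizes: a 10,000-word vocabulary embedded in 8 dimensions.
vocab_size, embed_dim, seq_len = 10_000, 8, 32

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    # The embedding layer is a trainable lookup table; its weights are
    # learned by backpropagation along with the rest of the network.
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embed_dim),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```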
 
 
- What are embeddings useful for?
- What are ways of obtaining embeddings?
 
- Embeddings are used to translate large sparse vectors into a lower-dimensional space while preserving their semantic relationships
- Embeddings can be obtained with PCA or Word2Vec, or trained inside a neural network as an embedding layer