Posts about machine learning
- Advanced Optimization
- Advice for Applying PCA
- Advice for Data Scientists and Recap
- Algorithm
- Anomaly Detection vs Supervised Learning
- Auditing the data
- Autonomous Driving (Examples)
- Backpropagation Algorithm
- Backpropagation Intuition
- Bayesian Inference
- Boosting
- Boosting, Post-decessor
- Boosting, Pre-decessor
- Ceiling Analysis: What Part of the Pipeline to Work on Next
- Choosing the Number of Principal Components
- Choosing what features to use
- Classification
- Classification vs Regression
- Collaborative filtering
- Collaborative filtering algorithm
- Content-based Recommendation
- Cost Function
- Data for Machine Learning
- Deciding What to Do Next (Revisited)
- Deciding what to Try Next
- Decision Boundary
- Decision Trees
- Definition
- Developing and evaluating an anomaly detection system
- Diagnosing bias vs. variance
- Error Analysis
- Error metrics for Skewed Classes
- Evaluating a Hypothesis
- Examples & Intuition I
- Examples & Intuition II
- Features And Polynomial Regression
- Gaussian Distribution
- Getting Lots of Data and Artificial Data Synthesis
- Gradient Checking
- Gradient Descent
- Hypothesis Representation
- ID3
- Implementation detail: Mean Normalization
- Implementation note: Unrolling parameters
- Infinite Hypothesis Spaces
- Instance Based Learning and Others
- Introduction
- Joint Distribution
- K-means algorithm
- Kernels I
- Kernels II
- kNN
- Large Margin Intuition
- Learning Curves
- Learning Theory
- Learning With Large Datasets
- Linear Regression, Gradient Descent, Cost Function
- Machine Learning
- Map-reduce and data-parallelism
- Mini Batch Gradient Descent
- Model Representation
- Model Representation I
- Model Representation II
- Model selection and training/validation/test sets
- Motivation I : Data Compression
- Motivation II : Data Visualization
- Multi-class Classification
- Multiclass Classification
- Multiple Variables
- Neat Tricks
- Neural Networks & Perceptron
- Neurons & the brain
- Non-linear hypothesis
- Normal Equation
- Note from Intro to Data Science
- Online Learning
- Optimization Objective
- PAC Learning
- Pandas and Dataframes
- Polynomial Regression
- Principal Component Analysis Algorithm
- Principal Component Analysis problem formulation
- Prioritizing What to Work On
- Problem Description and Pipeline
- Problem Formulation
- Problem Motivation
- Problem Set 2
- Project Intro for Titanic
- Putting it together
- Random Initialization
- Reconstruction from compressed representation
- Regression
- Regularization and Bias/Variance
- Regularized Linear Regression
- Regularized Logistic Regression
- Server Computers (Anomaly Detection Example)
- Simplified cost function and gradient descent
- Sliding Windows
- Stochastic Gradient Descent
- Stochastic Gradient Descent Convergence
- Summary
- Support Vector Machines
- The problem of overfitting
- Tools for Neural Networks & Others
- Trading Off Precision & Recall
- Unsupervised Learning: Introduction
- Using SVM
- Vectorization: Low Rank Matrix Factorization
- Validation with scikit-learn
- Evaluation with scikit-learn
- PCA with scikit-learn
- Feature Selection with scikit-learn
- Text Learning with scikit-learn
- Feature Scaling with scikit-learn
- K-Means with scikit-learn
- Outliers with scikit-learn
- Regression with scikit-learn
- Datasets and Question
- Random Forest with scikit-learn
- Decision Trees with scikit-learn
- Support Vector Machine with scikit-learn
- Naive Bayes