[JustForFunML] Bayes Classifier and KNN

Step 1. Groundwork

  1. Supervised learning
    We want to estimate a function that maps an input x to an output y.
  2. Unsupervised learning
    The only thing we have is the input x, so we want to uncover the patterns within x.
  3. Reinforcement learning
    This is somewhat distinct from the two above: the setting has states, actions, and, most importantly, rewards. The agent tries to outperform its previous self in order to find the optimal sequence of actions.
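To make the supervised idea concrete, here is a minimal sketch of estimating a mapping from x to y. The toy data and the choice of least squares are assumptions for illustration only, not part of the original post.

```python
# Minimal supervised-learning sketch: estimate f(x) = a*x + b from
# (x, y) pairs via least squares. The data is hypothetical toy data
# generated by y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form slope and intercept for simple linear regression.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(a, b)  # slope 2.0, intercept 1.0
```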

Step 2. Classification — Do you like the Simpsons?

Fig 1. training error rate
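The training error rate from Fig 1 is simply the fraction of training points the classifier mislabels, (1/n) Σ I(ŷᵢ ≠ yᵢ). A minimal sketch, with made-up labels for the "do you like the Simpsons?" example:

```python
# Training error rate: fraction of training observations where the
# predicted label differs from the true label. Labels are hypothetical.
y_true = ["likes", "likes", "dislikes", "likes", "dislikes"]
y_pred = ["likes", "dislikes", "dislikes", "likes", "likes"]

error_rate = sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)
print(error_rate)  # 0.4 (2 of 5 misclassified)
```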

Step 3. Bayes Classifier

Fig 2. conditional probability — Bayes Classifier
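The Bayes classifier in Fig 2 assigns a test point x to the class j that maximizes the conditional probability P(Y = j | X = x). A minimal sketch, assuming these conditional probabilities are known (in practice they are not, which is what motivates approximations like KNN):

```python
# Bayes classifier sketch: predict the class with the largest
# conditional probability P(Y = j | X = x). The probabilities below
# are made up for illustration.
cond_probs = {
    "likes": 0.7,
    "dislikes": 0.3,
}
prediction = max(cond_probs, key=cond_probs.get)
print(prediction)  # likes
# The Bayes error rate at this x is 1 - max_j P(Y = j | X = x) = 0.3,
# the lowest error any classifier can achieve here.
```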

Step 4. KNN (K-Nearest Neighbors)

Fig 3. KNN classifier conditional probability
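KNN approximates the unknown conditional probability in Fig 3 by a local vote: P(Y = j | X = x₀) ≈ the fraction of the K nearest training points whose label is j. A minimal sketch on hypothetical 1-D toy data:

```python
from collections import Counter

# KNN sketch: estimate P(Y = j | X = x0) as the share of label j among
# the K nearest neighbors, then predict the majority label.
# Training pairs (x, label) are hypothetical.
train = [(1.0, "likes"), (1.5, "likes"), (2.0, "dislikes"),
         (3.0, "dislikes"), (3.5, "dislikes")]

def knn_predict(x0, k=3):
    # Sort training points by distance to x0 and keep the k closest.
    neighbors = sorted(train, key=lambda p: abs(p[0] - x0))[:k]
    counts = Counter(label for _, label in neighbors)
    probs = {label: c / k for label, c in counts.items()}
    return max(probs, key=probs.get), probs

label, probs = knn_predict(1.2, k=3)
print(label, probs)  # likes {'likes': 2/3, 'dislikes': 1/3}
```

With K = 3 the three closest points to x₀ = 1.2 are two "likes" and one "dislikes", so the estimated conditional probability of "likes" is 2/3 and that is the prediction.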




Ydobon is nobody.
