added by Ian Vanagas · updated 2y ago
How to Understand ML Papers Quickly
- By thinking about a ML problem first as a set of inputs and desired outputs, you can reason whether the input is even sufficient to predict the output.
from How to Understand ML Papers Quickly by Eric Jang
Ian Vanagas added 2y ago
- Thinking about inputs and outputs to the system in a method-agnostic way lets you take a step back from the algorithmic jargon and consider whether other fields have developed methods that might work here using different terminology.
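The method-agnostic framing above can be made concrete by writing the paper's model as a bare function signature before reading any algorithmic details. A minimal sketch (the image shape, class count, and placeholder softmax are illustrative assumptions, not from the article):

```python
import numpy as np

# Method-agnostic view of a hypothetical image-classification paper:
# the model is just a function from pixels to class probabilities.
# Any method (CNN, transformer, kernel machine) that fills this
# signature is answering the same input/output question.

def classifier(pixels: np.ndarray) -> np.ndarray:
    """Map a (32, 32, 3) image to a probability distribution over 10 classes."""
    logits = np.zeros(10)  # stand-in for whatever architecture the paper uses
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

probs = classifier(np.zeros((32, 32, 3)))
print(probs.shape)  # the output contract: a distribution over 10 classes
```

Once the signature is pinned down, you can ask whether the inputs could plausibly determine the outputs at all, independent of the method.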
- 5) Are the claims in the paper falsifiable?
- 1) What are the inputs to the function approximator?
- 2) What are the outputs of the function approximator?
- 3) What loss supervises the output predictions? What assumptions about the world does this particular objective make?
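The point that a loss encodes assumptions about the world can be sketched with two standard objectives (the data and noise scale here are illustrative assumptions): squared error corresponds to assuming Gaussian observation noise, while absolute error corresponds to assuming Laplacian noise and so penalizes outliers less.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.normal(size=100)
y_pred = y_true + rng.normal(scale=0.1, size=100)  # predictions with small noise

# Mean squared error: minimizing it is maximum likelihood
# under an additive Gaussian noise model on the outputs.
mse = np.mean((y_pred - y_true) ** 2)

# Mean absolute error: maximum likelihood under Laplacian noise,
# a heavier-tailed assumption that is more robust to outliers.
mae = np.mean(np.abs(y_pred - y_true))

print(mse, mae)
```

Reading off which noise model a paper's loss implies is a quick way to spot assumptions the authors may not state explicitly.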
- ML models are formed by combining biases and data. Sometimes the biases are strong, other times they are weak. To make a model generalize better, you need to add more biases or add more unbiased data. There is no free lunch.
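The bias/data trade-off can be seen in a toy curve-fitting sketch (the sine target, polynomial degrees, and noise level are all assumptions chosen for illustration): a strongly biased linear model underfits the curve no matter what, while a weakly biased degree-9 polynomial fits the 10 noisy training points exactly but can generalize poorly between them.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=10)  # 10 noisy samples

x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)  # noiseless targets for evaluation

# Strong (wrong) bias: a line cannot represent a full sine period.
lin_mse = np.mean((np.polyval(np.polyfit(x, y, 1), x_test) - y_test) ** 2)

# Weak bias: degree-9 polynomial interpolates all 10 points,
# fitting the noise as well as the signal.
flex_mse = np.mean((np.polyval(np.polyfit(x, y, 9), x_test) - y_test) ** 2)

print(lin_mse, flex_mse)
```

Adding the right bias (a lower degree, regularization) or more data would shrink the flexible model's test error; neither comes for free.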
- 4) Once trained, what is the model able to generalize to, with regard to input/output pairs it hasn’t seen before?
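The generalization question above is operationalized by evaluating on held-out input/output pairs the model never saw during fitting. A minimal sketch with a linear regression (the data-generating process and split sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + rng.normal(scale=0.1, size=200)  # linear signal plus noise

# Hold out pairs the model never sees during fitting.
x_train, x_test = x[:150], x[150:]
y_train, y_test = y[:150], y[150:]

# Fit on the training split only.
slope, intercept = np.polyfit(x_train, y_train, 1)

# Generalization check: error on unseen input/output pairs.
test_mse = np.mean((slope * x_test + intercept - y_test) ** 2)
print(test_mse)
```

Note this only probes generalization to inputs drawn from the same distribution; a paper's claims may or may not extend beyond that.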