Saved by Ian Vanagas
How to Understand ML Papers Quickly
ML models are formed by combining biases and data. Sometimes the biases are strong; other times they are weak. To make a model generalize better, you need to add more biases or add more unbiased data. There is no free lunch.
Eric Jang • How to Understand ML Papers Quickly
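A minimal, illustrative sketch of this trade-off (my example, not from the post): a zero-bias memorizer nails the training points but says nothing about unseen inputs, while a strongly biased linear model generalizes — exactly when its bias happens to match the data.

```python
def fit_line(xs, ys):
    # Closed-form least squares for y = a*x + b: a strong inductive bias
    # (the world is assumed linear), so few data points suffice.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs, ys = [0.0, 1.0, 2.0], [0.1, 2.0, 4.1]   # toy data, roughly y = 2x
a, b = fit_line(xs, ys)
memorizer = dict(zip(xs, ys))               # zero-bias "model": a lookup table

print(a * 3.0 + b)          # the linear bias extrapolates to the unseen x = 3.0
print(memorizer.get(3.0))   # the memorizer has nothing to say: None
```

Adding more data helps the memorizer only at the exact points it has seen; adding the linear bias helps everywhere, but only because the toy data really is near-linear.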
By thinking about an ML problem first as a set of inputs and desired outputs, you can reason whether the input is even sufficient to predict the output.
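One way to make this concrete (a sketch with hypothetical names, not from the post) is to write the problem down as a plain function type before thinking about any learning method:

```python
from typing import Callable, List

# Hypothetical task: predict tomorrow's mean temperature from the last 7 days.
DailyTemps = List[float]   # last 7 days, degrees C
Forecast = float           # tomorrow, degrees C

# The problem stated method-agnostically: any model is just this function type.
Model = Callable[[DailyTemps], Forecast]

def persistence(history: DailyTemps) -> Forecast:
    # Trivial baseline: tomorrow looks like today. If even this signature feels
    # underdetermined (is a week of local temperatures enough to fix tomorrow's?),
    # the input may simply be insufficient to predict the output.
    return history[-1]
```

Framing the task as `Model` first also invites the cross-field question: classical time-series forecasting attacks exactly this signature under different terminology.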
Thinking about inputs and outputs to the system in a method-agnostic way lets you take a step back from the algorithmic jargon and consider whether other fields have developed methods that might work here using different terminology.
5) Are the claims in the paper falsifiable?
4) Once trained, what is the model able to generalize to, with regard to input/output pairs it hasn’t seen before?
3) What loss supervises the output predictions? What assumptions about the world does this particular objective make?
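As one concrete instance of a loss encoding a world assumption (a standard fact, not specific to the post): minimizing mean squared error is equivalent, up to constants, to maximum likelihood under Gaussian output noise with fixed variance.

```python
import math

def mse(preds, targets):
    # Mean squared error over a batch of scalar predictions.
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def gaussian_nll(preds, targets, sigma=1.0):
    # Negative log-likelihood assuming each target t ~ Normal(p, sigma^2).
    n = len(preds)
    const = n * math.log(sigma * math.sqrt(2.0 * math.pi))
    return const + sum((p - t) ** 2 for p, t in zip(preds, targets)) / (2.0 * sigma ** 2)

# For fixed sigma, gaussian_nll = const + (n / (2 * sigma^2)) * mse: the two
# objectives share a minimizer, so choosing MSE is implicitly assuming the
# world produces Gaussian noise around the model's predictions.
```

Swapping the loss (e.g. to an L1 loss, whose maximum-likelihood counterpart is Laplace noise) swaps the assumption, which is exactly the question this heuristic asks you to surface.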
2) What are the outputs of the function approximator?