By thinking about an ML problem first as a set of inputs and desired outputs, you can reason about whether the input is even sufficient to predict the output.
ML models are built by combining inductive biases and data. Sometimes the biases are strong, other times they are weak. To make a model generalize better, you either add stronger biases or add more unbiased data. There is no free lunch.
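A minimal sketch of that trade-off, using a toy polynomial fit in NumPy (the degree, L2 penalty, and noisy sine data are all made up for illustration): a ridge penalty is one way to inject a stronger bias, and with only a handful of training points it tends to generalize better than the weakly biased plain least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, degree=9):
    # Polynomial feature map: a flexible, weakly biased model class.
    return np.vander(x, degree + 1, increasing=True)

def fit(X, y, l2=0.0):
    if l2 == 0.0:
        # Plain least squares: no extra assumptions beyond the feature map.
        return np.linalg.lstsq(X, y, rcond=None)[0]
    # Ridge: the L2 penalty encodes a bias toward small, smooth solutions.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ y)

def mse(w, x, y):
    return np.mean((features(x) @ w - y) ** 2)

# Tiny training set drawn from a noisy sine wave.
x_train = rng.uniform(0, 1, size=10)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

w_weak = fit(features(x_train), y_train, l2=0.0)     # weak bias: tends to overfit
w_strong = fit(features(x_train), y_train, l2=1e-3)  # stronger bias: generalizes better

print(f"test MSE, weak bias:   {mse(w_weak, x_test, y_test):.3f}")
print(f"test MSE, strong bias: {mse(w_strong, x_test, y_test):.3f}")
```

The alternative lever is the other half of the aphorism: keep the weak bias but feed it far more data, and the overfitting washes out on its own.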
Paris is not in pursuit of perfection. "Most of my shit's bad," she laughs, "I'm not trying to act like I'm some savant, like this is some fucking Picasso shit." She points to her friend, prolific YouTuber and filmmaker Casey Neistat. "He's like 'I won't put anything out there they don't think is amazing and I don't give a fuck how people receive...
Information bankruptcy is a common problem for communities. When there is too much information to absorb, members often give up on absorbing any of it at all. It’s a play on email bankruptcy, where someone ignores or deletes every email older than a certain date.
Twitter’s only conclusion can be abandonment: an overdue MySpace-ification. I am totally confident about this prediction, but that’s an easy confidence, because in the long run, we’re all MySpace-ified. The only question, then, is how many more possibilities will go unexplored? How much more time will be wasted?