Forget Privacy: You're Terrible at Targeting Anyway
This is, by the way, the dirty secret of the machine learning movement: almost everything produced by ML could have been produced, more cheaply, using a very dumb heuristic you coded up by hand, because mostly the ML is trained by feeding it examples of what humans did while following a very dumb heuristic. There's no magic here. If you use ML to teach a computer how to sort through resumes, it will recommend you interview people with male, white-sounding names, because it turns out that's what your HR department already does. If you ask it what video a person like you wants to see next, it will recommend some political propaganda crap, because 50% of the time 90% of the people do watch that next, because they can't help themselves, and that's a pretty good success rate.
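A minimal sketch of that point, not from the article: the classifier choice and the made-up "name sounds white and male" feature below are my own assumptions, but they show how a model trained on the output of a dumb heuristic simply learns the heuristic back.

```python
# Toy illustration: train a model on labels produced by a crude hand-written
# heuristic and watch it learn nothing beyond that heuristic.
import random
from sklearn.tree import DecisionTreeClassifier

random.seed(0)

def dumb_heuristic(resume):
    # The "very dumb heuristic" the HR department already follows (hypothetical).
    return 1 if resume["name_sounds_like"] == "white_male" else 0

# Historical data: features plus the label a human following the heuristic produced.
resumes = [{"name_sounds_like": random.choice(["white_male", "other"]),
            "years_experience": random.randint(0, 20)} for _ in range(1000)]
X = [[1 if r["name_sounds_like"] == "white_male" else 0, r["years_experience"]]
     for r in resumes]
y = [dumb_heuristic(r) for r in resumes]

model = DecisionTreeClassifier().fit(X, y)

# The model reproduces the heuristic almost perfectly; experience is ignored,
# because the training labels never depended on it.
print(model.predict([[1, 0], [0, 20]]))  # -> [1 0]
```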
All of this remains much less a science than an art, and AIs still work more like people than software.
Today most algorithms that recommend or suppress content act purely on the basis of inferred popularity. They look at how much time people spend engaging with a piece of content, and boost it to more people if the numbers look good. The content itself is almost purely a black box. Some algorithms try to classify content with tags like “food” or “fu…”
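A rough sketch of what “purely on the basis of inferred popularity” looks like in code; the Item fields, the score formula, and the boost function below are my own stand-ins, not anything from the excerpt.

```python
# Engagement-driven ranking: the content itself is a black box; only the numbers matter.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    impressions: int = 0
    total_engagement_seconds: float = 0.0  # watch/read time across all viewers

    @property
    def score(self) -> float:
        # Average engagement per impression -- the "numbers look good" test.
        return self.total_engagement_seconds / self.impressions if self.impressions else 0.0

def boost(candidates: list[Item], k: int = 10) -> list[Item]:
    # Boost whatever already engages people; nothing inspects what the content says.
    return sorted(candidates, key=lambda it: it.score, reverse=True)[:k]

feed = [
    Item("calm-cooking-video", impressions=1000, total_engagement_seconds=90_000),
    Item("outrage-propaganda", impressions=1000, total_engagement_seconds=240_000),
]
print([it.item_id for it in boost(feed, k=1)])  # -> ['outrage-propaganda']
```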
Meanwhile AI may make better decisions than people and steal our jobs, but computers and algorithms cannot frame. AI is brilliant at answering what it is asked; framers pose questions never before voiced. Computers work only in a world that exists; humans live in ones they imagine through framing.
A machine-learning model, trained by data, “is by definition a tool to predict the future, given that it looks like the past. . . . That’s why it’s fundamentally the wrong tool for a lot of domains, where you’re trying to design interventions and mechanisms to change the world.”