On the universal list of dream jobs, being paid to sit around and watch TV and movies all day would almost certainly rank near the top. You may have heard a while back that a few lucky people do just that for Netflix. The streaming company employs about 40 individuals around the world to create metadata tags for its content. In effect, they watch TV shows and movies and enter information about them into a spreadsheet – the amount of violence, the gender of the main characters, tone and mood, whether there is nudity or cursing, and so on.
I had the chance to chat with Mike Hastings, who runs this program as director of the enhanced content team, during my visit to Netflix’s headquarters in Los Gatos, Calif. last week. I was curious as to why the company still uses humans for this particular task. Image recognition and machine learning have come a long way, with companies such as Israel’s Anyclip effectively doing the same job, but without people.
“We’ve been doing this for about six years and humans were definitely the way at first,” says Hastings (no relation to chief executive Reed Hastings). “We’re still using humans today because it’s been the most reliable, trustworthy way of getting some of this data.”
Technology such as that used by Anyclip is great for finding something specific – typing in the word “bicycle,” for example, will turn up clips involving that particular item – but it’s not necessarily something that can help when people are simply browsing around for something to watch.
“It’s quirky and interesting technology but I haven’t yet seen an application of it that would help me find something,” he says. “I haven’t seen it make that leap from micro-analyzing to what I think of as themes and tropes and the things you think of when you want to find something.”
But that’s bound to change. Now that Netflix has a huge database of metadata to work with, it’s becoming increasingly easy for algorithms to do the jobs of human taggers.
“Once we figure out what’s similar to House of Cards, we can infer things about other programs from common tags. I think we can get there,” says Hastings. “We definitely talk about that and what it might look like and I’m excited to hopefully be here when it happens, but we’re not quite there yet.”
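The kind of inference Hastings describes – judging how similar two titles are by the tags they share – can be sketched with a simple set-overlap score. This is only an illustration under my own assumptions; the titles, tags, and scoring method here are hypothetical and not Netflix's actual data or algorithm:

```python
# Sketch of tag-based similarity: titles sharing many human-applied
# tags are treated as similar. All titles and tags are made up.

def jaccard(tags_a, tags_b):
    """Shared tags divided by total distinct tags (0.0 to 1.0)."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b)

# Hypothetical catalog: each title maps to its human-entered tags.
catalog = {
    "Political Drama A": {"political", "dark", "antihero", "drama"},
    "Political Drama B": {"political", "drama", "ensemble"},
    "Animated Kids Show": {"animated", "family", "comedy"},
}

def most_similar(title, catalog):
    """Rank every other title by tag overlap with the given one."""
    return sorted(
        ((jaccard(catalog[title], tags), other)
         for other, tags in catalog.items() if other != title),
        reverse=True,
    )

print(most_similar("Political Drama A", catalog))
```

With scores like these, a tag applied to one title can be tentatively inferred for titles that score high against it – which is roughly the leap from micro-analysis to themes that Hastings says hasn't happened yet.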
What about the errors? Whether they’re the result of human or machine tagging, many Netflix subscribers have had that all-too-familiar experience where the service has suggested TV shows or movies that are completely off-base from their interests.
Hastings is confident about the accuracy of his team’s work, so when Netflix gets blamed for offering off-base recommendations, “I kind of think, ‘Did we really?'” Problems occur when different people use the same account, rather than their own individual profiles, to watch content. Recommendations can’t help but get skewed when the grown-ups are watching horror movies while the kids are watching cartoons.
“I like the kind of feedback we get where people say, ‘Oh, I’m into understated midlife crisis movies. That’s depressing, but accurate,'” he says. “More often than not, it says more about them than it does about us.”