Some say you can’t teach an old dog new tricks, but teaching young dogs tricks is hard enough on its own. I would know: my dogs don’t listen to me at all unless I’m bribing them with treats. But what do dogs have to do with the news? Nothing, really, except that teaching a puppy a trick is a lot like teaching a machine to do something: both need very specific instructions, and there’s often a lot of trial and error. I’ve heard people say that we should let machines make decisions because they have no prior conceptions about things, as if they exist behind a veil of ignorance. Yet researchers trying to teach machines to recognize words and pictures have found them to be unreliable even at that.
Similar problems arise when we try to have machines make ethical decisions for us. A machine can’t really be taught to apply Kant’s categorical imperative or Aristotle’s golden mean, and while a machine may technically have a set of loyalties, as in the case of Tesla’s self-driving cars, it is incapable of using the Potter Box to reason through a moral decision. If machines were responsible for writing and reporting our news stories, I don’t think the response would be very positive. Sure, a computer given information about an event might be slightly more objective and “just report the facts,” but how is the computer going to get that information? Even if drones were used to collect data and essentially file reports, the computer could not actually make sense of what was happening. People would still need to write the code for the computer to follow, and in doing so they would effectively be writing the story.
Since it’s clear we can’t rely solely on machines to make the news, I don’t think it’s fair to argue that machines ought to be responsible for regulating and filtering it. A computer might see imperfect news as “fake news” and flag it, but one or two misprints don’t make an article fake. While I see the merits of having technology try to flag fake news, we can’t rely on it wholly. It’s still the reader’s responsibility to decide whether or not to read an article. If anything, it should be a mutual relationship between technology and humankind: an algorithm might flag an article, but the person must make an informed decision on their own. If they aren’t happy with what they read, it isn’t fair to blame the platform where they found it.
I agree that it sucks to feel duped into reading something by a machine (hello: clickbait), but some person somewhere made the clickbait, not the machine. People need to recognize that and take responsibility for their decisions. That said, I also don’t agree that algorithms should be in charge of deciding what we are presented with. No, I probably don’t want to read an article that celebrates a politician I find nauseating, but that doesn’t mean I shouldn’t be able to see it. If your newsfeed is purely blue or purely red, you cannot be well informed. Biased newsfeeds create bubbles of thought that leave little room for critical thinking. So please, stop blaming Facebook for the election or for spreading fake news; take some responsibility and read critically.