When you’re dealing with two billion people on a single social network, I suppose strange and unfortunate things are simply bound to happen. Things you wish you could prevent.
Suicide — and particularly suicide broadcast live — has got to be near the top of that list. There’s just something about it that’s bad for business, so Facebook has been using AI to detect suicidal posts and alert authorities when deemed appropriate.
Facebook readily admits this is not a job for its human workforce — there are simply too many users and too much information being generated every second of every day. A bit cold and morbid perhaps, but I think this is a quintessential example of automation in 2017: who needs humans to detect suicidal tendencies when AI can do it so much more efficiently?
Obviously there’s a Big Brother element here: using AI to track down and prevent (if possible) self-harm is definitely a bit 1984-creepy. But it’s not like Facebook hasn’t been monitoring your every move since you first joined; how else would it be able to tailor advertising to your very particular (and, let’s admit it, probably odd) tastes? It’s what they do. You’re the product, and if possible they will prevent you from disturbing and disrupting their other products, i.e. your circle of Facebook friends.
Humans are messy, complex creatures. Facebook, I’m sure, knows this better than anyone. While it may prefer that you keep your suicidal thoughts to yourself, it knows Facebook is always going to be an outlet of last resort for some. If you keep close enough tabs on two billion people, eventually you’ll see everything — thankfully there’s AI to see and sort through much of that mess for us.