As humans go home, Facebook and YouTube face a coronavirus crisis

Well, what a surprise!

Link to the original article

Archive of all clips:
clips.quintarelli.it
(Evernote notebook).

The artificial intelligence systems replacing human moderators simply can’t cope

Chris Stokel-Walker

Friday 20 March 2020

WIRED

The team of moderators tasked with monitoring YouTube’s content at Accenture’s Dublin headquarters got the message that their office was closing via a ping through their group chat on the evening of March 17. The spread of Covid-19, the coronavirus that has so far infected more than 200,000 people worldwide, made it too dangerous for them to keep working in such close proximity in the open-plan office. For their safety, they were being sent home – though contractual restrictions meant they couldn’t continue their work remotely, filtering through the 500 hours of footage uploaded to the platform every minute. There was a general acceptance of inevitability about the decision, explains one contractor there at the time, who asked not to be identified because they aren’t authorised to speak to the press. The contractors were told they’ll be supported financially while the office is closed.

The contractors are the secret army of workers who keep social networks relatively clean. Underpaid and overworked, with impractically high targets (a former YouTube moderator contracted through Accenture told me that their 98 per cent accuracy target was so stringent they would only be allowed a few mistakes per month), they do the grunt work of filtering through questionable content and deciding whether it’s acceptable under the rules of the websites that dominate our world. There are 10,000 of them working across Google’s products, and more than 30,000 contracted to monitor Facebook. The numbers have drastically increased in the last two years as the tech companies have faced criticism for problems in their processes.
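
To put that accuracy target in perspective, here is a minimal sketch of the error budget it implies. The 98 per cent figure comes from the article; the audit-sample size is an assumption, since quality scores of this kind are typically computed on a sampled subset of a moderator’s decisions rather than on everything they review.

```python
# Minimal sketch: the error budget implied by a fixed accuracy target.
# ACCURACY_TARGET is from the article; AUDITED_DECISIONS is a hypothetical
# figure, since the article does not say how many decisions are scored.
ACCURACY_TARGET = 0.98
AUDITED_DECISIONS = 150  # assumed audited decisions per moderator per month

allowed_mistakes = AUDITED_DECISIONS * (1 - ACCURACY_TARGET)
print(f"At {ACCURACY_TARGET:.0%} accuracy on {AUDITED_DECISIONS} audited "
      f"decisions, at most {allowed_mistakes:.0f} mistakes per month.")
# -> At 98% accuracy on 150 audited decisions, at most 3 mistakes per month.
```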

This meticulous human work is often downplayed in favour of tech firms promoting their automated monitoring systems to flag and remove inappropriate content – but we’re about to realise just how important those workers are.

It was when Harold Oldfield tried to post a couple of stories to Facebook about the United States’ reaction to the coronavirus outbreak that he realised something was wrong. They were from reputable news outlets – The Hill and Politico – and both were immediately flagged as spam. “I was amused, since I have literally written the book on regulating platforms like Facebook and have an entire chapter on the importance of fairness and transparency in content moderation,” he explains. “To say this is not how you do it is an understatement.”

He protested the classification and the posts were restored. “For me, this was minor, even amusing. But the downside is if it knocks out time-sensitive speech,” he says. “That’s always the worry. Everyone points out the bad stuff that gets through, so they make the filters overaggressive. It’s harder to see the time-sensitive good stuff that gets screened out.”

Oldfield isn’t the only one. People trying to share important, factual news stories and information about the coronavirus have seen their posts trip Facebook’s anti-spam filters, which are meant to maintain community standards. Alex Stamos, a former Facebook executive, tweeted that he thought it was “an anti-spam rule… going haywire. We might be seeing the start of the [machine learning] going nuts with less human oversight.”

It’s something Mark Zuckerberg touched on in a conference call on March 18: “Even in the most free-expression, friendly traditions like the United States, you’ve long had the precedent that you don’t allow people to yell fire in a crowded room, and that – I think it’s similar to people spreading dangerous misinformation in the time of an outbreak like this.” The conclusion – which was also reached by YouTube – was a simple one: it’s better to accidentally suppress the spread of “good” information in order to ensure “bad” information absolutely can’t gain a foothold. YouTube admitted that “users and creators may see increased video removals, including some videos that may not violate policies”.
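
Stamos’s “anti-spam rule going haywire” and Oldfield’s blocked news links both come down to where a classifier’s decision threshold sits. A minimal sketch, with invented scores and thresholds, of how making a filter more aggressive trades missed spam for wrongly blocked news:

```python
# Toy spam filter: each post gets a model score in [0, 1], higher = spammier.
# All scores and thresholds here are invented for illustration.
posts = [
    ("Politico story on the US coronavirus response", 0.35),  # legitimate
    ("The Hill story on the US coronavirus response", 0.40),  # legitimate
    ("MIRACLE CURE!!! click now",                     0.90),  # spam
    ("cheap pills, limited offer",                    0.75),  # spam
]

def blocked(threshold):
    """Return the posts a filter with this threshold would remove."""
    return [text for text, score in posts if score >= threshold]

print(blocked(0.7))  # cautious: removes only the two spam posts
print(blocked(0.3))  # "overaggressive": the genuine news stories go too
```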

“Content regulation involves making incredibly nuanced decisions – especially when the thing you’re trying to take down doesn’t have a neat definition: think ‘terrorism’,” says Frederike Kaltheuner, a tech policy fellow at Mozilla. “Automated systems can’t make these nuanced decisions, and manual flagging mechanisms are also often abused by people who engage in coordinated flagging of content or accounts they don’t like.”

Yet people were annoyed. “Mark Zuckerberg has been promoting AI as a solution for every problem to policy-makers for a long time,” says Julia Reda, a former Green MEP who has long advocated for better controls on tech firms. “It makes sense from a business point of view – developing AI requires access to vast troves of data, which gives Facebook an edge over the competition.” But she argues tech firms are ill-equipped for an absence of human oversight. “These announcements show that platform companies are well aware that upload filters are unable to distinguish legal from illegal content. The errors systematically lead to discrimination, for example because Arabic-language content is disproportionately flagged as terrorist. In situations of crisis such as now, governments and companies establish norms that would otherwise be unthinkable. We must be vigilant to ensure that they do not become the new norm.”

Facebook’s vice president of integrity, Guy Rosen, was quick to say that the initial problem was fixed. “We’ve restored all the posts that were incorrectly removed, which included posts on all topics – not just those related to Covid-19,” he tweeted on March 18. “This was an issue with an automated system that removes links to abusive websites, but incorrectly removed a lot of other posts too.” The problem was, people’s posts were still being incorrectly blocked. After he said it was all fixed, people replied to him with examples of Facebook’s algorithm wrongly flagging their posts as breaches of the site’s community standards.

“It’s like every story about Facebook in the last ten years,” says Jennifer Cobbe, a research associate and affiliated lecturer at Cambridge University who studies content moderation. “They say they’ve fixed it and it’s still broken. I don’t think anybody should have any faith in Facebook’s ability to handle this.”

This issue has been a long time coming. Back in June 2019, I spoke to a software engineer at Google working on YouTube’s algorithm. (One of their conditions of speaking was that I not name them or quote them directly.) They had found me after a story I’d published in the New York Times about the failure of the platform’s recommendation algorithm. They agreed that there was an issue with the algorithm that needed to be fixed, but were keen to stress that machine learning algorithms were improving all the time, and at a breathtaking pace. Given enough training data, almost all problems could be solved algorithmically, they concluded. But is this just tech-utopian hubris?
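
Rosen’s explanation – an automated system that removes links to abusive websites but “incorrectly removed a lot of other posts too” – hints at how easily such filters over-match. A minimal sketch of one plausible failure mode, using invented domains; the actual bug in Facebook’s system is not public:

```python
# Toy link filter: one way a domain blocklist can over-match.
# "badsite.example" and the URLs are invented; this is not Facebook's code.
from urllib.parse import urlparse

BLOCKLIST = {"badsite.example"}

def overbroad_block(url: str) -> bool:
    # Buggy: substring match flags any URL that merely mentions the domain
    return any(bad in url for bad in BLOCKLIST)

def exact_block(url: str) -> bool:
    # Safer: compare only the URL's actual host against the blocklist
    return urlparse(url).hostname in BLOCKLIST

url = "https://news.example/why-badsite.example-was-shut-down"
print(overbroad_block(url))  # True  - a legitimate news story is removed
print(exact_block(url))      # False - only real badsite.example links match
```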

“AI cannot – and will never be able to – moderate perfectly,” says Ysabel Gerrard of the University of Sheffield, who studies social media content moderation. “You can’t possibly automate something as complex as human interaction, and we’ve seen countless examples of errors in AI-based moderation, like the removal of Covid-19 news articles and posts over the past couple of days.”

And ultimately, it’s important to keep the people behind the scenes safe as the coronavirus spreads; millions of people around the world are now working from home. “It involves a huge number of people, probably all sitting in small rooms reviewing content in close proximity to each other,” says Cobbe.

Cobbe finds the move towards more automation of content moderation “potentially very concerning”. She’s worried for two reasons. First, handing power away from humans and towards technology gives social media platforms more power to decide what is and isn’t appropriate content, outside governmental oversight. The other is a more basic technological problem.

“The systems aren’t really up to the job, and aren’t really capable of replacing humans at this point,” says Cobbe. It’s well established that algorithms reflect the biases of their creators, and human moderators currently play a role in cancelling out some of the most egregious of those biases.
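
Reda’s point about Arabic-language content and Cobbe’s point about bias are measurable: one standard check is to compare a classifier’s false-positive rate across language groups. A minimal sketch with invented labels and predictions, purely to show the computation:

```python
# Minimal bias audit for a hypothetical flagging model: compare the rate at
# which benign posts are wrongly flagged, per language. All data is invented.
from collections import defaultdict

# (language, actually_violating, flagged_by_model)
decisions = [
    ("en", False, False), ("en", False, False), ("en", False, True),
    ("en", True,  True),  ("en", False, False), ("en", False, False),
    ("ar", False, True),  ("ar", False, True),  ("ar", False, False),
    ("ar", True,  True),  ("ar", False, True),  ("ar", False, False),
]

false_positives = defaultdict(int)  # benign posts wrongly flagged
benign_total = defaultdict(int)     # all benign posts seen

for lang, violating, flagged in decisions:
    if not violating:
        benign_total[lang] += 1
        false_positives[lang] += flagged

for lang in sorted(benign_total):
    rate = false_positives[lang] / benign_total[lang]
    print(f"{lang}: {rate:.0%} of benign posts wrongly flagged")
# ar: 60% of benign posts wrongly flagged
# en: 20% of benign posts wrongly flagged
```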

We’ve reached this point because of the tech industry’s fixation on profit, she says. “One of the things the tech companies try to chase above everything else is scale. They try to grow as big as they possibly can be, with as much content as they possibly can get their hands on. Once you reach a certain scale, it’s very difficult to do things with humans, because you need to hire so many people that it becomes prohibitively expensive. So they turn to AI to try and replace those humans so they can scale at the level they want.”

However, the AI is far from ready for prime time, as we’re learning now. “The tech industry has far too much faith in its own systems, and as a society we’ve put far too much stock in AI,” Cobbe says. We’re realising just how much human power lies behind the much-trumpeted AI-powered solutions, at one of the worst possible times. “We’re likely to see far greater problems develop with content moderation than we’ve seen up to now,” explains Cobbe. “And I say that in full knowledge that there have been a lot of problems even with human moderation.”
