Facebook, Twitter and Google grilled by MPs over hate speech

Image caption: Executives from Google, Facebook and Twitter (L-R) were questioned by the Home Affairs select committee

Social media giants should “do a better job” to protect users from online hate speech, MPs have said.

Executives from Facebook, Twitter and Google were asked by the Home Affairs select committee why they did not police their content more effectively, given the billions they made.

They were told they had a “terrible reputation” for dealing with problems.

The firms said they worked hard to make sure freedom of expression was protected within the law.

‘Money out of hate’

Labour MP Chuka Umunna focused his questioning on Google-owned YouTube, which he accused of making money from “videos peddling hate” on its platform.

A recent investigation by the Times found adverts appearing alongside content posted by supporters of extremist groups, earning those users around £6 per 1,000 views, as well as generating revenue for the company.

Mr Umunna said: “Your operating profit in 2016 was $30.4bn.

“Now, there are not many business activities that somebody openly would have to come and admit… that they are making money and people who use their platform are making money out of hate.

“You, as an outfit, are not working nearly hard enough to deal with this.”

Peter Barron, vice-president of communications and public affairs at Google Europe, told the committee the videos in question generated only “very small amounts” of money, but added that the firm was “working very hard in this area” to stop it happening again.

Fellow committee member David Winnick said, when he heard Mr Barron’s answer, “the thought that came into my mind was the thought of commercial prostitution that you are engaged in,” adding: “I think that is a good and apt description.”

Yvette Cooper, the committee’s chairwoman, turned her attention to Twitter.

Image caption: Yvette Cooper read out abusive tweets from a user’s account to the committee

Ms Cooper said she had personally reported a user who had tweeted a “series of racist, vile and violent attacks” against political figures such as German Chancellor Angela Merkel and London Mayor Sadiq Khan, but the user had not been removed.

Nick Pickles, head of public policy and government for Twitter in the UK, said the company acknowledged it was “not doing a good enough job” at responding to reports from users.

“We don’t communicate with the users enough when they report something, we don’t keep people updated enough and we don’t communicate back enough when we do take action,” he said.

“I am sorry to hear those reports had not been looked at. We would have expected them to have been looked at certainly by the end of today, particularly for violent threats.”

When the BBC checked the account after the committee session, it had been suspended.

BBC investigation

Ms Cooper said she found none of the responses from the executives to her questions “particularly convincing”.

She added: “We understand the challenges that you face and technology changes very fast, but you all have millions of users in the United Kingdom and you make billions of pounds from these users, [yet] you all have a terrible reputation among users for dealing swiftly with content even against your own community standards.

“Surely when you manage to have such a good reputation with advertisers for targeting content and for doing all kinds of sophisticated things with your platforms, you should be able to do a better job in order to be able to keep your users safe online and deal with this type of hate speech.”

Facebook’s Simon Milner admitted that a BBC investigation last week into sexualised pictures of children on the platform had shown the company’s reporting system “was not working”, but said it had now been fixed.

BBC News had reported 100 posts featuring sexualised images and comments about children, but 82 were deemed not to “breach community standards”.

When journalists went back to Facebook with the images that had not been taken down, the company reported them to the police and cancelled an interview, saying in a statement: “It is against the law for anyone to distribute images of child exploitation.”

Mr Milner said the report had exposed a flaw in Facebook’s content moderation process, but that the material flagged had since been reviewed and removed from the platform.
