
YouTube's failure to stop the spread of conspiracy theories related to last week's school shooting in Florida highlights a problem that has long plagued the platform: It is far better at recommending videos that appeal to users than at stanching the flow of lies.
The company for years has poured resources into better tuning its recommendation algorithm to the tastes of individual viewers. But its weakness in detecting misinformation was on stark display this week as demonstrably false videos rose to the top of YouTube's rankings.
One clip that mixed authentic news images with misleading context earned more than 200,000 views before YouTube yanked it Wednesday for breaching its rules on harassment.
The failures of this past week - which also happened on Facebook, Twitter and other social media - make clear that some of the richest, most technically sophisticated companies in the world are losing against people pushing content rife with untruths.
"I think tragically the proliferation and spread of these videos attacking the victims of the shooting in Parkland are a pretty clear indication the technology companies have a long way to go to deal with this problem," said Rep. Adam Schiff (D-Calif.), the top Democrat on the House Intelligence Committee.
YouTube has apologised for the prominence of the misleading videos, which claimed that survivors featured in news reports were "crisis actors" merely appearing to grieve for political gain.
YouTube removed several videos and said the people who posted them outsmarted the platform's safeguards by using portions of real news reports about the Parkland, Florida, shooting as the basis for their conspiracy videos. These fake reports often contain photos, videos and memes that repurpose authentic content.
YouTube said in a statement Thursday that its algorithm looks at...