YouTube’s algorithm still amplifies violent videos, hateful content and misinformation despite the company’s efforts to limit the reach of such videos, according to a study published this week.

The Mozilla Foundation, a software nonprofit that is outspoken on privacy issues, conducted the 10-month investigation, which found that 71 percent of all videos flagged by volunteers as disturbing had been recommended by YouTube’s algorithm.

The study, which Mozilla described as “the largest-ever crowdsourced investigation into YouTube’s algorithm,” used data volunteered by users who installed a Mozilla extension in their web browsers that tracked their YouTube usage and allowed them to report potentially problematic videos. The researchers could then go back and see whether each flagged video had been suggested by the algorithm or whether the user had found it on their own.

More than 37,000 users from 91 countries installed the extension, and the volunteers flagged 3,362 “regrettable videos” between July 2020 and May 2021. Mozilla then brought in 41 researchers from the University of Exeter to review the flagged videos and determine whether they might violate YouTube’s Community Guidelines.