YouTube says it’s accelerating its efforts to combat extremist content online with a new tack: making it harder to find.

“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” said Kent Walker, senior vice-president and general counsel of Google, in a blog post on Google.org and an op-ed in the Financial Times.

The prevalence of extremist content on YouTube became an issue again earlier this month when it was revealed that one of the three attackers in the June 3 London Bridge terror attack had been influenced by YouTube videos of Ahmad Musa Jibril, a Dearborn, Mich., cleric popular with ISIS fighters who has developed an international following in recent years. The three attackers, who were killed by police, drove a van into pedestrians on London Bridge and then got out to stab others in a nearby market, killing eight and injuring dozens.

In March, Google and YouTube faced irate advertisers, many of whom pulled their business after finding their ads playing on videos promoting terrorism and extremist content on the video service. In response, YouTube established a 10,000-view requirement for entry into its YouTube Partner Program, which lets creators earn revenue from ads running on their videos.

The tech giant also improved its use of machine learning technology to keep ads from automatically running alongside extremist or other violent content. Now it’s applying that research to train its staff, and it’s rolling out other measures, including adding warnings to extremist videos and disabling comments on them, which will make it harder for such videos to gain traction.

“We will now devote more…