Images of child sex abuse can circle the globe more quickly than an airplane, and officers in Canada can spend countless months investigating a single image or video just to identify the country where it was taken. Until now.
"The problem is that it's very, very easy to set up websites and webpages to take this content and put it on. If police or a service provider shuts it down, then they create a new one and put it up again," said Richard Frank, an assistant professor at Simon Fraser University's School of Criminology.
"It's very easy to keep doing this."
So Frank, a computer programmer, and some of his colleagues at the International Cybercrime Research Centre at SFU developed a web-crawling program that identifies and tracks images of child sexual exploitation through their networks in cyberspace.
Starting with a database of images and websites known to the RCMP, the automated program has adapted common search-engine techniques to follow links from one site to others, gathering information on content, keywords, images and videos that portray the sexual abuse of children.
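The approach described here — start from seed pages and follow outgoing links — is, at its core, a breadth-first traversal of the web's link graph. The sketch below is purely illustrative: the URLs and in-memory pages are invented for the demo, and this is not SFU's actual code, which would fetch live pages and extract far richer metadata.

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, fetch, max_pages=100):
    """Breadth-first crawl: fetch(url) returns HTML text or None.

    Returns a dict mapping each visited URL to the links found on it,
    i.e. the portion of the link graph reachable from the seeds.
    """
    frontier = deque(seeds)
    seen = set(seeds)
    graph = {}
    while frontier and len(graph) < max_pages:
        url = frontier.popleft()
        html = fetch(url)
        if html is None:
            continue
        parser = LinkExtractor()
        parser.feed(html)
        graph[url] = parser.links
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return graph

# Demo with a tiny in-memory "web" instead of live HTTP requests.
pages = {
    "http://seed.example": '<a href="http://a.example">a</a>'
                           '<a href="http://b.example">b</a>',
    "http://a.example": '<a href="http://b.example">b</a>',
    "http://b.example": "",
}
graph = crawl(["http://seed.example"], pages.get)
print(graph)
```

A real crawler would replace `pages.get` with an HTTP fetcher and record keywords and media references alongside the links; the traversal logic stays the same.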
"The general idea has been used by Google and Microsoft and other search engines to collect data they use for searches," Frank said. "The general idea is out there. Adapting it to child exploitation has been difficult."
The program — the Child Exploitation Network Extractor — is still in development. It hasn't yet been used in any active investigations.
But a $47,000 grant from the Canadian Internet Registration Authority means development can continue.
Frank said the aim is to follow the virtual path to key websites on the child exploitation circuit.
"That will make law enforcement more efficient, so they're not going after the smaller offenders but they're going after the ones ... who host a lot of content and possibly — even better — the ones who are supplying the content so the police can get at the kids and rescue them," he said.
The program could one day spare police officers from traumatic hours of viewing photos and videos of child abuse content, Frank said.
The program has been in development for three years.
One of the hurdles has been that the program can't identify the geographical location of these sites. The CIRA grant will allow the team to add a geolocation feature and to look up registered website owners.
"I don't know to what level we can narrow down the location but we should be able to do it for a province, possibly a city," Frank said.
The SFU web-crawler was one of 28 projects awarded more than $1 million from the Canadian Internet Registration Authority, the organization that manages the .ca domain.
Michael Geist, Canada Research Chair in Internet and E-commerce Law at the University of Ottawa and a member of the authority's board, said the purpose was to invest in community projects that will enhance the Internet for Canadians. The grants are the first round of what will be an annual initiative, he said.
"This is a good fit," he said.
Follow @ByDeneMoore on Twitter