Amazon investigating Perplexity AI after accusations it scrapes websites without consent
Amazon Web Services has opened an investigation to determine whether Perplexity AI is breaking its rules, according to Wired. To be precise, Amazon's cloud division is looking into allegations that the service is running a crawler, hosted on its servers, that ignores the Robots Exclusion Protocol. Under that web standard, developers place a robots.txt file on a domain with instructions specifying which pages bots can and can't access. Complying with those instructions is voluntary, but crawlers from reputable companies have generally respected them since web developers began implementing the standard in the '90s.
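For a sense of how that voluntary compliance works in practice, here is a minimal sketch using Python's standard urllib.robotparser module; the site, user agent and URLs are hypothetical, and a crawler that wants to ignore robots.txt can simply skip this check.

```python
from urllib import robotparser

# Hypothetical example: fetch a site's robots.txt and ask whether a given
# user agent may crawl a particular page. Nothing enforces the answer;
# honoring it is up to the crawler.
parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # downloads and parses the robots.txt file

if parser.can_fetch("ExampleBot", "https://example.com/some-article"):
    print("robots.txt allows ExampleBot to fetch this page")
else:
    print("robots.txt disallows ExampleBot from fetching this page")
```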
In an earlier piece, Wired reported that it discovered a virtual machine that was bypassing its website's robots.txt instructions. That machine was hosted on an Amazon Web Services server at the IP address 44.221.181.252, an address Wired says is "certainly operated by Perplexity." It also reportedly visited other Condé Nast properties hundreds of times over the past three months to scrape their content. The Guardian, Forbes and The New York Times had also detected it visiting their publications multiple times, Wired said. To confirm whether Perplexity really was scraping its content, Wired entered headlines or short descriptions of its articles into the company's chatbot. The tool responded with results that closely paraphrased its articles "with minimal attribution."
A recent Reuters report claimed that Perplexity isn't the only AI company that's bypassing robots.txt files to gather content used to train large language models. However, Amazon's investigation seems to be focused on Perplexity AI only. An Amazon spokesperson told Wired that its customers have to comply with robots.txt instructions when crawling websites. "AWS’s terms of service prohibit customers from using our services for any illegal activity, and our customers are responsible for complying with our terms and all applicable laws," they said.
Perplexity spokesperson Sara Platnick told Wired that the company has already responded to Amazon's inquiries and denied that its crawlers are bypassing the Robots Exclusion Protocol. "Our PerplexityBot — which runs on AWS — respects robots.txt, and we confirmed that Perplexity-controlled services are not crawling in any way that violates AWS Terms of Service," she said. Platnick acknowledged, however, that PerplexityBot will ignore robots.txt when a user includes a specific URL in their chatbot query.
Aravind Srinivas, the CEO of Perplexity, also previously denied that his company is "ignoring the Robot Exclusions Protocol and then lying about it." Srinivas did admit to Fast Company that Perplexity uses third-party web crawlers on top of its own, and that the bot Wired identified was one of them.