How to Prevent Partial robots.txt Matches from Blocking a Path That Should Be Crawled
(Doc ID 2888152.1)
Last updated on AUGUST 11, 2022
Applies to:
Oracle Commerce Cloud Service - Version 22.2.10 and later
Information in this document applies to any platform.
Goal
Allow robots to crawl paths that contain a word matched by a Disallow rule.
CUSTOMER'S QUESTION:
----------------------
A URL path containing a brand name is not being crawled by robots; it is being blocked by a partial match in robots.txt. How can we prevent this?
Is there a way for robots.txt rules to be based on exact-match rather than "contains" logic, so that crawlers can reach the brand-name paths currently being blocked?
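For background on why this happens: under the Robots Exclusion Protocol (RFC 9309), a Disallow rule matches any URL path that begins with the rule's value, so a short rule can unintentionally block longer paths that merely start with the same characters. Major crawlers such as Googlebot additionally support the `$` end-of-URL anchor and resolve conflicts in favor of the most specific (longest) matching rule. A minimal sketch of both workarounds, using hypothetical paths `/brand` and `/brandname/` (not taken from this document):

```
User-agent: *

# Prefix matching: "Disallow: /brand" would also block /brandname/...
# Workaround 1: anchor the rule to the end of the URL with $,
# so only the exact path /brand is blocked.
Disallow: /brand$

# Workaround 2: keep the broad Disallow but add a more specific
# Allow; the longest matching rule wins for supporting crawlers.
Disallow: /brand
Allow: /brandname/
```

Note that `$` and `Allow` precedence are crawler-dependent extensions; verify behavior against the specific bots you need to support.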
Solution
To view the full solution, sign in with your My Oracle Support account.