OHS must not contain any robots.txt files.

From Oracle HTTP Server 12.1.3 Security Technical Implementation Guide

Part of SRG-APP-000516-WSR-000174

Associated with: CCI-000366

SV-79181r1_rule OHS must not contain any robots.txt files.

Vulnerability discussion

Search engines are constantly at work on the Internet. They are augmented by agents, often referred to as spiders or bots, that endeavor to capture and catalog web site content. In turn, these search engines make the content they obtain and catalog available to any public web user. To request that a well-behaved search engine not crawl and catalog a server, the web server may contain a file called robots.txt for each web site hosted. This file lists the directories and files that the web server SA does not want crawled or cataloged, but it can also be used by an attacker or a poorly coded search engine as a directory and file index to the site. This information can reduce the time an attacker spends searching and traversing the web site for files of interest. If information on hosted web sites needs to be protected from search engines and public view, other methods must be used.
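As an illustration of why this is an information-disclosure risk, consider a hypothetical robots.txt placed in a site's document root (the paths below are invented for the example and are not taken from the STIG):

    User-agent: *
    Disallow: /admin/
    Disallow: /backup/
    Disallow: /internal-reports/

A well-behaved crawler will skip these paths, but the same file hands an attacker a ready-made list of the directories the administrator considers sensitive.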

Check content

1. Open $DOMAIN_HOME/config/fmwconfig/components/OHS/<componentName>/httpd.conf and every .conf file (e.g., ssl.conf) included in it that contains a "<VirtualHost>" directive with an editor.
2. Search for the "DocumentRoot" directive at the OHS server and virtual host configuration scopes (see the illustrative excerpt below).
3. If the directive value specifies a directory containing a robots.txt file, this is a finding.
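For reference, a minimal sketch of the two configuration scopes referred to in step 2; the directory paths and host name are hypothetical, and an actual OHS httpd.conf will differ:

    # Server-level scope: check this directory for a robots.txt file
    DocumentRoot "/u01/app/oracle/ohs/htdocs"

    <VirtualHost *:443>
        ServerName www.example.com
        # Virtual host scope: check this directory for a robots.txt file as well
        DocumentRoot "/u01/app/oracle/ohs/htdocs_ssl"
    </VirtualHost>

If either directory, or any other directory named by a "DocumentRoot" directive, contains a robots.txt file, step 3 results in a finding.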

Fix text

1. Open $DOMAIN_HOME/config/fmwconfig/components/OHS/<componentName>/httpd.conf and every .conf file (e.g., ssl.conf) included in it that contains a "<VirtualHost>" directive with an editor.
2. Search for the "DocumentRoot" directive at the OHS server and virtual host configuration scopes.
3. Remove any robots.txt files from the directories specified in the "DocumentRoot" directives.
