NAME

RobotHost — specify hostnames of visiting clients that will be classified as crawler robots (search engines)

SYNOPSIS

hostname_glob [, hostname_glob, ... ]

DESCRIPTION

The RobotHost directive defines a list of hostnames that will be classified as crawler robots (search engines), and causes Interchange to alter its behavior to improve the chance of Interchange-served content being crawled and indexed.

Note that this directive (and all other work Interchange does to identify robots) serves only to improve the way Interchange pages are indexed, and to reduce server overhead for clients that don't require the full attention humans do (for example, session information is not kept for spider bots). Using it to tailor actual page content to visiting crawlers earns no extra points, and may in fact be detected by the search engine and penalized.

The directive accepts a wildcard list with glob-like semantics: * matches any number of characters, while ? matches a single character.
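
As a rough illustration of these matching semantics (a minimal sketch, not Interchange code; the glob_to_regex helper below is invented for the example), a hostname glob can be translated into the kind of anchored, case-insensitive regular expression shown in the SOURCE section:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Hypothetical sketch, not the Interchange implementation: translate
  # one hostname glob into an anchored, case-insensitive regex.
  sub glob_to_regex {
      my ($glob) = @_;
      my $re = quotemeta($glob);   # escape regex metacharacters
      $re =~ s/\\\*/.*/g;          # * matches any number of characters
      $re =~ s/\\\?/./g;           # ? matches a single character
      return qr/^$re$/i;
  }

  my $pattern = glob_to_regex('*.googlebot.com');
  print "robot\n" if 'crawl-66-249-66-1.googlebot.com' =~ $pattern;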

DIRECTIVE TYPE AND DEFAULT VALUE

Global directive

The default value is empty, so no hostnames are classified as robot hosts unless the directive is set.

EXAMPLES

Example: Defining RobotHost

RobotHost <<EOR
  *.crawler*.com,     *.excite.com,           *.googlebot.com,
  *.infoseek.com,     *.inktomi.com,          *.inktomisearch.com,
  *.lycos.com,        *.pa-x.dec.com,         add-url.altavista.com,
  westinghouse-rsl-com-usa.NorthRoyalton.cw.net,
EOR

NOTES

For more details regarding web spiders/bots and Interchange, see the robot glossary entry.

AVAILABILITY

RobotHost is available in Interchange versions:

4.6.0-5.9.0 (git-head)

SOURCE

Interchange 5.9.0:

Source: lib/Vend/Config.pm
Line 490

['RobotHost',     'list_wildcard_full', ''],

Source: lib/Vend/Config.pm
Lines 3859-3863

sub parse_list_wildcard_full {
    my $value = get_wildcard_list(@_,1);
    return '' unless length($value);
    return qr/^($value)$/i;
}
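
For illustration, the compiled pattern returned by parse_list_wildcard_full is then matched against the visiting client's hostname. The sketch below is hypothetical (the variable names are invented for the example), but shows the equivalent test for two entries from the example above:

  # Illustrative only: the kind of pattern parse_list_wildcard_full
  # would return for two entries, applied to a visiting hostname.
  my $robot_host_re = qr/^(.*\.googlebot\.com|add-url\.altavista\.com)$/i;
  my $remote_host   = 'crawl-66-249-66-1.googlebot.com';
  print "classified as robot\n" if $remote_host =~ $robot_host_re;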

AUTHORS

Interchange Development Group

SEE ALSO

RobotIP(7ic), RobotUA(7ic)
