If you are reading a text-only version of this FAQ, you may find numbers
enclosed in brackets (such as "[12]"). These refer to the list of
reference URLs to be found at the end of the document. These references
do not appear, and are not needed, for the hypertext version.
-
"Why can't I ...? Why won't ... work?" What to
do in case of problems
If you are having trouble with your Apache server software, you should
take the following steps:
- Check the errorlog!
Apache tries to be helpful when it encounters a problem. In many
cases, it will provide some details by writing one or more messages to
the server error log. Sometimes this is enough for you to diagnose
and fix the problem yourself (such as file permissions or the like).
The default location of the error log is
/usr/local/apache/logs/error_log, but see the
ErrorLog
directive in your config files for the location on your server.
- Check the
FAQ!
The latest version of the Apache Frequently-Asked Questions list can
always be found at the main Apache web site.
- Check the Apache bug database
Most problems that get reported to The Apache Group are recorded in
the
bug database.
Please check the existing reports, open
and closed, before adding one. If you find
that your issue has already been reported, please don't add
a "me, too" report. If the original report isn't closed
yet, we suggest that you check it periodically. You might also
consider contacting the original submitter, because there may be an
email exchange going on about the issue that isn't getting recorded
in the database.
- Ask in the comp.infosystems.www.servers.unix
USENET newsgroup
A lot of common problems never make it to the bug database because
there's already high Q&A traffic about them in the
comp.infosystems.www.servers.unix
newsgroup. Many Apache users, and some of the developers, can be
found roaming its virtual halls, so it is suggested that you seek
wisdom there. The chances are good that you'll get a faster answer
there than from the bug database, even if you don't see
your question already posted.
- If all else fails, report the problem in the bug
database
If you've gone through those steps above that are appropriate and
have obtained no relief, then please do let The Apache
Group know about the problem by
logging a bug report.
If your problem involves the server crashing and generating a core
dump, please include a backtrace (if possible). As an example,
# cd ServerRoot
# dbx httpd core
(dbx) where
(Substitute the appropriate locations for your
ServerRoot and your httpd and
core files. You may have to use gdb
instead of dbx.)
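With gdb the equivalent sequence would be something like the following
(again substituting your own locations):
# cd ServerRoot
# gdb httpd core
(gdb) where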
-
How compatible is Apache with my existing NCSA 1.3
setup?
Apache attempts to offer all the features and configuration options
of NCSA httpd 1.3, as well as many of the additional features found in
NCSA httpd 1.4 and NCSA httpd 1.5.
NCSA httpd appears to be moving toward adding experimental features
which are not generally required at the moment. Some of the experiments
will succeed while others will inevitably be dropped. The Apache
philosophy is to add what's needed as and when it is needed.
Friendly interaction between Apache and NCSA developers should ensure
that fundamental feature enhancements stay consistent between the two
servers for the foreseeable future.
-
How do I enable CGI execution in directories other than
the ScriptAlias?
Apache recognizes all files in a directory named as a
ScriptAlias
as being eligible for execution rather than processing as normal
documents. This applies regardless of the file name, so scripts in a
ScriptAlias directory don't need to be named
"*.cgi" or "*.pl" or
whatever. In other words, all files in a ScriptAlias
directory are scripts, as far as Apache is concerned.
To persuade Apache to execute scripts in other locations, such as in
directories where normal documents may also live, you must tell it how
to recognize them - and also that it's okay to execute them. For
this, you need to use something like the
AddHandler
directive.
- In an appropriate section of your server configuration files, add
a line such as
AddHandler cgi-script .cgi
The server will then recognize that all files in that location (and
its logical descendants) that end in ".cgi"
are script files, not documents.
- Make sure that the directory location is covered by an
Options
declaration that includes the ExecCGI option.
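As a rough sketch combining both steps (the directory path is only an
example), the relevant piece of server configuration might look like this:
<Directory /usr/local/apache/htdocs/cgi-enabled>
# allow execution in addition to any options already in effect
Options +ExecCGI
# treat files ending in .cgi as CGI scripts
AddHandler cgi-script .cgi
</Directory>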
In some situations, you might not want to actually
allow all files named "*.cgi" to be executable.
Perhaps all you want is to enable a particular file in a normal directory to
be executable. This can alternatively be accomplished
via mod_rewrite
using the following steps:
- Locally add to the corresponding .htaccess file a ruleset
similar to this one:
RewriteEngine on
RewriteBase /~foo/bar/
RewriteRule ^quux\.cgi$ - [T=application/x-httpd-cgi]
- Make sure that the directory location is covered by an
Options
declaration that includes the ExecCGI and
FollowSymLinks options.
-
What does it mean when my CGIs fail with
"Premature end of script headers"?
It means just what it says: the server was expecting a complete set of
HTTP headers (one or more followed by a blank line), and didn't get
them.
The most common cause of this problem is the script dying before
sending the complete set of headers, or possibly any at all, to the
server. To see if this is the case, try running the script standalone
from an interactive session, rather than as a script under the server.
If you get error messages, this is almost certainly the cause of the
"premature end of script headers" message.
The second most common cause of this (aside from people not
outputting the required headers at all) is a result of an interaction
with Perl's output buffering. To make Perl flush its buffers
after each output statement, insert the following statements around
the print
or write
statements that send your
HTTP headers:
{
local ($oldbar) = $|;
$cfh = select (STDOUT);
$| = 1;
#
# print your HTTP headers here
#
$| = $oldbar;
select ($cfh);
}
This is generally only necessary when you are calling external
programs from your script that send output to stdout, or if there will
be a long delay between the time the headers are sent and the actual
content starts being emitted. To maximize performance, you should
turn buffer-flushing back off (with $| = 0
or the
equivalent) after the statements that send the headers, as displayed
above.
If your script isn't written in Perl, do the equivalent thing for
whatever language you are using (e.g., for C, call
fflush()
after writing the headers).
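Alternatively, a commonly used (if slightly less efficient) technique, not
specific to Apache, is simply to enable autoflush for the whole script near
the top:
#!/usr/bin/perl
$| = 1;   # flush STDOUT after every print
print "Content-type: text/html\r\n\r\n";
print "<html><body>Hello</body></html>\n";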
-
How do I enable SSI (parsed HTML)?
SSI (an acronym for Server-Side Include) directives allow static HTML
documents to be enhanced at run-time (e.g., when delivered to
a client by Apache). The format of SSI directives is covered
in the mod_include manual;
suffice it to say that Apache supports not only SSI but
xSSI (eXtended SSI) directives.
Processing a document at run-time is called parsing it; hence
the term "parsed HTML" sometimes used for documents that
contain SSI instructions. Parsing tends to be extremely
resource-consumptive, and is not enabled by default. It can also
interfere with the cacheability of your documents, which can put a
further load on your server. (See the
next question for more information about this.)
To enable SSI processing, you need to make sure that mod_include is
compiled into your server (it is part of the default configuration),
that the Includes option is enabled for the locations concerned, and
that Apache knows which documents to parse (typically by associating
the server-parsed handler with a particular file extension).
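A minimal configuration sketch, assuming the conventional .shtml extension
for parsed files, might look like this:
# allow SSI processing for the location concerned
Options +Includes
# treat .shtml files as parsed HTML
AddType text/html .shtml
AddHandler server-parsed .shtml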
For additional information, see the Apache Week article on
Using Server Side Includes.
-
Why don't my parsed files get cached?
Since the server is performing run-time processing of your SSI
directives, which may change the content shipped to the client, it
can't know at the time it starts parsing what the final size of the
result will be, or whether the parsed result will always be the same.
This means that it can't generate Content-Length or
Last-Modified headers. Caches commonly work by comparing
the Last-Modified of what's in the cache with that being
delivered by the server. Since the server isn't sending that header
for a parsed document, whatever's doing the caching can't tell whether
the document has changed or not - and so fetches it again to be on the
safe side.
You can work around this in some cases by causing an
Expires header to be generated. (See the
mod_expires
documentation for more details.) Another possibility is to use the
XBitHack Full
mechanism, which tells Apache to send (under certain circumstances
detailed in the XBitHack directive description) a
Last-Modified header based upon the last modification
time of the file being parsed. Note that this may actually be lying
to the client if the parsed file doesn't change but the SSI-inserted
content does; if the included content changes often, this can result
in stale copies being cached.
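As a rough sketch of the mod_expires approach (the one-hour lifetime is
purely illustrative):
# send an Expires header for HTML, one hour after each access
ExpiresActive On
ExpiresByType text/html A3600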
-
How can I have my script output parsed?
So you want to include SSI directives in the output from your CGI
script, but can't figure out how to do it?
The short answer is "you can't." This is potentially
a security liability and, more importantly, it can not be cleanly
implemented under the current server API. The best workaround
is for your script itself to do what the SSIs would be doing.
After all, it's generating the rest of the content.
This is a feature The Apache Group hopes to add in the next major
release after 1.3.
-
Does or will Apache act as a Proxy server?
Apache version 1.1 and above comes with a
proxy module.
If compiled in, this will make Apache act as a caching-proxy server.
-
What are "multiviews"?
"Multiviews" is the general name given to the Apache
server's ability to provide language-specific document variants in
response to a request. This is documented quite thoroughly in the
content negotiation
description page. In addition, Apache Week carried an
article on this subject entitled
"Content Negotiation Explained".
-
Why can't I run more than <n>
virtual hosts?
You are probably running into resource limitations in your
operating system. The most common limitation is the
per-process limit on file descriptors,
which is almost always the cause of problems seen when adding
virtual hosts. Apache often does not give an intuitive error
message because it is normally some library routine (such as
gethostbyname()
) which needs file descriptors and
doesn't complain intelligibly when it can't get them.
Each log file requires a file descriptor, which means that if you are
using separate access and error logs for each virtual host, each
virtual host needs two file descriptors. Each
Listen
directive also needs a file descriptor.
Typical values for <n> that we've seen are in
the neighborhood of 128 or 250. When the server bumps into the file
descriptor limit, it may dump core with a SIGSEGV, it might just
hang, or it may limp along and you'll see (possibly meaningful) errors
in the error log. One common problem that occurs when you run into
a file descriptor limit is that CGI scripts stop being executed
properly.
As to what you can do about this:
- Reduce the number of
Listen
directives. If there are no other servers running on the machine
on the same port then you normally don't
need any Listen directives at all. By default Apache listens to
all addresses on port 80.
- Reduce the number of log files. You can use
mod_log_config
to log all requests to a single log file while including the name
of the virtual host in the log file. You can then write a
script to split the logfile into separate files later if
necessary. Such a script is provided with the Apache 1.3 distribution
in the src/support/split-logfile file (see the example after this list).
- Increase the number of file descriptors available to the server
(see your system's documentation on the
limit
or
ulimit
commands). For some systems, information on
how to do this is available in the
performance hints page. There is a specific
note for FreeBSD below.
- "Don't do that" - try to run with fewer virtual hosts
- Spread your operation across multiple server processes (using
Listen
for example, but see the first point) and/or ports.
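As a sketch of the single-log-file approach from the second point, recent
versions of mod_log_config can record the virtual host name with the %v
token (the format string here is just one possibility):
CustomLog logs/access_log "%v %h %l %u %t \"%r\" %s %b"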
Since this is an operating-system limitation, there's not much else
available in the way of solutions.
As of 1.2.1 we have made attempts to work around various limitations
involving running with many descriptors.
More information is available.
-
Can I increase FD_SETSIZE on FreeBSD?
On versions of FreeBSD before 3.0, the FD_SETSIZE define
defaults to 256. This means that you will have trouble usefully using
more than 256 file descriptors in Apache. This can be increased, but
doing so can be tricky.
If you are using a version prior to 2.2, you need to recompile your
kernel with a larger FD_SETSIZE. This can be done by adding a
line such as:
options FD_SETSIZE nnn
to your kernel config file. Starting at version 2.2, this is no
longer necessary.
If you are using a version of 2.1-stable from after 1997/03/10 or
2.2 or 3.0-current from before 1997/06/28, there is a limit in
the resolver library that prevents it from using more file descriptors
than what FD_SETSIZE is set to when libc is compiled. To
increase this, you have to recompile libc with a higher
FD_SETSIZE.
In FreeBSD 3.0, the default FD_SETSIZE has been increased to
1024 and the above limitation in the resolver library
has been removed.
After you deal with the appropriate changes above, you can increase
the setting of FD_SETSIZE at Apache compilation time
by adding "-DFD_SETSIZE=nnn" to the
EXTRA_CFLAGS line in your Configuration
file.
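For example, to build with room for 1024 descriptors (the value is only
illustrative):
EXTRA_CFLAGS=-DFD_SETSIZE=1024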
-
Why do I keep getting "Method Not Allowed" for
form POST requests?
This is almost always due to Apache not being configured to treat the
file you are trying to POST to as a CGI script. You can not POST
to a normal HTML file; the operation has no meaning. See the FAQ
entry on CGIs outside ScriptAliased
directories for details on how to configure Apache to treat the
file in question as a CGI.
-
Can I use my /etc/passwd file
for Web page authentication?
Yes, you can - but it's a very bad idea. Here are
some of the reasons:
- The Web technology provides no governors on how often or how
rapidly password (authentication failure) retries can be made. That
means that someone can hammer away at your system's
root password using the Web, using a dictionary or
similar mass attack, just as fast as the wire and your server can
handle the requests. Most operating systems these days include
attack detection (such as n failed passwords for the same
account within m seconds) and evasion (breaking the
connection, disabling the account under attack, disabling
all logins from that source, et cetera), but the
Web does not.
- An account under attack isn't notified (unless the server is
heavily modified); there's no "You have 19483 login
failures" message when the legitimate owner logs in.
- Without an exhaustive and error-prone examination of the server
logs, you can't tell whether an account has been compromised.
Detecting that an attack has occurred, or is in progress, is fairly
obvious, though - if you look at the logs.
- Web authentication passwords (at least for Basic authentication)
generally fly across the wire, and through intermediate proxy
systems, in what amounts to plain text. "O'er the net we
go/Caching all the way;/O what fun it is to surf/Giving my password
away!"
- Since HTTP is stateless, information about the authentication is
transmitted each and every time a request is made to the
server. Essentially, the client caches it after the first
successful access, and transmits it without asking for all
subsequent requests to the same server.
- It's relatively trivial for someone on your system to put up a
page that will steal the cached password from a client's cache
without them knowing. Can you say "password grabber"?
If you still want to do this in light of the above disadvantages, the
method is left as an exercise for the reader. It'll void your Apache
warranty, though, and you'll lose all accumulated UNIX guru points.
-
Why doesn't my
ErrorDocument 401
work?
You need to use it with a URL in the form
"/foo/bar" and not one with a method and
hostname such as "http://host/foo/bar". See the
ErrorDocument
documentation for details. This was incorrectly documented in the past.
-
How can I use
ErrorDocument
and SSI to simplify customized error messages?
Have a look at this document.
It shows in example form how you can use a combination of XSSI and
negotiation to tailor a set of ErrorDocuments to your
personal taste, and return different internationalized error
responses based on the client's native language.
-
Why do I get "setgid: Invalid
argument" at startup?
Your
Group
directive (probably in conf/httpd.conf) needs to name a
group that actually exists in the /etc/group file (or
your system's equivalent).
-
Why does Apache send a cookie on every response?
Apache does not automatically send a cookie on every
response, unless you have re-compiled it with the
mod_cookies
module.
This module was distributed with Apache prior to 1.2.
This module may help track users, and uses cookies to do this. If
you are not using the data generated by mod_cookies, do
not compile it into Apache. Note that in 1.2 this module was renamed
to the more correct name
mod_usertrack,
and cookies
have to be specifically enabled with the
CookieTracking
directive.
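If you do want the tracking behaviour, a minimal sketch for 1.2 and later
(the log file name is just an example) would be to enable the cookies and
log them via the note that mod_usertrack sets:
CookieTracking on
CustomLog logs/clickstream "%{cookie}n %r %t"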
-
Why don't my cookies work? I even compiled in
mod_cookies!
Firstly, you do not need to compile in
mod_cookies in order for your scripts to work (see the
previous question
for more about mod_cookies). Apache passes on your
Set-Cookie header fine, with or without this module. If
cookies do not work, it will be because your script does not work
properly, or because your browser does not use cookies or is not set up to
accept them.
-
Why do my Java app[let]s give me plain text when I request
an URL from an Apache server?
As of version 1.2, Apache is an HTTP/1.1 (HyperText Transfer Protocol
version 1.1) server. This fact is reflected in the protocol version
that's included in the response headers sent to a client when
processing a request. Unfortunately, low-level Web access classes
included in the Java Development Kit (JDK) version 1.0.2 expect to see
the version string "HTTP/1.0" and do not correctly interpret
the "HTTP/1.1" value Apache is sending (this part of the
response is a declaration of what the server can do rather than a
declaration of the dialect of the response). The result
is that the JDK methods do not correctly parse the headers, and
include them with the document content by mistake.
This is definitely a bug in the JDK 1.0.2 foundation classes from Sun,
and it has been fixed in version 1.1. However, the classes in
question are part of the virtual machine environment, which means
they're part of the Web browser (if Java-enabled) or the Java
environment on the client system - so even if you develop
your classes with a recent JDK, the eventual users might
encounter the problem.
The classes involved are replaceable by vendors implementing the
Java virtual machine environment, and so even those that are based
upon the 1.0.2 version may not have this problem.
In the meantime, a workaround is to tell
Apache to "fake" an HTTP/1.0 response to requests that come
from the JDK methods; this can be done by including a line such as the
following in your server configuration files:
BrowserMatch Java1.0 force-response-1.0
BrowserMatch JDK/1.0 force-response-1.0
More information about this issue can be found in the
Java and HTTP/1.1
page at the Apache web site.
-
Why can't I publish to my Apache server using PUT on
Netscape Gold and other programs?
Because you need to install and configure a script to handle
the uploaded files. This script is often called a "PUT" handler.
There are several available, but they may have security problems.
Using FTP uploads may be easier and more secure, at least for now.
For more information, see the Apache Week article
Publishing Pages with PUT.
-
Why isn't FastCGI included with Apache any more?
The simple answer is that it was becoming too difficult to keep the
version being included with Apache synchronized with the master copy
at the
FastCGI web site. When a new version of Apache was released, the
version of the FastCGI module included with it would soon be out of date.
You can still obtain the FastCGI module for Apache from the master
FastCGI web site.
-
Why am I getting "httpd: could not set socket
option TCP_NODELAY" in my error log?
This message almost always indicates that the client disconnected
before Apache reached the point of calling setsockopt()
for the connection. It shouldn't occur for more than about 1% of the
requests your server handles, and it's advisory only in any case.
-
Why am I getting "connection reset by
peer" in my error log?
This is a normal message and nothing about which to be alarmed. It simply
means that the client canceled the connection before it had been
completely set up - such as by the end-user pressing the "Stop"
button. People's patience being what it is, sites with response-time
problems or slow network links may experience this more than
high-capacity ones or those with large pipes to the network.
-
How can I get my script's output without Apache buffering
it? Why doesn't my server push work?
In order to improve network performance, Apache buffers script output
into relatively large chunks. If you have a script that sends
information in bursts (e.g., as partial-done messages in a multi-commit
database transaction or any type of server push), the client will
not necessarily get the output as the script is generating it.
To avoid this, Apache recognizes scripts whose names begin with
"nph-" as non-parsed-header scripts.
That is, Apache won't buffer their output, but connect it directly to
the socket going back to the client.
While this will probably do what you want, there are some
disadvantages to it:
- YOU (the script) are responsible for generating
ALL of the HTTP headers, and no longer
just the "Content-type" or
"Location" headers
- Unless your script generates its output carefully, you will see a
performance penalty as excessive numbers of packets go back and forth
As an example of how you might handle the former (in a Perl script):
if ($0 =~ m:^(.*/)*nph-[^/]*$:) {
$HTTP_headers =
"HTTP/1.1 200 OK\015\012";
$HTTP_headers .=
"Connection: close\015\012";
print $HTTP_headers;
}
and then follow with your normal non-nph headers.
Note that in version 1.3, all CGI scripts will be unbuffered
so the only difference between nph scripts and normal scripts is
that nph scripts require the full HTTP headers to be sent.
-
Why do I get complaints about redefinition of
"struct iovec" when
compiling under Linux?
This is a conflict between your C library includes and your kernel
includes. You need to make sure that the versions of both are matched
properly. There are two workarounds, either one will solve the problem:
- Remove the definition of struct iovec from your C
library includes. It is located in /usr/include/sys/uio.h.
Or,
- Add -DNO_WRITEV to the EXTRA_CFLAGS
line in your Configuration and reconfigure/rebuild.
This hurts performance and should only be used as a last resort.
-
The errorlog says Apache dumped core, but where's the dump
file?
In Apache version 1.2, the error log message
about dumped core includes the directory where the dump file should be
located. However, many Unixes do not allow a process that has
called setuid()
to dump core for security reasons;
the typical Apache setup has the server started as root to bind to
port 80, after which it changes UIDs to a non-privileged user to
serve requests.
Dealing with this is extremely operating system-specific, and may
require rebuilding your system kernel. Consult your operating system
documentation or vendor for more information about whether your system
does this and how to bypass it. If there is a documented way
of bypassing it, it is recommended that you bypass it only for the
httpd server process if possible.
The canonical location for Apache's core-dump files is the
ServerRoot
directory. As of Apache version 1.3, the location can be set via
the
CoreDumpDirectory
directive to a different directory. Make sure that this directory is
writable by the user the server runs as (as opposed to the user the server
is started as).
-
Why isn't restricting access by host or domain name
working correctly?
Two of the most common causes of this are:
- An error, inconsistency, or unexpected mapping in the DNS
registration
This happens frequently: your configuration restricts access to
Host.FooBar.Com, but you can't get in from that host.
The usual reason for this is that Host.FooBar.Com is
actually an alias for another name, and when Apache performs the
address-to-name lookup it's getting the real name, not
Host.FooBar.Com. You can verify this by checking the
reverse lookup yourself. The easiest way to work around it is to
specify the correct host name in your configuration.
- Inadequate checking and verification in your
configuration of Apache
If you intend to perform access checking and restriction based upon
the client's host or domain name, you really need to configure
Apache to double-check the origin information it's supplied. You do
this by adding the -DMAXIMUM_DNS clause to the
EXTRA_CFLAGS definition in your
Configuration file. For example:
EXTRA_CFLAGS=-DMAXIMUM_DNS
This will cause Apache to be very paranoid about making sure a
particular host address is really assigned to the name it
claims to be. Note that this can incur a significant
performance penalty, however, because of all the name resolution
requests being sent to a nameserver.
-
Why doesn't Apache include SSL?
SSL (Secure Socket Layer) data transport requires encryption, and many
governments have restrictions upon the import, export, and use of
encryption technology. If Apache included SSL in the base package,
its distribution would involve all sorts of legal and bureaucratic
issues, and it would no longer be freely available. Also, some of
the technology required to talk to current clients using SSL is
patented by RSA Data Security,
which restricts its use without a license.
Some SSL implementations of Apache are available, however; see the
"related projects"
page at the main Apache web site.
You can find out more about this topic in the Apache Week
article about
Apache and Secure Transactions.
-
Why do I get core dumps under HPUX using HP's ANSI
C compiler?
We have had numerous reports of Apache dumping core when compiled
with HP's ANSI C compiler using optimization. Disabling the compiler
optimization has fixed these problems.
-
How do I get Apache to send a MIDI file so the browser can
play it?
Even though the registered MIME type for MIDI files is
audio/midi, some browsers are not set up to recognize it
as such; instead, they look for audio/x-midi. There are
two things you can do to address this:
- Configure your browser to treat documents of type
audio/midi correctly. This is the type that Apache
sends by default. This may not be workable, however, if you have
many client installations to change, or if some or many of the
clients are not under your control.
- Instruct Apache to send a different Content-type
header for these files by adding the following line to your server's
configuration files:
AddType audio/x-midi .mid .midi .kar
Note that this may break browsers that do recognize the
audio/midi MIME type unless they're prepared to also
handle audio/x-midi the same way.
-
Why won't Apache compile with my system's
cc?
If the server won't compile on your system, it is probably due to one
of the following causes:
- The Configure script doesn't recognize your system
environment.
This might be either because it's completely unknown or because
the specific environment (include files, OS version, et
cetera) isn't explicitly handled. If this happens, you may
need to port the server to your OS yourself.
- Your system's C compiler is garbage.
Some operating systems include a default C compiler that is either
not ANSI C-compliant or suffers from other deficiencies. The usual
recommendation in cases like this is to acquire, install, and use
gcc.
- Your include files may be confused.
In some cases, we have found that a compiler installation or system
upgrade has left the C header files in an inconsistent state. Make
sure that your include directory tree is in sync with the compiler and
the operating system.
- Your operating system or compiler may be out of
revision.
Software vendors (including those that develop operating systems)
issue new releases for a reason; sometimes to add functionality, but
more often to fix bugs that have been discovered. Try upgrading
your compiler and/or your operating system.
The Apache Group tests the ability to build the server on many
different platforms. Unfortunately, we can't test all of the OS
platforms there are. If you have verified that none of the above
issues is the cause of your problem, and it hasn't been reported
before, please submit a
problem report.
Be sure to include complete details, such as the compiler
and OS versions and exact error messages.
-
How do I add browsers and referrers to my logs?
Apache provides a couple of different ways of doing this. The
recommended method is to compile the
mod_log_config
module into your configuration and use the
CustomLog
directive.
You can either log the additional information in files other than your
normal transfer log, or you can add them to the records already being
written. For example:
CustomLog logs/access_log "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\""
This will add the values of the User-agent: and
Referer: headers, which indicate the client and the
referring page, respectively, to the end of each line in the access
log.
You may want to check out the Apache Week article
entitled:
"Gathering Visitor Information: Customising Your
Logfiles".
-
Why do I get an error about an undefined reference to
"__inet_ntoa" or other
__inet_* symbols?
If you have installed BIND-8
then this is normally due to a conflict between your include files
and your libraries. BIND-8 installs its include files and libraries in
/usr/local/include/ and /usr/local/lib/, while
the resolver that comes with your system is probably installed in
/usr/include/ and /usr/lib/. If
your system uses the header files in /usr/local/include/
before those in /usr/include/ but you do not use the new
resolver library, then the two versions will conflict.
To resolve this, you can either make sure you use the include files
and libraries that came with your system or make sure to use the
new include files and libraries. Adding -lbind
to the
EXTRA_LDFLAGS
line in your Configuration
file, then re-running Configure, should resolve the
problem. (Apache versions 1.2.* and earlier use
EXTRA_LFLAGS
instead.)
Note: As of BIND 8.1.1, the bind libraries and files are
installed under /usr/local/bind by default, so you
should not run into this problem. Should you want to use the bind
resolvers you'll have to add the following to the respective lines:
EXTRA_CFLAGS=-I/usr/local/bind/include
EXTRA_LDFLAGS=-L/usr/local/bind/lib
EXTRA_LIBS=-lbind
-
Why does accessing directories only work when I include
the trailing "/"
(e.g., http://foo.domain.com/~user/)
but not when I omit it
(e.g., http://foo.domain.com/~user)?
When you access a directory without a trailing "/", Apache needs
to send what is called a redirect to the client to tell it to
add the trailing slash. If it did not do so, relative URLs would
not work properly. When it sends the redirect, it needs to know
the name of the server so that it can include it in the redirect.
There are two ways for Apache to find this out; either it can guess,
or you can tell it. If your DNS is configured correctly, it can
normally guess without any problems. If it is not, however, then
you need to tell it.
Add a ServerName directive
to the config file to tell it what the domain name of the server is.
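For the example host above, that would look something like:
ServerName foo.domain.com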
-
How do I set up Apache to require a username and
password to access certain documents?
There are several ways to do this; some of the more popular
ones are to use the mod_auth,
mod_auth_db, or
mod_auth_dbm modules.
For an explanation on how to implement these restrictions, see
Apache Week's
articles on
Using User Authentication
or
DBM User Authentication.
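As a minimal sketch using the plain mod_auth module (the password file
location is only an example; create it first with something like
"htpasswd -c /usr/local/apache/conf/htpasswd.users someuser"), place the
following in a .htaccess file or <Directory> section:
AuthType Basic
AuthName ByPassword
AuthUserFile /usr/local/apache/conf/htpasswd.users
require valid-user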
-
Why is the environment variable
REMOTE_USER not set?
This variable is set and thus available in SSI or CGI scripts if and
only if the requested document was protected by access
authentication. For an explanation on how to implement these restrictions,
see
Apache Week's
articles on
Using User Authentication
or
DBM User Authentication.
Hint: When using a CGI script to receive the data of an HTML FORM,
note that protecting the document containing the FORM is not
sufficient to provide REMOTE_USER to the CGI script. You have
to protect the CGI script, too - or alternatively only the CGI script
(in which case authentication happens only after the form has been filled out).
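For example, to protect only the script itself (the script name and
password file here are hypothetical), something like this could be used:
<Files register.cgi>
AuthType Basic
AuthName Registration
AuthUserFile /usr/local/apache/conf/htpasswd.users
require valid-user
</Files>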
-
How do I set up Apache to allow access to certain
documents only if a site is either a local site or
the user supplies a password and username?
Use the Satisfy directive,
in particular the Satisfy Any
directive, to require
that only one of the access restrictions be met. For example,
adding the following configuration to a .htaccess
or server configuration file would restrict access to people who
either are accessing the site from a host under domain.com or
who can supply a valid username and password:
deny from all
allow from .domain.com
AuthType Basic
AuthUserFile /usr/local/apache/conf/htpasswd.users
AuthName special directory
require valid-user
satisfy any
See the user authentication
question and the mod_access
module for details on how the above directives work.
-
Why doesn't mod_info list any directives?
The mod_info
module allows you to use a Web browser to see how your server is
configured. Among the information it displays is the list of modules and
their configuration directives. The "current" values for
the directives are not necessarily those of the running server; they
are extracted from the configuration files themselves at the time of
the request. If the files have been changed since the server was last
reloaded, the display will not match the values actively in use.
If the files and the path to the files are not readable by the user the
server runs as (see the
User
directive), then mod_info cannot read them in order to
list their values. An entry will be made in the error log in
this event, however.
-
When I run it under Linux I get "shmget:
function not found", what should I do?
Your kernel has been built without SysV IPC support. You will have to
rebuild the kernel with that support enabled (it's under the
"General Setup" submenu). Documentation for
kernel building is beyond the scope of this FAQ; you should consult
the
Kernel HOWTO,
or the documentation provided with your distribution, or a
Linux newsgroup/mailing list.
As a last-resort workaround, you can
comment out the #define USE_SHMGET_SCOREBOARD
definition in the
LINUX section of
src/conf.h and rebuild the server (prior to 1.3b4, simply
removing #define HAVE_SHMGET
would have sufficed).
This will produce a server which is slower and less reliable.
-
Why does my authentication give me a server error?
Under normal circumstances, the Apache access control modules will
pass unrecognized user IDs on to the next access control module in
line. Only if the user ID is recognized and the password is validated
(or not) will it give the usual success or "authentication
failed" messages.
However, if the last access module in line 'declines' the validation
request (because it has never heard of the user ID or because it is not
configured), the http_request handler will give one of
the following, confusing, errors:
- check access
- check user. No user file?
- check access. No groups file?
This does not mean that you have to add an
'AuthUserFile /dev/null' line as some magazines suggest!
The solution is to ensure that at least the last module is authoritative
and CONFIGURED. By default, mod_auth is
authoritative and will give an OK/Denied, but only if it is configured
with the proper AuthUserFile. Likewise, if a valid group
is required. (Remember that the modules are processed in the reverse
order from that in which they appear in your compile-time
Configuration file.)
A typical situation for this error is when you are using the
mod_auth_dbm, mod_auth_msql,
mod_auth_mysql, mod_auth_anon or
mod_auth_cookie modules on their own. These are by
default not authoritative, and will pass the
buck on to the (non-existent) next authentication module when the
user ID is not in their respective database. Just add the appropriate
'XXXAuthoritative yes' line to the configuration.
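For example, if mod_auth_dbm is the last (or only) authentication module in
use, making it authoritative would look like this:
AuthDBMAuthoritative on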
In general it is a good idea (though not terribly efficient) to have the
file-based mod_auth a module of last resort. This allows
you to access the web server with a few special passwords even if the
databases are down or corrupted. This does cost a
file open/seek/close for each request in a protected area.
-
Do I have to keep the (mSQL) authentication information
on the same machine?
Some organizations feel very strongly about keeping the authentication
information on a different machine than the webserver. With the
mod_auth_msql, mod_auth_mysql, and other SQL
modules connecting to (R)DBMSes, this is quite possible. Just configure
an explicit host to contact.
Be aware that with mSQL and Oracle, opening and closing these database
connections is very expensive and time consuming. You might want to
look at the code in the auth_* modules and play with the
compile time flags to alleviate this somewhat, if your RDBMS licences
allow for it.
-
Why is my mSQL authentication terribly slow?
You have probably configured the Host by specifying an FQHN,
and thus the libmsql will use a full-blown TCP/IP socket
to talk to the database, rather than a fast internal device. The
libmsql, the mSQL FAQ, and the mod_auth_msql
documentation warn you about this. If you have to use different
hosts, check out the mod_auth_msql code for
some compile time flags which might - or might not - suit you.
-
Where can I find mod_rewrite rulesets which already solve
particular URL-related problems?
There is a collection of
Practical Solutions for URL-Manipulation
where you can
find all typical solutions the author of
mod_rewrite
currently knows of. If you have more
interesting rulesets which solve particular problems not currently covered in
this document, send them to
Ralf S. Engelschall
for inclusion. The
other webmasters will thank you for avoiding the reinvention of the wheel.
-
Where can I find any published information about
URL-manipulations and mod_rewrite?
There is an article from
Ralf S. Engelschall
about URL-manipulations based on
mod_rewrite
in the "iX Multiuser Multitasking Magazin" issue #12/96. The
German (original) version
can be read online at
<http://www.heise.de/ix/artikel/9612149/>,
the English (translated) version can be found at
<http://www.heise.de/ix/artikel/E/9612149/>.
-
Why is mod_rewrite so difficult to learn and seems so
complicated?
Hmmm... there are a lot of reasons. First, mod_rewrite itself is a powerful
module which can help you in nearly every aspect of URL
rewriting, so by definition it cannot be a trivial module. To accomplish
its hard job it uses software leverage and makes use of a powerful regular
expression library by Henry Spencer, which has been an integral part of
Apache since version 1.2. And regular expressions themselves can be
difficult for newcomers, while providing the most flexible power to the
advanced hacker.
On the other hand, mod_rewrite has to work inside the Apache API environment
and needs to do some tricks to fit there. For instance, the Apache API as of
1.x really was not designed for URL rewriting at the .htaccess
level of processing, nor does its design handle multiple rewrites in
sequence. To provide these features mod_rewrite has to do some special (but
API-compliant!) handling which leads to complicated processing inside the
Apache kernel. While the user usually doesn't see anything of this
processing, it can be difficult to find problems when some of your
RewriteRules seem not to work.
-
What can I do if my RewriteRules don't work as expected?
Use "RewriteLog somefile" and
"RewriteLogLevel 9" and have a precise look at the
steps the rewriting engine performs. This is really the only, and the best,
way to debug your rewriting configuration.
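A typical debugging setup (the log location is only an example; remember to
lower the level again afterwards, since level 9 logging is slow) might be:
RewriteEngine on
RewriteLog /usr/local/apache/logs/rewrite_log
RewriteLogLevel 9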
- Why don't some of my URLs
get prefixed with DocumentRoot when using mod_rewrite?
If the rule starts with /somedir/..., make sure that no
/somedir actually exists on the filesystem if you don't want the
URL to match this directory, i.e. there must be no directory named
somedir in the root of the filesystem. If there is such a
directory, the URL will not get prefixed with DocumentRoot. This behaviour
looks ugly, but is really important for some other aspects of URL
rewriting.
-
How can I make all my URLs case-insensitive with mod_rewrite?
You can't! The reasons are: First, case translations for arbitrary-length
URLs cannot be done via regex patterns and corresponding substitutions;
one needs a per-character translation like the sed/Perl tr|..|..| feature.
Second, just
making URLs always upper or lower case will not resolve the complete problem
of case-INSENSITIVE URLs, because the URLs actually have to be rewritten to
the correct case-variant residing on the filesystem, since in later
processing Apache needs to access the file - and the Unix filesystem is
always case-SENSITIVE.
But there is a module named mod_speling.c
(yes, it is named
this way!) out there on the net. Try this one.
-
Why are RewriteRules in my VirtualHost parts ignored?
Because you have to enable the engine for every virtual host explicitly due
to security concerns. Just add a "RewriteEngine on" to your
virtual host configuration parts.
-
How can I use strings with whitespaces in RewriteRule's ENV
flag?
There is only one ugly solution: You have to surround the complete flag
argument with quotation marks ("[E=...]"). Note: The argument
to quote here is not the argument to the E-flag, it is the argument of the
Apache config file parser, i.e. the third argument of the RewriteRule here.
So you have to write "[E=any text with whitespaces]".
-
Where can I find the "CGI specification"?
The Common Gateway Interface (CGI) specification can be found at
the original NCSA site
<
http://hoohoo.ncsa.uiuc.edu/cgi/interface.html>.
This version hasn't been updated since 1995, and there have been
some efforts to update it.
A new draft is being worked on with the intent of making it an informational
RFC; you can find out more about this project at
<http://web.golux.com/coar/cgi/>.
-
Is Apache Year 2000 compliant?
Yes, Apache is Year 2000 compliant.
Apache internally never stores years as two digits.
On the HTTP protocol level, RFC1123-style dates are generated,
which is the only format an HTTP/1.1-compliant server should
generate. To be compatible with older applications Apache
recognizes ANSI C's asctime()
and
RFC850-/RFC1036-style date formats, too.
The asctime()
format uses four-digit years,
but the RFC850 and RFC1036 date formats only define a two-digit year.
If Apache sees such a date with a value less than 70 it assumes that
the century is 20 rather than 19.
Some aspects of Apache's output may use two-digit years, such as the
automatic listing of directory contents provided by
mod_autoindex
with the
FancyIndexing
option enabled, but it is improper to depend upon such displays for
specific syntax. And even that issue is being addressed by the
developers; a future version of Apache should allow you to format that
display as you like.
Although Apache is Year 2000 compliant, you may still get problems
if the underlying OS has problems with dates past year 2000
(e.g., OS calls which accept or return year numbers).
Most (UNIX) systems store dates internally as signed 32-bit integers
which contain the number of seconds since 1st January 1970, so
the magic boundary to worry about is the year 2038 and not 2000.
But modern operating systems shouldn't cause any trouble
at all.
-
I upgraded to Apache 1.3b and now my virtual hosts don't
work!
In versions of Apache prior to 1.3b2, there was a lot of confusion
regarding address-based virtual hosts and (HTTP/1.1) name-based
virtual hosts, and the rules concerning how the server processed
<VirtualHost> definitions were very complex and not
well documented.
Apache 1.3b2 introduced a new directive,
NameVirtualHost,
which simplifies the rules quite a bit. However, changing the rules
like this means that your existing name-based
<VirtualHost> containers probably won't work
correctly immediately following the upgrade.
To correct this problem, add the following line to the beginning of
your server configuration file, before defining any virtual hosts:
NameVirtualHost n.n.n.n
Replace the "n.n.n.n" with the IP address to
which the name-based virtual host names resolve; if you have multiple
name-based hosts on multiple addresses, repeat the directive for each
address.
Make sure that your name-based <VirtualHost> blocks
contain ServerName and possibly ServerAlias
directives so Apache can be sure to tell them apart correctly.
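Putting it all together, a sketch for two name-based hosts sharing one
address (all names, paths, and the address are hypothetical) would be:
NameVirtualHost 10.1.2.3
<VirtualHost 10.1.2.3>
ServerName www.one.example
DocumentRoot /usr/local/apache/htdocs/one
</VirtualHost>
<VirtualHost 10.1.2.3>
ServerName www.two.example
ServerAlias two.example
DocumentRoot /usr/local/apache/htdocs/two
</VirtualHost>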
Please see the
Apache
Virtual Host documentation for further details about configuration.
-
I'm using RedHat Linux and I have problems with httpd
dying randomly or not restarting properly
RedHat Linux versions 4.x (and possibly earlier) RPMs contain
various nasty scripts which do not stop or restart Apache properly.
These can affect you even if you're not running the RedHat supplied
RPMs.
If you're using the default install then you're probably running
Apache 1.1.3, which is outdated. From RedHat's ftp site you can
pick up a more recent RPM for Apache 1.2.x. This will solve one of
the problems.
If you're using a custom-built Apache rather than the RedHat RPMs
then you should rpm -e apache. In particular you want
the mildly broken /etc/logrotate.d/apache script to be
removed, and you want the broken /etc/rc.d/init.d/httpd
(or httpd.init) script to be removed. The latter is
actually fixed by the apache-1.2.5 RPMs, but if you're building your
own Apache then you probably don't want the RedHat files.
We can't stress enough how important it is for folks, especially
vendors, to follow the stopping Apache
directions given in our documentation. In RedHat's defense,
the broken scripts were necessary with Apache 1.1.x because the
Linux support in 1.1.x was very poor, and there were various race
conditions on all platforms. None of this should be necessary with
Apache 1.2 and later.
-
I upgraded from an Apache version earlier
than 1.2.0 and suddenly I have problems with Apache dying randomly
or not restarting properly
You should read the previous note about
problems with RedHat installations. It is entirely likely that your
installation has start/stop/restart scripts which were built for
an earlier version of Apache. Versions earlier than 1.2.0 had
various race conditions that made it necessary to use
kill -9
at times to take out all the httpd servers.
But that should not be necessary any longer. You should follow
the directions on how to stop
and restart Apache.
As of Apache 1.3 there is a script
src/support/apachectl
which, after a bit of
customization, is suitable for starting, stopping, and restarting
your server.
-
I'm using RedHat Linux and my .htm files are showing
up as HTML source rather than being formatted!
RedHat messed up and forgot to put a content type for .htm
files into /etc/mime.types. Edit /etc/mime.types,
find the line containing html and add htm to it.
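After the edit, the line should look something like this:
text/html                       html htm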
Then restart your httpd server:
kill -HUP `cat /var/run/httpd.pid`
Then clear your browsers' caches. (Many browsers won't
re-examine the content type after they've reloaded a page.)
-
I'm using RedHat Linux 5.0, or some other
glibc-based Linux system, and I get errors with the
crypt
function when I attempt to build Apache 1.2.
glibc puts the crypt
function into a separate
library. Edit your src/Configuration
file and set this:
EXTRA_LIBS=-lcrypt
Then re-run src/Configure and re-execute the make.
-
Server hangs, or fails to start, and/or error log
fills with "fcntl: F_SETLKW: No record locks
available" or similar messages
These are symptoms of a file locking problem, which usually means that
the server is trying to use a synchronization file on an NFS filesystem.
Because of its parallel-operation model, the Apache Web server needs to
provide some form of synchronization when accessing certain resources.
One of these synchronization methods involves taking out locks on a file,
which means that the filesystem whereon the lockfile resides must support
locking. In many cases this means it can't be kept on an
NFS-mounted filesystem.
To cause the Web server to work around the NFS locking limitations, include
a line such as the following in your server configuration files:
LockFile /var/run/apache-lock
The directory should not be generally writable (e.g., don't use
/var/tmp).
See the LockFile
documentation for more information.
-
What's the best hardware/operating system/... How do
I get the most out of my Apache Web server?
Check out Dean Gaudet's
performance tuning page.
-
What are "regular expressions"?
Regular expressions are a way of describing a pattern - for example, "all the words
that begin with the letter A" or "every 10-digit phone number" or even "Every sentence
with two commas in it, and no capital letter Q". Regular expressions (aka "regexp"s)
are useful in Apache because they let you apply certain attributes against collections
of files or resources in very flexible ways - for example, all .gif and .jpg files under
any "images" directory could be matched by /.*\/images\/.*\.(jpg|gif)$/.
The best overview around is probably the one which comes with
Perl. We implement a simple subset of Perl's regexp support, but
it's still a good way to learn what they mean. You can start by
going to the CPAN
page on regular expressions, and branching out from there.
- I'm using gcc and I get some
compilation errors, what is wrong?
GCC parses your system header files and produces a modified subset which
it uses for compiling. This behaviour ties GCC tightly to the version
of your operating system. So, for example, if you were running IRIX 5.3
when you built GCC and then upgrade to IRIX 6.2 later, you will have to
rebuild GCC. Similarly for Solaris 2.4, 2.5, or 2.5.1 when you upgrade
to 2.6. Sometimes you can type "gcc -v" and it will tell you the version
of the operating system it was built against.
If you fail to do this, then it is very likely that Apache will fail
to build. One of the most common errors is with readv,
writev, or uio.h. This is not a
bug with Apache. You will need to re-install GCC.