
Introduction to Security

A computer can be considered secure if it and its software can be depended upon. This does not mean that it is absolutely impenetrable to crackers, viruses and other forms of unauthorized entry. The only computer that is absolutely secure is unplugged and locked in a vault somewhere. Security is a responsibility shared between the organization that owns the system, the system administrator, anyone who uses the system, and anyone who walks into the room where the system is.

It is up to the organization that owns the system to set security policies, including who has physical access to the machines, who receives accounts on them, how security infractions are dealt with, and plans for dealing with emergencies. It is the responsibility of the users to adhere to the policies that have been set in place and not to do things that will compromise security, like sharing passwords. At IU, the first time a user logs in, a responsibilities statement appears on the screen; users must agree to this set of responsibilities before they can use their account. The responsibility for installing and maintaining a secure system falls to the system administrator.

Physical Security

Security can be roughly divided into four areas of concern: accounts, networks, filesystems, and physical security. Physical security is the first line of defense. All the network, filesystem and account security in the world will do little good if someone walks off with the computer. Physical security includes:

Securing the building and/or room where the computer is kept is the first step in preventing vandalism and theft. It is also recommended, particularly in public sites, that the machines themselves be locked to tables or hooked into a security system. If the machines are part of a public computing site it is a good idea to have an administrator or other trusted person on site when it is open for use.

Environmental issues include everything from ensuring there are fire extinguishers and surge protectors to preventing cups of coffee from being spilled on the keyboard.

Vandalism is frequently done to exposed system components, like networking cables, rather than to the main system. Exposed cables can also be vulnerable to tapping and eavesdropping.

Types and Risk Assessment

In terms of accounts, networks and filesystems there are several types of security.

Privacy
Ensuring that information is protected so that people who are not authorized by the owner of the information cannot access it.
Data integrity
Protecting data from unauthorized alteration or deletion. This includes files, programs, accounting records, backup tapes, documentation, and file modification times.
Availability
Making sure that services are not corrupted, degraded or crashed.
Consistency
Ensuring that the system behaves as expected.
Isolation
Protecting the system against unauthorized users and software.
Audits
Protecting the integrity of the system by closely monitoring the changes that take place.

Depending on who owns the system and how it is used, some types of security may be more important than others. Assessing risks on a particular system involves understanding the potential risks while taking into account the operating system environment and the needs of users. Having a clear idea of what needs to be protected and how to prioritize those needs is the key to reducing risks.

Passwords

Passwords are the first internal line of defense in a system, as break-ins are frequently the result of poorly chosen passwords. For recommended information on the importance of passwords see The Care and Feeding of Passwords.

Unix deals with passwords by not storing the actual password anywhere on the system. When a password is set, what is stored is a value generated by taking the password that was typed in and using it to encrypt a block of zeros. When a user logs in, /bin/login takes the password the user types and uses it to encrypt another block of zeros. The result is then compared with the stored value. If the two match, the user is permitted to log in. The algorithm that Unix uses for this encryption is based on the Data Encryption Standard (DES).
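
The same one-way mechanism can be demonstrated from the command line. The sketch below assumes the OpenSSL command-line tool is available; its -crypt option produces a traditional DES-based crypt value like the one the system stores. Running it twice with the same (hypothetical) salt and password prints the same 13-character string both times, and that string comparison is all /bin/login has to perform.

openssl passwd -crypt -salt ab MySecret
openssl passwd -crypt -salt ab MySecret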

5. For increased password security why not just use a different encryption program, one that can't be decrypted?

All this means that even encrypted passwords are not secure if kept in a world-readable file like /etc/passwd. /etc/passwd needs to be world-readable for a number of reasons, including the user's ability to change their own password. So on many systems passwords are stored instead in /etc/shadow, which is readable only by root. Even this does not make passwords absolutely secure; it just makes them harder to get to. On some systems password shadowing can be defeated with a program that makes repeated calls to getpwent. Under SunOS a program making use of pwdauth can do the same thing.
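
A quick way to confirm that shadowing is in place on a particular machine is to compare the permissions of the two files. The exact modes and owners vary between systems, but /etc/passwd should be world-readable while /etc/shadow should be readable by root only:

ls -l /etc/passwd /etc/shadow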

From a security point of view /etc/passwd is one of the most important files on a system. If unauthorized users can alter that file they can change any user's password or make themselves root by changing their UID to 0. The /etc/group file can also be crucial. By gaining access to the right group an intruder can gain write privileges to /etc/passwd. The use of groups is sometimes discouraged on general principle, as it involves a number of people accessing the same files and perhaps even sharing a password. In reality groups are a widely used tool. Using them effectively involves defining them not only on the basis of who needs to share files, but also on who should be prevented from accessing the files.

Password aging can be used to force users to change their passwords. The decision to use this tool deserves some serious consideration, as it can be a double-edged sword. It will force users to change their passwords regularly. But due to the minimum life span field it can also prevent them from changing a password when they know that someone else has obtained it; if this occurs the system administrator has to change the password for them. On the other hand, forcing users to change passwords means that if an intruder does obtain a password for a user's account, the password will only be good for a limited time. Password aging can also be applied to individual accounts.

On an SGI the aging information can be added manually by entering it after the existing fields in /etc/passwd, with a comma separating the fields from the aging information. The syntax is:

arushkin:*:132:20:Amy Rushkin:/usr/people/arushkin:/bin/csh,M:m:ww

The system automatically updates the week when the password was last changed. If the values of M and m are equal the user must change the password at the next login. If the value of m is greater than M, only root can change the password.

The passwd command can also be used on an SGI. Under HP-UX or Solaris passwd and usermod are used. With the introduction of HP-UX 10.X, password protection is no longer implemented with an /etc/shadow file. For recommended information on this subject see Password protection under HP-UX 10.X.
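
As a hedged illustration of the command-line route, the Solaris and HP-UX versions of passwd accept -x for the maximum number of days a password remains valid and -n for the minimum number of days before it may be changed again; the account name below is hypothetical:

# passwd -x 90 -n 7 arushkin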

6. Are password aging controls kept in /etc/passwd or /etc/shadow?

Note: Because of the way aging information is stored, it is not practical at this time to use account aging under Linux.

Permissions

The names of files in a directory are not stored in the files themselves; they are stored in the directory. So any user with read permission for a directory can look in it and see what files are there. A user with execute permission only for a directory can use any file in that directory, as long as they know the name of the file. Execute permission combined with read permission can be used to make a modifiable copy of a file: a user with these permissions can copy any file they know the name of, and the copy will belong to them because they created it.

By default every directory that's created receives 777 permissions (rwxrwxrwx) and every file gets 666 permissions (rw-rw-rw-). To avoid having to do a chmod on every new directory and file, umask is used to change the default permissions. umask subtracts r, w and x values from the system's default values. The entry looks something like this:

UMASK=022

umask values are listed in octal. The entry listed here is a conventional umask value that subtracts 2, the write permission, for group and world. A user can set an individual umask by entering the value in their .login or .profile.

umask is a built-in shell command and may function differently under different shells. It may be best to consult the umask man page before using the command.
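
A quick way to see the effect is to set a umask in the current shell and then create a file and a directory. With a umask of 022 the directory should come out with 755 (rwxr-xr-x) permissions and the file with 644 (rw-r--r--):

umask 022
mkdir testdir
touch testfile
ls -ld testdir testfile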

An additional tip on directory permissions is the use of the sticky bit. When the sticky bit is turned on a user can access the directory but cannot delete or rename any files that don't belong to them. This can be a useful tool for directories such as /tmp that are world writable.
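
The sticky bit corresponds to the 1000 mode bit and shows up as a t in the last position of the permission string. Setting up a world-writable directory that still protects each user's files looks something like this:

# chmod 1777 /tmp
# ls -ld /tmp
drwxrwxrwt  4 root  sys  512 Oct 14 10:02 /tmp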

System Login Options and Restricted Shells

Unix allows options to be set that determine the way the system deals with logins. A large number of failed login attempts, particularly on the same account, can indicate that someone is trying to break in to the system. Under older Unix "flavors" login options were kept in /etc/config/login.options. Newer OS versions use /etc/default/login instead. These options may not prevent an intruder from gaining access to the system, but they can make the process more time consuming and leave a record of it.

Login options include:

MAXTRYS
The maximum number of login attempts permitted. After this number has been reached the login program sleeps for a few seconds before allowing the user to try again.
DISABLETIME
The number of seconds after an unsuccessful login that the line is disabled. This is used in conjunction with MAXTRYS.
SYSLOG=FAIL/ALL
Determines how logins are recorded in syslog, whether only failures (FAIL) are recorded or all logins (ALL).
PASSREQ=YES/NO
Determines whether all accounts on this system must have passwords. If this is set to YES and a user has no password, they will be prompted for one when they log in.
TIMEOUT
Sets the number of seconds of inactivity to wait before abandoning a login session.
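
Taken together, a hypothetical /etc/default/login using the options above might contain entries like the following; the exact option names and their defaults vary between Unix versions, so the vendor documentation should be checked before editing the file:

MAXTRYS=5
DISABLETIME=30
SYSLOG=FAIL
PASSREQ=YES
TIMEOUT=60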

SGI supports an additional option, LOGFAILURES. This can be used to set the number of consecutive unsuccessful login attempts permitted before they are recorded in /var/adm/lastlog. The lastlog file holds the date of each user's most recent login and the terminal line (tty name) or remote host where the login came from. It can also be used to have users help maintain system security: placing the line lastlog in /etc/config/login.options (or, in more recent versions of IRIX, in /etc/default/login, which replaces it) will cause login to display the lastlog information when a user logs in. This gives users the opportunity to notice odd login times or locations and alert the system administrator.

A security option for temporary accounts or accounts set up for a specific purpose is the restricted shell, rsh. Care must be taken not to confuse this with the remote shell command, also named rsh. When a restricted shell starts it executes the commands found in $HOME/.profile. Once this is done the user cannot change directories, change the value of the PATH variable, use command or path names containing a /, or redirect output with > or >>.

7. Couldn't the user interrupt rsh during this process and start a new shell?

It is also possible to prevent a restricted shell from being used over the network. This entails having the shell script issue the tty command to make sure the user is attached to the physical terminal and not to a network port.
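
A minimal sketch of such a check, placed at the top of the restricted account's $HOME/.profile, might look like the following; the device names are only examples and differ from system to system:

case "`tty`" in
/dev/console|/dev/tty[0-9]*)
        ;;                                      # physical terminal: proceed
*)
        echo "network logins are not permitted on this account"
        exit 1                                  # pseudo-terminal (network) port: refuse
        ;;
esac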

On BSD-like systems, such as SunOS, a restricted shell can be created by making a hard link to an sh program and naming it rsh.

Setting up a restricted account involves creating a special directory containing only programs that the restricted shell may run and making a user account with rsh as the login shell.
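
A sketch of those steps on a BSD-like system follows; the directory name and the commands copied into it are only examples. The account's .profile would then set PATH to the restricted directory, and the shell field of its /etc/passwd entry would name /usr/bin/rsh.

# ln /bin/sh /usr/bin/rsh                 # sh behaves as a restricted shell when invoked as rsh
# mkdir /usr/restrict/bin                 # the only commands the account may run
# cp /bin/ls /bin/more /bin/mail /usr/restrict/bin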

Due to the existence of shell escapes, restricted shells should not be viewed as a cure-all. A shell escape is a program feature that allows another shell to be created. Many programs with this feature do not document it. If a program run by rsh has the ability to run subprograms, the restrictions placed on the shell can be nullified. As an example, if rsh runs a program that can use man to read man pages, the user can employ man to start an editor, which can spawn a new shell free of rsh restrictions.

SUID and SGID

SUID and SGID programs can be double-edged swords, as whoever executes them gets the effective UID of the program's owner. If the program is owned by root, the user effectively becomes root. Such programs can be used to give a user access to something that would normally require root privilege without giving them the root password. They can also be a serious security risk.

Shell scripts with SUID and SGID bits set are not secure, period. This does not mean that SUID and SGID programs should never be used; rather, this goes back to the larger issue of minimizing risk vs. eliminating it. Risk can be minimized by making sure that SUID and SGID programs are not world-readable. This can prevent people from studying the code, discovering how it works and exploiting its weaknesses.

The following command can be used to check for SUID programs owned by root:

find / -user root -perm -004000 -print

It is a good idea to run this command soon after the system has been set up, send the output to a file and keep it for making comparisons later. Writing programs that are SUID to root should only be done when absolutely necessary. There are other ways to restrict access, such as creating an account in /etc/passwd for a pseudo-user who can own restricted resources. Programs can then be made SUID to that user. The convention for creating pseudo-user accounts is that an asterisk is placed in the password field and the home directory is /dev/null.
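
A hypothetical pseudo-user entry following that convention might look like the line below; restricted programs would then be made SUID to dbadmin rather than to root. The /bin/false login shell is an extra precaution, not part of the convention described above.

dbadmin:*:510:510:Database pseudo-user:/dev/null:/bin/false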

Another useful trick for a program that needs SUID only to begin execution is to use seteuid to control the power of setuid. An EUID, or effective UID, is whatever a user's UID is at the moment. When a user is executing a SUID program, their effective UID becomes the UID of the program's owner. An SUID program may not need to run SUID for its entire duration. In that case the program can be written so that when whoever is running the program no longer needs the permissions of the program's owner, their EUID reverts back to their real UID rather than remaining the UID of the program's owner.

The following command may be used to check for all SUID programs on the root filesystem:

find / -xdev -perm -004000 -exec ls -l {} \;

To check for SUID programs on other filesystems:

find /name of filesystem -xdev -perm -004000 -exec ls -l {} \;

To check for SUID and SGID programs:

find /directory to be searched -type f -a \( -perm -004000 -o -perm -002000 \) -exec ls -l {} \;

Option meanings:

-type f restricts the search to regular files. -perm -004000 is used to specify a match if the 004000 bit (SUID) is on and -perm -002000 specifies a match if the 002000 bit (SGID) is on.

SUID and SGID programs are normally found in the standard system directories (for example /bin, /usr/bin, /sbin, /usr/sbin, and /usr/lib).

When checking for SUID and SGID programs it is a good idea to keep a lookout for programs that are not in one of these locations.

Network Security

Networks in general are vulnerable to eavesdropping. This is due partially to the fact that many network applications transmit sensitive information, such as passwords and UIDs, when requesting services. Security risks can be reduced by encrypting the information and/or using an authentication program. One of the more popular authentication programs is Kerberos. It uses DES cryptography to protect sensitive information sent over open networks and can be added to applications using any network protocol.

Another important factor in network security is controlling network access. The /etc/hosts.equiv, .rhosts and /etc/passwd files control whether access is given to rlogin, rcp, and rsh. /etc/hosts.equiv contains a list of hosts that are trusted, or considered equivalent to that machine. Some systems use /etc/hosts.allow and /etc/hosts.deny rather than a single /etc/hosts.equiv file. The .rhosts file holds a list of hosts that are permitted access to a specific user account.

When a request is received, /etc/hosts.equiv is checked to see if the host is listed. Next /etc/passwd is checked to see if the desired account is listed. If the host and account are listed, access is granted without prompting the user for a password.

A root login bypasses the /etc/hosts.equiv file completely and uses only /.rhosts. This means that anyone with root access to a machine listed in /.rhosts can log in as anyone on the system, once again without a password. A user could log in as root on such a machine and become root on any system that has that machine's name in its /.rhosts file, with obvious consequences. A user can also set up a .rhosts file in their home directory to allow another user access to their account.
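
Hypothetical examples of the two files follow. Each line of /etc/hosts.equiv names a trusted host, while a line in a user's .rhosts names a host and, optionally, the remote username allowed into the account without a password:

/etc/hosts.equiv:
        grizzly.big.com
        kodiak.big.com

~arsmith/.rhosts:
        panda.big.com   arsmith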

An additional note: SunOS 4.1.x is distributed with a + in the /etc/hosts.equiv file. This means that any and all hosts on the network are trusted and have access. The + should be removed as soon as the machine is set up.

Many Unix services require no password and are available, across a network, to anyone. An example of this is finger which can generally be used by anyone to find out who is logged in to a given host, how long they've been idle and what their full name is. A cracker breaking into Unix systems at the Eindhoven University of Technology used repeated finger calls to determine the level of system activity. It was in response to this attack that the TCP wrapper was developed.

TCP wrappers log and control Internet access to tftp, ftp, telnet, remote shells, rlogin, finger, exec and talk. Access restrictions can be set individually for each service. Each remote access is logged including the name of the service, whether the connection was accepted or rejected, the name of the remote host, and a date/time stamp. This additional protection can serve as a deterrent for intruders, while the log information can make break-ins easier to trace.
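
Wrapper access control is normally expressed in /etc/hosts.deny and /etc/hosts.allow. A common, hedged example is to deny everything by default and then allow particular services from particular places; the domain names below are placeholders:

/etc/hosts.deny:
        ALL: ALL

/etc/hosts.allow:
        in.telnetd, in.ftpd: .big.com
        in.fingerd: LOCAL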

Another security consideration is the way ftp is set up. There are two options for building ftp: restricted and anonymous. Restricted ftp accounts allow limited access to files. When a client requests services, the server issues a chroot system call to prevent the client from moving outside the part of the filesystem where the ftp home directory is. A password is required to use a restricted account. Because the password is transmitted over the network it is vulnerable to interception.

Restricted accounts must be listed in the /etc/ftpusers file followed by the word restricted. /etc/ftpusers can also be used to list the names of non-trusted users, who cannot use ftp to access any files. It is a good idea to put the names of accounts that do not belong to human users in this file.
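
A hypothetical /etc/ftpusers might therefore look like the following; the exact syntax, including the restricted keyword, varies between ftpd implementations, so the local manual page should be consulted:

root
daemon
bin
uucp
nuucp
guestftp restricted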

Anonymous ftp is a special type of restricted account that does not require a password. It may be set up without a listing in /etc/ftpusers. Instead an account named ftp must be created and all the files available via anonymous ftp must be put in that account's home directory. Like restricted ftp, anonymous ftp uses a chroot system call to change the root of the filesystem to the ftp home directory. For recommended information on setting up anonymous ftp see the anonymous-ftp FAQ.

UUCP, the Unix-to-Unix CoPy system, also requires some attention in terms of security. UUCP is mainly used for transferring files between systems, executing commands on remote systems, and carrying mail and Usenet news.

UUCP comes with almost every version of Unix. Because it runs over standard serial cables it requires no special hardware. Although the prevalence of networking has overtaken some of its functions, UUCP is still widely used. It utilizes batching to transfer data, which can lower the cost of networking. Usenet uses UUCP to transfer messages.

Basically UUCP consists of two programs: uucp and uux. The uucp command enables the transfer of files between Unix systems, while uux allows commands to be executed on remote systems. Neither of these programs actually transmits information. They send it to a local spool directory, /usr/spool/uucp on BSD systems or /var/spool/uucp under SVR4. At some designated time the uucico program contacts the remote computer and transmits the files. When UUCP contacts a remote system it receives the login: and password: prompts, logs into a special account and uses another copy of uucico as its shell.

It is possible to run UUCP without a password, but this leaves an open account on the system. uucp programs run SUID to uucp. So uucp can only read files that it owns or files that are world-readable. It can only write files in directories owned by uucp or those that are world-writable. The UUCP login does not receive a normal system shell. It uses another copy of uucico that permits specific functions as specified by the system administrator. It is also possible to create /etc/passwd entries for every system that makes contact via UUCP. These entries can be used to specify privileges and access for each machine. UUCP can also be restricted to retrieving files from certain directories or this function can be turned off completely.

8. Since UUCP consists of two separate programs, does it require two entries in /etc/passwd?

The names, telephone numbers, account names, and passwords that uucico uses to log into remote machines are kept in the /usr/lib/uucp/Systems file. This file needs to be protected, as anyone able to read it could obtain enough information to access all of the machines listed. By convention /usr/lib/uucp/Systems is owned by uucp or nuucp and readable only by the owner.

For optional information on network security see An Architectural Overview of UNIX Network Security.

X Windows

The X Window System was designed to allow a client connecting to the X server complete control of the display. The ramifications of this are that a client can take over the keyboard, send keystrokes to other clients and even kill the windows of other clients. There are basically four access controls in X: one is host-based and the rest are client-based.

X workstations can run client programs transparently on other hosts over a network. This occurs independently of login accounts and passwords and is controlled by the X protocol. When X starts, it checks for the file /etc/Xn.hosts, where n corresponds to the server number; server 0 checks for /etc/X0.hosts, and so on. If this file is missing or empty no remote hosts are permitted access. If the file contains a +, all remote hosts are allowed access. /etc/Xn.hosts is not included in default X configurations and has to be added manually. In conjunction with this file, the xhost client can be used to allow or deny access to the server. Entering:

xhost +banana.big.com

allows the host banana access to the server. Entering:

xhost -banana.big.com

denies banana access to the server, even if it was previously permitted access by the /etc/Xn.hosts file. Using the minus sign will only prevent future access; it will not affect any processes that the client is currently running. Entering xhost with no arguments will print a list of hosts that are currently permitted access. The liability of this host-based access control is that it works both ways. If grizzly.big.com denies access to banana.big.com then grizzly can't access banana either. The xhost program cannot be run remotely.

An additional note on xhost under IRIX:

" 'xdm' does 'xhost +' by default when you log in. This allows anyone to open windows on your display and even to record what you type at your keyboard. Close this hole by removing the 'xhost +' from /usr/lib/X11/xdm/Xsession, /usr/lib/X11/xdm/Xsession-remote and (in IRIX 5.x) /usr/lib/X11/xdm/Xsession.dt. In IRIX 5.2 and later you can use X authorization to control access to remote displays; see below. In IRIX 5.1.x and earlier X authorization doesn't work, so you'll need to use 'xhost' judiciously to get to remote displays: say 'xhost +localhost' to run DGL programs and 'xhost +otherhost' to display remote X programs."
-- SGI FAQ:"How can I configure IRIX to be more secure?"

For optional information on X authorization, see How can I get X authorization to work?.

The most common client-based access control mechanism is MIT-MAGIC-COOKIE-1. Under MAGIC-COOKIE, when a user logs in under xdm a machine-readable code is put into the .Xauthority file in their home directory. This code is referred to as a magic cookie and is passed on to the server. Once the code has been established for an X session, a client must present the code before it is permitted to connect to the server. Clients obtain the code by reading .Xauthority, which is readable and writable only by the owner of the file. This access control is based entirely on Unix file permissions and should only be considered as secure as the account of the user who owns the .Xauthority file.

One possible drawback of this method is that it relies on all clients being able to access the magic cookie. This can be a problem when running clients on a host which does not have a shared home directory. The solution to this is xauth, a program used to share the magic cookie between hosts. xauth can also be employed for using the magic cookie with xinit. Two more recent access control mechanisms, XDM-AUTHORIZATION-1 and SUN-DES-1, exist and are considered more secure than MIT-MAGIC-COOKIE-1 because they use DES encryption to transmit the authorization code.
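
One common way to copy the cookie to an account on another host is to pipe it through the remote shell; the host name here is hypothetical:

xauth extract - $DISPLAY | rsh banana.big.com xauth merge -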

9. If a client is denied access under xauth but is listed in /etc/Xn.hosts, can they gain access?

For recommended information on X Windows security see Crash Course in X Windows Security and Securing X Windows.

Denial of Service Attacks

Unix is particularly vulnerable to denial of service attacks. A denial of service attack is an attack in which one user takes up so much of a shared system resource that not enough is left for other users, so that services either degrade or stop altogether. Attacks can be intentional or accidental, and fortunately it is generally not difficult to figure out who is responsible. Attacks come in two types: those intended to damage or destroy resources and those that overload a system service so that no one else can use it.

Attacks that damage or destroy system resources can, for the most part, be prevented by limiting access to crucial accounts and files so they cannot be deleted, and making sure the system is physically secure. The simplest way to execute a destructive denial of service attack is to cut off power to a system or cut the cable to it.

An attack in the form of overloading the system can be directed at a variety of resources and services. In the case of shared computers, a user can run so many processes simultaneously that the system overloads. This can be prevented by limiting the number of processes that a single user can run with the MAXUPROC variable. Even with this limit set someone logged in as root could overload the system with multiple processes because there is no process limit for the superuser. This condition can occur inadvertently, even with process limits, if there are too many users on the system.

It is also possible to overload the system by creating multiple processes that demand large amounts of CPU. If this occurs the system administrator can log in as root, set their own priority as high as possible with the renice command and use ps to see what processes are running.

A full disk partition will also cause the system to slow down. This can be prevented by setting space quotas for each user and checking periodically to see that the quotas are adhered to. If a disk partition does become full there are ways to deal with it. The du command can be used to find the directories with the most data. find with the -size option can be used to locate large files. The quot command can also be useful, as it allows filesystem use to be summarized by user. Filling up all the inodes on a disk can produce similar results. Inodes are used to store information about files, and creating a mass of empty files can use them all up, leaving the system unable to create any new files even if disk space is available. The command df -i will print out the number of free inodes.
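
For example (the option letters and the device name given to quot vary between systems and are shown here only as a sketch):

du -sk /usr/people/* | sort -rn | head        # largest home directories, in kilobytes
find /usr -size +2000 -print                  # files larger than 2000 blocks (about 1 MB)
quot /dev/dsk/dks0d1s6                        # disk use summarized by user
df -i                                         # free inodes per filesystem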

Bad Bugs: Programmed Threats

Talk about computer viruses is fairly common--so common that often anything in a computer system that causes it to behave erratically is mistakenly referred to as a virus.

10. If these aren't all viruses, then what are they?

Backdoors

A backdoor, also known as a trapdoor, is a privileged entry point into the system. Generally a backdoor is a way to get around permissions and gain superuser status. Programmers use them to monitor and test programs. They also provide an access point if there are problems that need to be fixed. Backdoors are usually removed before the program is shipped to the customer, but not always. They can also provide an entry point into the system if the root password is lost or corrupted. Without a root password there is no way to use privileged commands that are necessary to repair or replace the password file. The problem with backdoors is that anyone who finds them can use them. Fortunately there are alternatives. In the case of a corrupt or lost root password the system can be brought down and restarted in single user mode. The password file can then be repaired. Another alternative is to create a root alias entry in the password file.

Trojan horses and search paths

A Trojan horse is a piece of code that hides inside another program and performs a concealed function. Trojan horses can be used to hide other sorts of bugs such as a virus, bomb, bacteria or worm. An example is a program that was distributed via Usenet a few years ago over Thanksgiving. The program was called turkey and was supposed to draw a turkey on the user's monitor. It did indeed draw a turkey, but it also removed all the files in the user's home directory. Another common type of Trojan horse mimics a normal login prompt but records the password that the user types. The program then exits and returns the user to the real login screen.

Frequently Trojan horses are given the name of some basic system program like ls to entice a user into running them. The Trojan horse is then placed in a directory like the user's home directory or a bin subdirectory of the user's home directory, in hopes that the next time the user enters ls the Trojan horse will be run instead. Whether this works depends on the placement of those directories in the user's search path. If they are at the end of the search path list then the system copy of ls will be found first and used rather than the Trojan horse copy.

Even stricter precautions should be taken with the root search path. It is a good idea to use only absolute pathnames rather than relative ones. It is also recommended that no directories in the root path be writable by anyone other than root and that the current directory not be placed in the path at all.
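
A quick check of the root path might look like this; the directories shown are typical examples, and the points to verify are that "." does not appear and that none of the listed directories is writable by anyone but root:

# echo $PATH
/sbin:/usr/sbin:/usr/bin:/etc
# ls -ld /sbin /usr/sbin /usr/bin /etc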

Viruses

A virus is a piece of code that inserts itself into another program and modifies it. A virus is not an independent program, it depends on the program it modifies and executes only when that program runs. It then reproduces and infects other programs. A virus can infect any place where data is stored altering or destroying the data. Software is available that will scan for and destroy the more than 300 known types of computer viruses.

Bacteria

Bacteria, also known as rabbits, are programs that do not directly damage the system. Instead they replicate themselves until they monopolize CPU, memory or disk space. This constitutes a denial of service attack.

Bombs

A bomb is actually a type of Trojan horse that can be used to release a virus or bacteria. Bombs work by causing an unauthorized action at a specified date or time, or when a particular condition occurs. There are two types of bombs: logic and time. Logic bombs are set to go off when a particular event occurs. Time bombs go off at a specified time or date, or after a set amount of time elapses.

Salami

Salamis cut away tiny pieces of data. They can be particularly dangerous as the damage they do is small and can be attributed to normal truncation or rounding by the system. It is possible for a salami to do a great deal of damage before it is found.

Worms

An increased awareness of the need for computer security came as the result of the 1988 Great Internet Worm. This was a program that moved from computer to computer, across the network, through a backdoor in Sendmail. It was stopped because a flaw in its code caused it to behave like a bacteria on some systems.

Worms are independent programs designed to move from system to system over a network. They reproduce by copying themselves from one computer to another. Although they do not destroy data or modify other programs, they can tie up system resources as they reproduce. They can also be used to carry viruses, bacteria or bombs. Protection against worms is the same as protection against other types of break ins. If a user can break into a system, then so can a worm program.

System monitoring

Unix has several files that track logins, logouts, commands run, and use of the su command. Every time a user attempts to su a record is made in /usr/adm/sulog. Entries look something like this:

SU 10/27 14:38 - ttyq3 arsmith-root
SU 10/30 16:23 + ttyq3 arsmith-root

First the command is listed, then the date and the time the attempt occurred based on a twenty-four hour clock. This is followed by either a + or a -, indicating that the su succeeded or failed. Then the tty that the attempt came from is listed, followed by the username of the person issuing the command and the user they attempted to become. It is a good idea to review this file periodically, as repeated failed attempts to su to root can be an indication that someone is trying to break in. On BSD systems this information is written to /usr/adm/messages.

Other important files for system monitoring are /usr/adm/lastlog, /usr/adm/utmp, /usr/adm/wtmp, /usr/adm/acct and /usr/adm/syslog. /usr/adm/lastlog, as already mentioned in this module, keeps track of each user's most recent login time. /usr/adm/utmp keeps a record of who is currently logged in and /usr/adm/wtmp keeps a record of each time a user logs in or out. Commands like finger, who and users, which print information on who is logged on to the system, scan /usr/adm/utmp for that information. The ps command is another way to see who is logged in and can give more accurate information, as it is possible for a user to have a process running although their username does not appear in utmp or wtmp. The last command displays the contents of wtmp beginning with the most recent logins; a specific username or tty may be given as an argument. The wtmp file also logs reboots and shutdowns.

The /usr/adm/acct file holds a record of all the commands run by every user on the system. This feature was originally intended for accounting purposes, such as billing users for CPU time. It must be turned on manually with:

# /usr/adm/accton /usr/adm/acct

Information in this file can be displayed using the lastcomm command. The output looks something like this:

cd S arsmith ttyp1 0.00 secs Wed Oct 14 13:43
man arsmith ttyp4 0.00 secs Thu Oct 15 15:09

First the command that was executed is listed. This may be followed by a flag indicating additional information. Common flags include S, meaning the command was executed by the superuser; F, meaning the command ran after a fork but without an exec; D, meaning the command terminated with a core dump; and X, meaning the command was terminated by a signal.

If there is no flag this space is blank. Next the name of the user who executed the command is listed, the tty it was entered from and the number of seconds the command ran, followed by date and time information. This file does not record options to a command or the directory it was issued from.

The /usr/adm/acct file can grow quickly. Running the command /usr/etc/sa is a way to keep the file size down without losing the information. This writes the accounting information to a summary file kept in /usr/adm/savaccount.

SVR4 also supports the acctcms command which displays information in /usr/adm/acct organized according to the time of day the command was executed.

The syslog program is a general purpose logging tool originally developed for Sendmail. It uses a file called /etc/syslog.conf to determine what messages should be logged and where. Messages can be stored in files, sent to users or passed on to syslog daemons on other systems. Each message is given a priority such as emerg, alert, crit, err, warning, notice, info or debug.
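
A short, hypothetical /etc/syslog.conf illustrating the format follows. Each line pairs a facility.priority selector with a destination, which may be a file, a username, or * for all logged-in users; on many syslogd implementations the two fields must be separated by tabs rather than spaces:

auth.notice                     /var/adm/authlog
*.err;kern.debug                /var/adm/messages
*.alert                         root
*.emerg                         *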

SGI provides a graphical way to view this information with the View System Log option on the System menu. Messages are prioritized into Critical, Warning, Error and Info. Messages with a small question mark may be clicked on for further information.

For recommended information on syslog and other log files see Syslog notes.

Another useful command is w, which is a cross between who and ps. It displays a list of who is logged in, how long they've been idle, and what commands they're running.

Security Audits

A security audit involves checking a system for potential security problems. Things to check for include:

For a recommended security checklist see UNIX Computer Security Checklist.

Security audits can be automated by maintaining a checklist of files that are monitored on a regular basis for changes and writing shell scripts to check them. Running audits as cron jobs is an option. However, if security audits are run on a set schedule an intruder can obtain the schedule and use it to their advantage.
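
As a hypothetical example, the crontab entry below runs a locally written audit script every night at 3:15 a.m. and saves the output for review; given the caveat above, it is worth varying the schedule from time to time:

15 3 * * * /usr/local/adm/check_system > /usr/local/adm/audit.out 2>&1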

The listusers command can be used to obtain information on user accounts. If used with no options, listusers will display information for all users whose UID is greater than 100. It also provides -l to limit the display to a designated user and -g to limit the display to a particular group.

Unowned files can be searched for using:

find / -nouser -print


find / \( -nouser -o -nogroup \) -print

The second command string will match files not owned by any user or group.

The following can be used to find .rhosts files in users' home directories:

find /home -name .rhosts -print

Because .rhosts files allow access to the system without a password, it is recommended that users not create them in their home directories.

Comparing files on a system requires the existence of a master copy to compare them with. Backups, if they are done regularly, can be a useful tool for protection against data loss and filesystem damage. In a worst case scenario, if the computer itself is damaged or destroyed backups can be used to restore the data on a new system. There are basically three types of backups.

Day zero
A backup which is done right after the system is installed and backs up every file and program on the system.
Full backup
A backup which should be done on a regular basis. How often this is done depends on the number of users and the level of activity on a system. A full backup makes a record of every file in every filesystem.
Incremental backup
Backs up every file in a filesystem that has been modified after a particular date.
It's a good idea to try restoring a couple of files from backups as a test to ensure that equipment and software are functioning as expected. Also, for security reasons, backup media should not be stored in the same room as the computer. It may be possible to hide the media well enough that an intruder wouldn't find them, but in the case of a natural disaster, whatever damaged the computer may also damage the backup media.
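
As a hedged example using the BSD dump program, a level 0 dump is a full backup, while a dump at a higher level records only the files changed since the last dump at a lower level; the tape device name is hypothetical:

# dump 0uf /dev/rmt/0 /usr        # full backup of /usr
# dump 5uf /dev/rmt/0 /usr        # incremental backup of /usr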

In addition to system backups, copies of critical system files should also be made. A master copy can be made right after the system is set up with subsequent copies created after files have been changed. Files to be copied include:

Copies of /etc/passwd and /etc/group can be used to find new accounts and groups. The /etc/rc* files are useful for uncovering any changes made to the system boot sequence. /etc/ttys, /etc/ttytab, or /etc/inittab are terminal configuration files and are important to monitor for modifications. The /usr/lib/crontab file, found on BSD systems, is the cron configuration file; it should be checked periodically to make sure no new commands have been put in it. /usr/lib/aliases is used to assign additional names to a user, route a user's mail to a particular machine or define mailing lists. /etc/hosts, which is consulted by commands that use the network, contains a list of IP addresses, hostnames and hostname aliases. /etc/exports lists the filesystems exported over NFS; a change in this file could indicate a potential NFS security problem. The /etc/netgroups file is used on BSD systems set up as servers; it is used by YP (Yellow Pages) to verify permissions for remote mounting. /etc/fstab, which is sometimes called /etc/filesystems, holds a list of all available disk partitions. For regular partitions a customary mount point is also listed.

The diff, sdiff and cmp commands can be used to compare files. cmp can compare any two files; it is not restricted to text files as diff and sdiff are. When used without options cmp displays the byte position and line number of the first difference between the two files. The -l option will display all the differences between the files.

diff and sdiff both compare files and report on differing lines. In addition sdiff compares the files side by side and reports identical as well as differing lines. Output from diff looks like:

< Dear Mr. Jones
---
> Dear Mr. Davis

The < indicates that this is a line from the first file, while > denotes a line from the second file. Contents of the two files are separated by three hyphens.

11. diff can be used with the -r to compare directories, but not large files. What would you use to compare two large files?

sdiff returns four forms of output.

The dog is blue. The dog is blue.

This is the output for identical lines. It can be suppressed with the -s option.

The chicken is aqua.<

Indicates that the line was found only in the first file.

The ostrich is chartreuse.>

Indicates that the line was found only in the second file.

The cow is magenta. | The cow is shocking pink.

Displays the differing line from each file.

Looking for differences on a file by file basis can be tedious. Shell scripts can be written to compare entire directory trees and groups of files. When using scripts for security audits it is important to periodically check the scripts themselves to make sure they have not been altered.

The rdist command can also be useful for comparing files between two systems. rdist can compare two versions of a file and update the one that has been altered. This can be a convenient way to update software in an environment that has more than one system.

When checking the integrity of system files comparisons need to be made not only of ownership and permissions, but of modification date, size and inode number. On BSD systems the command ls -lsidg will display file information including inode number and size in both blocks and bytes. The SVR4 version of this command is ls -lsid .

Another way to check files for evidence of tampering is to create a unique signature for each file and save it as a master copy to compare future signatures against. The most basic file signature could be the output of ls -lsidg or ls -lsid. Every file will produce a unique output to these commands, and theoretically if anything in the file changes the output should change also. Unfortunately the output of these commands doesn't provide quite enough information to be a good signature, because it may not reveal changes that have been made to the file: it is possible to rewrite the last modification date, and it is also possible to alter a file so that it remains the same size as the original.

Signatures can also be created with checksums. A checksum is a mathematical function that will change if even one byte of the file changes. Checksum algorithms are readily available. The sum command, which comes as part of many Unix systems, can also calculate a checksum. A GNU version of sum also exists.
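
A simple way to build a baseline of checksums with sum and later verify it might look like this; the list of files and the location of the master copy are examples only:

# sum /bin/login /bin/su /usr/bin/passwd > /usr/local/adm/sums.master
# sum /bin/login /bin/su /usr/bin/passwd | diff - /usr/local/adm/sums.master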

12. Is this enough to keep file alterations from going undetected?

Another possibility for generating checksums is a cryptographic hash or signature program. These programs use a cryptographic algorithm to hash input to a value of a set length that is difficult to reverse. However, in order for any checksum to be secure the code and signatures must be protected. If they are left lying around for people to examine, someone will figure out how they work.

A couple more useful commands are pwck, which checks the password file for inconsistencies, and grpck, which checks and verifies entries in /etc/group.

For essential information on finding security holes please go to Finding Holes in Your System.

Security Software and Organizations

A variety of software is available for running security audits. Tiger is publicly available and can be used to check for ways in which root can be compromised. Tripwire is a program for checking the integrity of filesystems and comes with eight signature routines. SATAN, the Security Administrator Tool for Analyzing Networks, is also available via ftp. It can be used for checking Internet host and network security. ISS, the Internet Security Scanner, can also be used to check network security. It locates and checks all TCP/IP devices for 130 different security holes. ISS is more thorough than SATAN, but is not free software. For an optional list of additional Unix security software see General Security tools.

Part of maintaining the security of a system involves staying informed of new bugs, patches, software, etc. CIAC, the Computer Incident Advisory Capability, provides a variety of security information including security tools, bulletins and documentation. CERT, the Computer Emergency Response Team, also provides security documentation as well as maintaining a 24-hour service for technical assistance with security incidents.

Security Tips

Handling Break-ins and Reporting Bugs

For essential information on handling break-ins please go to the security-compromise FAQ.

Publicly announcing new security holes and/or software bugs can endanger the security of other sites. This is why security patches are distributed without a detailed explanation of what they are intended to fix. Software bugs should be reported to whoever is responsible for the development of the software. If a new security hole is found in a specific operating system the vendor should be notified.

For additional optional security information see SAIC Security Documents and UNIX Security Topics.


Terms used: DES, encryption, restricted shell, rcp, rlogin, ftp, anonymous ftp, chroot, UUCP, backdoor, Trojan horse, virus, bacteria, bomb, salami, worm, backup, checksum, signature, CERT, CIAC, YP, inode.



