FAQ: The TCP/IP FAQ by George V. Neville-Neil (April 1, 1996)

Archive-name: tcp-ip/FAQ
Last-modified: 1996/4/1

Internet Protocol Frequently Asked Questions

Maintained by: George V. Neville-Neil (gnn@wrs.com)
Contributions from:
Ran Atkinson
Mark Bergman
Stephane Bortzmeyer
Rodney Brown
Dr. Charles E. Campbell Jr.
Phill Conrad
Alan Cox
Rick Jones
Jon Kay
Jay Kreibrich
William Manning
Barry Margolin
Jim Muchow
Subu Rama
W. Richard Stevens

Version 3.3

************************************************************************

The following is a list of Frequently Asked Questions, and
their answers, for people interested in the Internet Protocols,
including TCP, UDP, ICMP and others. Please send all additions,
corrections, complaints and kudos to the above address. This FAQ will
be posted on or about the first of every month.

This FAQ is available for anonymous ftp from:
ftp.netcom.com:/pub/gnn/tcp-ip.faq . You may also get it from my home page at
ftp://ftp.netcom.com/pub/gnn/gnn.html
You can read the FAQ in HTML format on Netcom or from the mirror
site http://web.cnam.fr/Network/TCP-IP/tcp-ip.html

************************************************************************

Table of Contents:
Glossary
1) Are there any good books on IP?
2) Where can I find example source code for TCP/UDP/IP?
3) Are there any public domain programs to check the performance of an
IP link?
4) Where do I find RFCs?
5) How can I detect that the other end of a TCP connection has
crashed? Can I use “keepalives” for this?
6) Can the keepalive timeouts be configured?
7) Can I set up a gateway to the Internet that translates IP
addresses, so that I don’t have to change all our internal addresses
to an official network?
8) Are there object-oriented network programming tools?
9) What other FAQs are related to this one?
10) What newsgroups contain information on networks/protocols?
11) Van Jacobson explains TCP congestion avoidance.
12) Can I use a single bit subnet?

Glossary:

I felt this should come first, given the plethora of acronyms used in
the rest of this FAQ.

IP: Internet Protocol. The lowest-layer protocol defined in the
TCP/IP suite. It is the base layer on which all the other protocols
mentioned herein are built. The protocol suite as a whole is often
referred to simply as TCP/IP.

UDP: User Datagram Protocol. A connectionless protocol layered on
top of IP. It does not provide any guarantees on the ordering or
delivery of messages.

TCP: Transmission Control Protocol. TCP is a connection oriented
protocol that guarantees that messages are delivered in the order in
which they were sent and that all messages are delivered. If a TCP
connection cannot deliver a message it closes the connection and
informs the entity that created it. This protocol is layered on top
of IP.

ICMP: Internet Control Message Protocol. ICMP is used for
diagnostics in the network. The Unix program, ping, uses ICMP
messages to detect the status of other hosts in the net. ICMP
messages can either be queries (in the case of ping) or error reports,
such as when a network is unreachable.

RFC: Request For Comment. RFCs are documents that define the
protocols used in the IP Internet. Some are only suggestions, some
are even jokes, and others are published standards. Several sites in
the Internet store RFCs and make them available for anonymous ftp.

SLIP: Serial Line IP. An implementation of IP for use over a serial
link (modem). CSLIP is an optimized (compressed) version of SLIP that
gives better throughput.

Bandwidth: The amount of data that can be pushed through a link in
unit time. Usually measured in bits or bytes per second.

Latency: The amount of time that a message spends in a network going
from point A to point B.

Jitter: The effect seen when latency is not constant, that is, when
messages experience different latencies between two points in a
network.

RPC: Remote Procedure Call. RPC is a method of making network access
to resources transparent to the application programmer by supplying a
“stub” routine that is called in the same way as a regular procedure
call. The stub actually performs the call across the network to
another computer.

Marshalling: The process of taking arbitrary data (characters,
integers, structures) and packing them up for transmission across a
network.
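
As a rough illustration (not taken from any particular RPC package;
the two-field message layout here is invented for the example),
marshalling data into a buffer in network byte order might look like
this in C:

    /* Illustrative only: marshal a hypothetical message consisting of
     * a 16-bit type and a 32-bit value into a byte buffer in network
     * byte order. */
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htons(), htonl() */

    size_t marshal_msg(uint16_t type, uint32_t value, unsigned char *buf)
    {
        uint16_t ntype  = htons(type);   /* convert to network byte order */
        uint32_t nvalue = htonl(value);

        memcpy(buf, &ntype, sizeof ntype);
        memcpy(buf + sizeof ntype, &nvalue, sizeof nvalue);
        return sizeof ntype + sizeof nvalue;   /* 6 bytes on the wire */
    }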

MBONE: A virtual network that is a Multicast backBONE. It is still a
research prototype, but it extends through most of the core of the
Internet (including North America, Europe, and Australia). It uses IP
Multicasting which is defined in RFC-1112. An MBONE FAQ is available
via anonymous ftp from ftp.isi.edu. There are frequent broadcasts of
multimedia programs (audio and low bandwidth video) over the MBONE.
Though the MBONE is used for multicasting, the long haul parts of the
MBONE use point-to-point connections through unicast tunnels to
connect the various multicast networks worldwide.

1) Are there any good books on IP?

A) Yes. Please see the following:

Internetworking with TCP/IP Volume I
(Principles, Protocols, and Architecture)
Douglas E. Comer
Prentice Hall 1991 ISBN 0-13-468505-9

This volume covers all of the protocols, including IP, UDP, TCP, and
the gateway protocols. It also includes discussions of higher level
protocols such as FTP, TELNET, and NFS.

Internetworking with TCP/IP Volume II
(Design, Implementation, and Internals)
Douglas E. Comer / David L. Stevens
Prentice Hall 1991 ISBN 0-13-472242-6

Discusses the implementation of the protocols and gives numerous code
examples.

Internetworking with TCP/IP Volume III (BSD Socket Version)
(Client-Server Programming and Applications)
Douglas E. Comer / David L. Stevens
Prentice Hall 1993 ISBN 0-13-474222-2

This book discusses programming applications that use the Internet
protocols. It includes examples of TELNET and FTP clients and
servers, and discusses RPC and XDR at length.

TCP/IP Illustrated, Volume 1: The Protocols,
W. Richard Stevens
(c) Addison-Wesley, 1994 ISBN 0-201-63346-9

An excellent introduction to the entire TCP/IP protocol suite,
covering all the major protocols, plus several important applications.

“TCP/IP Illustrated, Volume 2: The Implementation”,
by Gary R. Wright and W. Richard Stevens
(c) Addison-Wesley, 1995
ISBN 0-201-63354-X

This is a complete, and lengthy, discussion of the internals of TCP/IP
based on the Net/2 release of BSD.

Unix Network Programming
W. Richard Stevens
Prentice Hall 1990 ISBN 0-13-949876

An excellent introduction to network programming under Unix. Widely
cited on the Usenet bulletin boards as the “best place to start” if you
want to actually learn how to write Unix programs that communicate over
a network.

The Design and Implementation of the 4.3 BSD Operating System
Samuel J. Leffler, Marshall Kirk McKusick, Michael J. Karels, John S.
Quarterman
Addison-Wesley 1989 ISBN 0-201-06196-1

Though this book is a reference for the entire operating system, the
eleventh and twelfth chapters completely explain how the networking
protocols are implemented in the kernel.

Rago, Steven A. Unix System V Network Programming. 1993, Addison-Wesley.

A book that covers the same kinds of topics as W. Richard Stevens'
Unix Network Programming, but is more specific to Unix System V
Release 4 (SVR4), and so is perhaps more useful and up to date if you
are working specifically with that implementation (Stevens' book
covers Unix System V Release 3.x). Rago's book gives much more
extensive coverage of STREAMS: four chapters, where Stevens provides
only a couple of subsections. The design project at the end of the
book is an implementation of SLIP.

2) Where can I find example source code for TCP/UDP/IP?

A) Code from the Internetworking with TCP/IP Volume III is available
for anonymous ftp from:

arthur.cs.purdue.edu:/pub/dls

Code used in the Net-2 version of Berkeley Unix is available for
anonymous ftp from:

ftp.uu.net:systems/unix/bsd-sources/sys/netinet

and

gatekeeper.dec.com:/pub/BSD/net2/sys/netinet

Code from Richard Stevens' books is available from:
ftp.uu.net:/published/books/stevens.*

Example source code and libraries to make coding quicker are
available in the Simple Sockets Library written at NASA. The Simple
Sockets Library makes sockets easy to use, and it comes as source
code. It has been tested on Unix (SGI, DecStation, AIX, Sun 3,
Sparcstation; version 2.02+: Solaris 2.1, SCO), VMS, and MSDOS
(client only, since there is no background processing there). It
sits atop Berkeley sockets and TCP/IP.

You can order the “Simple Sockets Library” from

Austin Code Works
11100 Leafwood Lane
Austin, TX 78750-3464 USA
Phone (512) 258-0785

Ask for the “SSL – The Simple Sockets Library”. Last I checked, they
were asking $20 US for it.

For DOS there is WATTCP.ZIP (numerous sites):

WATTCP is a DOS TCP/IP stack derived from the NCSA Telnet program and
much enhanced. It comes with some example programs and complete
source code. The interface isn't BSD sockets, but it is well suited
to PC-type work.

3) Are there any public domain programs to check the performance of
an IP link?

A)

TTCP: Available for anonymous ftp from:

wuarchive.wustl.edu:/graphics/graphics/mirrors/sgi.com/sgi/src/ttcp

On ftp.sgi.com are netperf (from Rick Jones at HP) and nettest
(from Dave Borman at Cray). TTCP is also available at ftp.sgi.com.

You can get to the NetPerf home page via:

http://www.cup.hp.com/netperf/NetperfPage.html

There is a suite of bandwidth-measuring programs from gnn@netcom.com,
available for anonymous ftp from ftp.netcom.com in
~ftp/gnn/bwmeas-0.3.tar.Z. These programs measure bandwidth and
jitter over several kinds of IPC links, including TCP and UDP.

4) Where do I find RFCs?

A) This is the latest info on obtaining RFCs:
Details on obtaining RFCs via FTP or EMAIL may be obtained by sending
an EMAIL message to rfc-info@ISI.EDU with the message body
help: ways_to_get_rfcs. For example:

To: rfc-info@ISI.EDU
Subject: getting rfcs

help: ways_to_get_rfcs

The response to this mail query is quite long and has been omitted.

RFCs can be obtained via FTP from DS.INTERNIC.NET, NIS.NSF.NET,
NISC.JVNC.NET, FTP.ISI.EDU, WUARCHIVE.WUSTL.EDU, SRC.DOC.IC.AC.UK,
FTP.CONCERT.NET, or FTP.SESQUI.NET.

Using the Web, WAIS, and Gopher:

Web:

http://web.nexor.co.uk/rfc-index/rfc-index-search-form.html

WAIS access by keyword:

wais://wais.cnam.fr/RFC

Excellent presentation with a full-text search too:

http://www.cis.ohio-state.edu/hypertext/information/rfc.html

With Gopher:

gopher://r2d2.jvnc.net/11/Internet%20Resources/RFC
gopher://muspin.gsfc.nasa.gov:4320/1g2go4%20ds.internic.net%2070%201%201/.ds/
.internetdocs

5) How can I detect that the other end of a TCP connection has crashed?
Can I use “keepalives” for this?

A) Detecting crashed systems over TCP/IP is difficult. TCP doesn’t require
any transmission over a connection if the application isn’t sending
anything, and many of the media over which TCP/IP is used (e.g. ethernet)
don’t provide a reliable way to determine whether a particular host is up.
If a server doesn’t hear from a client, it could be because the client has
nothing to say, some network between the server and client may be down, the
server’s or client’s network interface may be disconnected, or the client may have
crashed. Network failures are often temporary (a thin ethernet will appear
down while someone is adding a link to the daisy chain, and it often takes
a few minutes for new routes to stabilize when a router goes down), and TCP
connections shouldn’t be dropped as a result.

Keepalives are a feature of the sockets API that requests that an empty
packet be sent periodically over an idle connection; this should evoke an
acknowledgement from the remote system if it is still up, a reset if it has
rebooted, and a timeout if it is down. These are not normally sent until
the connection has been idle for a few hours. The purpose isn’t to detect
a crash immediately, but to keep unnecessary resources from being allocated
forever.
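
Keepalives are enabled per socket with the SO_KEEPALIVE option. A
minimal sketch (error handling kept short):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stdio.h>

    /* Enable TCP keepalive probes on an already-connected socket.
     * The probe timing itself is normally a system-wide setting
     * (see question 6). */
    int enable_keepalive(int sock)
    {
        int on = 1;

        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0) {
            perror("setsockopt(SO_KEEPALIVE)");
            return -1;
        }
        return 0;
    }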

If more rapid detection of remote failures is required, this should be
implemented in the application protocol. There is no standard mechanism
for this, but an example is requiring clients to send a “no-op” message
every minute or two. An example protocol that uses this is X Display
Manager Control Protocol (XDMCP), part of the X Window System, Version 11;
the XDM server managing a session periodically sends a Sync command to the
display server, which should evoke an application-level response, and
resets the session if it doesn’t get a response (this is actually an
example of a poor implementation, as a timeout can occur if another client
“grabs” the server for too long).
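
As a rough sketch of the application-level approach, a server can
simply bound how long it is willing to wait for the next message
(whether a real request or a no-op heartbeat) with select(); the
timeout value below is an arbitrary choice for illustration:

    #include <sys/select.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Wait up to 'secs' seconds for data (a request or a no-op
     * heartbeat) on 'sock'.  Returns a positive value if something
     * arrived, 0 on timeout (peer presumed dead or unreachable),
     * and -1 on error. */
    int wait_for_peer(int sock, int secs)
    {
        fd_set rfds;
        struct timeval tv;

        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        tv.tv_sec  = secs;      /* e.g. 120 for a two-minute limit */
        tv.tv_usec = 0;

        return select(sock + 1, &rfds, NULL, NULL, &tv);
    }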

6) Can the keepalive timeouts be configured?

A) This varies by operating system. There is a program that works on
many Unices (though not Linux or Solaris), called netconfig, that
allows one to do this and documents many of the variables. It is
available by anonymous FTP from

cs.ucsd.edu:pub/csl/Netconfig/netconfig2.2.tar.Z

In addition, Richard Stevens’ TCP/IP Illustrated, Volume 1 includes a
good discussion of setting the most useful variables on many
platforms.
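
Some stacks also allow per-socket tuning of the keepalive timers; for
example, Linux-style systems provide the TCP_KEEPIDLE, TCP_KEEPINTVL
and TCP_KEEPCNT socket options. These option names and their
availability are system specific, not part of the portable sockets
API, so treat the following as a sketch under that assumption:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Per-socket keepalive tuning as found on Linux-style stacks;
     * option names and availability vary between systems. */
    int tune_keepalive(int sock)
    {
        int idle  = 600;  /* seconds of idleness before the first probe */
        int intvl = 60;   /* seconds between probes                     */
        int cnt   = 5;    /* probes sent before declaring the peer dead */

        if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
            return -1;
        if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl) < 0)
            return -1;
        if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt) < 0)
            return -1;
        return 0;
    }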

7) Can I set up a gateway to the Internet that translates IP addresses, so
that I don’t have to change all our internal addresses to an official
network?

A) There’s no general solution to this. Many protocols include IP
addresses in the application-level data (FTP’s “PORT” command is the most
notable), so it isn’t simply a matter of translating addresses in the IP
header. Also, if the network number(s) you’re using match those assigned
to another organization, your gateway won’t be able to communicate with
that organization (RFC 1597 proposes network numbers that are reserved for
private use, to avoid such conflicts, but if you’re already using a
different network number this won’t help you).

However, if you’re willing to live with limited access to the Internet from
internal hosts, the “proxy” servers developed for firewalls can be used as
a substitute for an address-translating gateway. See the firewall FAQ.

8) Are there object-oriented network programming tools?

A) Yes, and one such system is called ACE (ADAPTIVE Communication
Environment). Here is how to get more information and the software:

OBTAINING ACE

An HTML version of this README file is available at URL
http://www.cs.wustl.edu/~schmidt/ACE.html. All software and
documentation is available via both anonymous ftp and the Web.

ACE is available for anonymous ftp from the ics.uci.edu (128.195.1.1)
host in the gnu/C++_wrappers.tar.Z file (approximately .5 meg
compressed). This release contains the source code, documentation,
and example test drivers for the C++ wrapper libraries.

9) What other FAQs are related to this one?
comp.protocols.tcp-ip.ibmpc
Aboba, Bernard D.(1994) “comp.protocols.tcp-ip.ibmpc Frequently
Asked Questions (FAQ)” Usenet news.answers, available via
file://ftp.netcom.com/pub/ma/mailcom/IBMTCP/ibmtcp.zip,
57 pages.

comp.protocols.ppp
Archive-name: ppp-faq/part[1-8]
URL: http://cs.uni-bonn.de/ppp/part[1-8].html

comp.dcom.lans.ethernet
ftp site: dorm.rutgers.edu, pub/novell/DOCS
Ethernet Network Questions and Answers
Summarized from UseNet group comp.dcom.lans.ethernet

10) What other newsgroups deal with networking?

comp.dcom.cabling Cabling selection, installation and use.
comp.dcom.isdn The Integrated Services Digital Network
(ISDN).
comp.dcom.lans.ethernet Discussions of the Ethernet/IEEE 802.3
protocols.
comp.dcom.lans.fddi Discussions of the FDDI protocol suite.
comp.dcom.lans.misc Local area network hardware and software.
comp.dcom.lans.token-ring Installing and using token ring
networks.
comp.dcom.servers Selecting and operating data communications
servers.
comp.dcom.sys.cisco Info on Cisco routers and bridges.
comp.dcom.sys.wellfleet Wellfleet bridge & router systems hardware &
software.
comp.protocols.ibm Networking with IBM mainframes.
comp.protocols.iso The ISO protocol stack.
comp.protocols.kerberos The Kerberos authentication server.
comp.protocols.misc Various forms and types of protocol.
comp.protocols.nfs Discussion about the Network File System
protocol.
comp.protocols.ppp Discussion of the Internet Point to Point
Protocol.
comp.protocols.smb SMB file sharing protocol and Samba SMB
server/client.
comp.protocols.tcp-ip TCP and IP network protocols.
comp.protocols.tcp-ip.ibmpc TCP/IP for IBM(-like) personal
computers.
comp.security.misc Security issues of computers and networks.
comp.os.ms-windows.networking.misc Windows and other networks.
comp.os.ms-windows.networking.tcp-ip Windows and TCP/IP networking.
comp.os.ms-windows.networking.windows Windows’ built-in networking.
comp.os.os2.networking.misc Miscellaneous networking issues of
OS/2.
comp.os.os2.networking.tcp-ip TCP/IP under OS/2.
comp.sys.novell Discussion of Novell Netware products.

11) Van Jacobson explains TCP congestion avoidance.

I’ve attached Van J’s original posting on it (I seem to repost this every
6 months or so). If you want to see some real examples of this in action,
take a look at Chapter 21 of my “TCP/IP Illustrated, Volume 1”.

Rich Stevens

---------------------------------------------------------------------------
>From van@helios.ee.lbl.gov Mon Apr 30 01:44:05 1990
To: end2end-interest@ISI.EDU
Subject: modified TCP congestion avoidance algorithm
Date: Mon, 30 Apr 90 01:40:59 PDT
From: Van Jacobson <van@helios.ee.lbl.gov>
Status: RO

This is a description of the modified TCP congestion avoidance
algorithm that I promised at the teleconference.

BTW, on re-reading, I noticed there were several errors in
Lixia’s note besides the problem I noted at the teleconference.
I don’t know whether that’s because I mis-communicated the
algorithm at dinner (as I recall, I’d had some wine) or because
she’s convinced that TCP is ultimately irrelevant :). Either
way, you will probably be disappointed if you experiment with
what’s in that note.

First, I should point out once again that there are two
completely independent window adjustment algorithms running in
the sender: Slow-start is run when the pipe is empty (i.e.,
when first starting or re-starting after a timeout). Its goal
is to get the “ack clock” started so packets will be metered
into the network at a reasonable rate. The other algorithm,
congestion avoidance, is run any time *but* when (re-)starting
and is responsible for estimating the (dynamically varying)
pipesize. You will cause yourself, or me, no end of confusion
if you lump these separate algorithms (as Lixia’s message did).

The modifications described here are only to the congestion
avoidance algorithm, not to slow-start, and they are intended to
apply to large bandwidth-delay product paths (though they don’t
do any harm on other paths). Remember that with regular TCP (or
with slow-start/c-a TCP), throughput really starts to go to hell
when the probability of packet loss is on the order of the
bandwidth-delay product. E.g., you might expect a 1% packet
loss rate to translate into a 1% lower throughput but for, say,
a TCP connection with a 100 packet b-d p. (= window), it results
in a 50-75% throughput loss. To make TCP effective on fat
pipes, it would be nice if throughput degraded only as function
of loss probability rather than as the product of the loss
probabilty and the b-d p. (Assuming, of course, that we can do
this without sacrificing congestion avoidance.)

These mods do two things: (1) prevent the pipe from going empty
after a loss (if the pipe doesn’t go empty, you won’t have to
waste round-trip times re-filling it) and (2) correctly account
for the amount of data actually in the pipe (since that’s what
congestion avoidance is supposed to be estimating and adapting to).

For (1), remember that we use a packet loss as a signal that the
pipe is overfull (congested) and that packet loss can be
detected one of two different ways: (a) via a retransmit
timeout or (b) when some small number (3-4) of consecutive
duplicate acks has been received (the “fast retransmit”
algorithm). In case (a), the pipe is guaranteed to be empty so
we must slow-start. In case (b), if the duplicate ack
threshhold is small compared to the bandwidth-delay product, we
will detect the loss with the pipe almost full. I.e., given a
threshhold of 3 packets and an LBL-MIT bandwidth-delay of around
24KB or 16 packets (assuming 1500 byte MTUs), the pipe is 75%
full when fast-retransmit detects a loss (actually, until
gateways start doing some sort of congestion control, the pipe
is overfull when the loss is detected so *at least* 75% of the
packets needed for ack clocking are in transit when
fast-retransmit happens). Since the pipe is full, there’s no
need to slow-start after a fast-retransmit.

For (2), consider what a duplicate ack means: either the
network duplicated a packet (i.e., the NSFNet braindead IBM
token ring adapters) or the receiver got an out-of-order packet.
The usual cause of out-of-order packets at the receiver is a
missing packet. I.e., if there are W packets in transit and one
is dropped, the receiver will get W-1 out-of-order and
(4.3-tahoe TCP will) generate W-1 duplicate acks. If the
`consecutive duplicates’ threshhold is set high enough, we can
reasonably assume that duplicate acks mean dropped packets.

But there’s more information in the ack: The receiver can only
generate one in response to a packet arrival. I.e., a duplicate
ack means that a packet has left the network (it is now cached
at the receiver). If the sender is limitted by the congestion
window, a packet can now be sent. (The congestion window is a
count of how many packets will fit in the pipe. The ack says a
packet has left the pipe so a new one can be added to take its
place.) To put this another way, say the current congestion
window is C (i.e, C packets will fit in the pipe) and D
duplicate acks have been received. Then only C-D packets are
actually in the pipe and the sender wants to use a window of C+D
packets to fill the pipe to its estimated capacity (C+D sent -
D received = C in pipe).

So, conceptually, the slow-start/cong.avoid/fast-rexmit changes
are:

– The sender’s input routine is changed to set `cwnd’ to `ssthresh’
when the dup ack threshhold is reached. [It used to set cwnd to
mss to force a slow-start.] Everything else stays the same.

– The sender’s output routine is changed to use an effective window
of min(snd_wnd, cwnd + dupacks*mss) [the change is the addition
of the `dupacks*mss’ term.] `Dupacks’ is zero until the rexmit
threshhold is reached and zero except when receiving a sequence
of duplicate acks.

The actual implementation is slightly different than the above
because I wanted to avoid the multiply in the output routine
(multiplies are expensive on some risc machines). A diff of the
old and new fastrexmit code is attached (your line numbers will
vary).

Note that we still do congestion avoidance (i.e., the window is
reduced by 50% when we detect the packet loss). But, as long as
the receiver’s offered window is large enough (it needs to be at
most twice the bandwidth-delay product), we continue sending
packets (at exactly half the rate we were sending before the
loss) even after the loss is detected so the pipe stays full at
exactly the level we want and a slow-start isn’t necessary.

Some algebra might make this last clear: Say U is the sequence
number of the first un-acked packet and we are using a window
size of W when packet U is dropped. Packets [U..U+W) are in
transit. When the loss is detected, we send packet U and pull
the window back to W/2. But in the round-trip time it takes
the U retransmit to fill the receiver’s hole and an ack to get
back, W-1 dup acks will arrive (one for each packet in transit).
The window is effectively inflated by one packet for each of
these acks so packets [U..U+W/2+W-1) are sent. But we don’t
re-send packets unless we know they’ve been lost so the amount
actually sent between the loss detection and the recovery ack is
(U+W/2+W-1) - (U+W) = W/2-1 which is exactly the amount congestion
avoidance allows us to send (if we add in the rexmit of U). The
recovery ack is for packet U+W so when the effective window is
pulled back from W/2+W-1 to W/2 (which happens because the
recovery ack is `new’ and sets dupack to zero), we are allowed
to send up to packet U+W+W/2 which is exactly the first packet
we haven’t yet sent. (I.e., there is no sudden burst of packets
as the `hole’ is filled.) Also, when sending packets between
the loss detection and the recovery ack, we do nothing for the
first W/2 dup acks (because they only allow us to send packets
we’ve already sent) and the bottleneck gateway is given W/2
packet times to clean out its backlog. Thus when we start
sending our W/2-1 new packets, the bottleneck queue is as empty
as it can be.

[I don’t know if you can get the flavor of what happens from
this description — it’s hard to see without a picture. But I
was delighted by how beautifully it worked — it was like
watching the innards of an engine when all the separate motions
of crank, pistons and valves suddenly fit together and
everything appears in exactly the right place at just the right
time.]

Also note that this algorithm interoperates with old tcp’s: Most
pre-tahoe tcp’s don’t generate the dup acks on out-of-order packets.
If we don’t get the dup acks, fast retransmit never fires and the
window is never inflated so everything happens in the old way (via
timeouts). Everything works just as it did without the new algorithm
(and just as slow).

If you want to simulate this, the intended environment is:

– large bandwidth-delay product (say 20 or more packets)

– receiver advertising window of two b-d p (or, equivalently,
advertised window of the unloaded b-d p but two or more
connections simultaneously sharing the path).

– average loss rate (from congestion or other source) less than
one lost packet per round-trip-time per active connection.
(The algorithm works at higher loss rate but the TCP selective
ack option has to be implemented otherwise the pipe will go empty
waiting to fill the second hole and throughput will once again
degrade at the product of the loss rate and b-d p. With selective
ack, throughput is insensitive to b-d p at any loss rate.)

And, of course, we should always remember that good engineering
practise suggests a b-d p worth of buffer at each bottleneck —
less buffer and your simulation will exhibit the interesting
pathologies of a poorly engineered network but will probably
tell you little about the workings of the algorithm (unless the
algorithm misbehaves badly under these conditions but my
simulations and measurements say that it doesn’t). In these
days of $100/megabyte memory, I dearly hope that this particular
example of bad engineering is of historical interest only.

– Van

-----------------
*** /tmp/,RCSt1a26717 Mon Apr 30 01:35:17 1990
--- tcp_input.c Mon Apr 30 01:33:30 1990
***************
*** 834,850 ****
* Kludge snd_nxt & the congestion
* window so we send only this one
! * packet. If this packet fills the
! * only hole in the receiver’s seq.
! * space, the next real ack will fully
! * open our window. This means we
! * have to do the usual slow-start to
! * not overwhelm an intermediate gateway
! * with a burst of packets. Leave
! * here with the congestion window set
! * to allow 2 packets on the next real
! * ack and the exp-to-linear thresh
! * set for half the current window
! * size (since we know we’re losing at
! * the current window size).
*/
if (tp->t_timer[TCPT_REXMT] == 0 ||
--- 834,850 ----
* Kludge snd_nxt & the congestion
* window so we send only this one
! * packet.
! *
! * We know we’re losing at the current
! * window size so do congestion avoidance
! * (set ssthresh to half the current window
! * and pull our congestion window back to
! * the new ssthresh).
! *
! * Dup acks mean that packets have left the
! * network (they’re now cached at the receiver)
! * so bump cwnd by the amount in the receiver
! * to keep a constant cwnd packets in the
! * network.
*/
if (tp->t_timer[TCPT_REXMT] == 0 ||
***************
*** 853,864 ****
else if (++tp->t_dupacks == tcprexmtthresh) {
tcp_seq onxt = tp->snd_nxt;
! u_int win =
! MIN(tp->snd_wnd, tp->snd_cwnd) / 2 /
! tp->t_maxseg;

if (win < 2)
        win = 2;
tp->snd_ssthresh = win * tp->t_maxseg;

tp->t_timer[TCPT_REXMT] = 0;
tp->t_rtt = 0;
--- 853,864 ----
else if (++tp->t_dupacks == tcprexmtthresh) {
tcp_seq onxt = tp->snd_nxt;
! u_int win = MIN(tp->snd_wnd,
! tp->snd_cwnd);

+ win /= tp->t_maxseg;
+ win >>= 1;
if (win < 2)
        win = 2;
tp->snd_ssthresh = win * tp->t_maxseg;
tp->t_timer[TCPT_REXMT] = 0;
tp->t_rtt = 0;
***************
*** 866,873 ****
tp->snd_cwnd = tp->t_maxseg;
(void) tcp_output(tp);
!
if (SEQ_GT(onxt, tp->snd_nxt))
tp->snd_nxt = onxt;
goto drop;
}
} else
--- 866,879 ----
tp->snd_cwnd = tp->t_maxseg;
(void) tcp_output(tp);
! tp->snd_cwnd = tp->snd_ssthresh +
! tp->t_maxseg *
! tp->t_dupacks;
if (SEQ_GT(onxt, tp->snd_nxt))
tp->snd_nxt = onxt;
goto drop;
+ } else if (tp->t_dupacks > tcprexmtthresh) {
+ tp->snd_cwnd += tp->t_maxseg;
+ (void) tcp_output(tp);
+ goto drop;
}
} else
***************
*** 874,877 ****
--- 880,890 ----
tp->t_dupacks = 0;
break;
+ }
+ if (tp->t_dupacks) {
+ /*
+ * the congestion window was inflated to account for
+ * the other side’s cached packets – retract it.
+ */
+ tp->snd_cwnd = tp->snd_ssthresh;
}
tp->t_dupacks = 0;
*** /tmp/,RCSt1a26725 Mon Apr 30 01:35:23 1990
--- tcp_timer.c Mon Apr 30 00:36:29 1990
***************
*** 223,226 ****
--- 223,227 ----
tp->snd_cwnd = tp->t_maxseg;
tp->snd_ssthresh = win * tp->t_maxseg;
+ tp->t_dupacks = 0;
}
(void) tcp_output(tp);

>From van@helios.ee.lbl.gov Mon Apr 30 10:37:36 1990
To: end2end-interest@ISI.EDU
Subject: modified TCP congestion avoidance algorithm (correction)
Date: Mon, 30 Apr 90 10:36:12 PDT
From: Van Jacobson <van@helios.ee.lbl.gov>
Status: RO

I shouldn’t make last minute ‘fixes’. The code I sent out last
night had a small error:

*** t.c Mon Apr 30 10:28:52 1990
--- tcp_input.c Mon Apr 30 10:30:41 1990
***************
*** 885,893 ****
* the congestion window was inflated to account for
* the other side’s cached packets – retract it.
*/
! tp->snd_cwnd = tp->snd_ssthresh;
}
- tp->t_dupacks = 0;
if (SEQ_GT(ti->ti_ack, tp->snd_max)) {
tcpstat.tcps_rcvacktoomuch++;
goto dropafterack;
--- 885,894 ----
* the congestion window was inflated to account for
* the other side’s cached packets – retract it.
*/
! if (tp->snd_cwnd > tp->snd_ssthresh)
! tp->snd_cwnd = tp->snd_ssthresh;
! tp->t_dupacks = 0;
}
if (SEQ_GT(ti->ti_ack, tp->snd_max)) {
tcpstat.tcps_rcvacktoomuch++;
goto dropafterack;

12) Can I use a single bit subnet?

A) It would seem that the consensus is no. The best citable answer
follows.

>From RFC1122:
“3.3.6 Broadcasts
Section 3.2.1.3 defined the four standard IP broadcast address
forms:
Limited Broadcast: {-1, -1}
Directed Broadcast: {<Network-number>, -1}
Subnet Directed Broadcast:
{<Network-number>, <Subnet-number>, -1}
All-Subnets Directed Broadcast: {<Network-number>, -1, -1}”

All-Subnets Directed broadcasts are being deprecated in favor of IP
multicast, but were very much defined at the time RFC1122 was written.
Thus a Subnet Directed Broadcast to a subnet of all ones is not
distinguishable from an All-Subnets Directed Broadcast.

For those old systems that used all zeros for broadcast in IP addresses,
a similar argument can be made against the subnet of all zeros.

Also, for old routing protocols like RIP, a route to subnet zero
is not distinguishable from the route to the entire network number
(except possibly by context).

Most of today’s systems don’t support variable length subnet masks
(VLSM), and for such systems the above is true. However, all the major
router vendors and *some* Unix systems (BSD 4.4 based ones) support
VLSMs, and in that case the situation is more complicated :-)

With VLSMs (necessary to support CIDR, see RFC 1519), you can utilize the
address space more efficiently. Routing lookups are based on *longest*
match, and this means that you can for instance subnet the class C net
with a mask of 255.255.255.224 (27 bits) in addition to the subnet mask
of 255.255.255.192 (26 bits) given above. You will then be able to use
the addresses x.x.x.33 through x.x.x.62 (first three bits 001) and the
addresses x.x.x.193 through x.x.x.222 (first three bits 110) with this
new subnet mask. And you can continue with a subnet mask of 28 bits, etc.
(Note also, by the way, that non-contiguous subnet masks are deprecated.)
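
A few lines of C can be used to check this sort of address
arithmetic; the function below just reproduces the last-octet ranges
quoted above for a 27-bit mask and is illustrative only:

    #include <stdio.h>

    /* Print the subnet, broadcast, and usable host range (last octet
     * only, class C style) containing 'host' under an n-bit mask. */
    static void show_range(unsigned host, unsigned maskbits)
    {
        unsigned mask   = (0xffU << (8 - (maskbits - 24))) & 0xffU;
        unsigned subnet = host & mask;              /* network part       */
        unsigned bcast  = subnet | (~mask & 0xffU); /* all-ones host part */

        printf("/%u: x.x.x.%u - x.x.x.%u (hosts x.x.x.%u - x.x.x.%u)\n",
               maskbits, subnet, bcast, subnet + 1, bcast - 1);
    }

    int main(void)
    {
        show_range(33, 27);    /* 255.255.255.224: hosts x.x.x.33 - x.x.x.62   */
        show_range(193, 27);   /* 255.255.255.224: hosts x.x.x.193 - x.x.x.222 */
        return 0;
    }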

This is all very nicely covered in the paper by Havard Eidnes:

Practical Considerations for Network Address using a CIDR Block Allocation
Proceedings of INET ’93

This paper is available with anonymous FTP from

aun.uninett.no:/pub/misc/eidnes-cidr.ps

The same paper, with minor revisions, is one of the articles in the
special Internetworking issue of Communications of the ACM (last month,
I believe).

> I have been told that some network equipment (Cisco I think was the vendor
> named) will not correctly handle subnets that violated that standard.
As far as I know cisco is one of the router vendors that *do* handle
VLSMs correctly. Could you substantiate this claim?

Steinar Haug, SINTEF RUNIT, University of Trondheim, NORWAY
Email: Steinar.Haug@runit.sintef.no


George V. Neville-Neil work: gnn@wrs.com home: gnn@netcom.com
NIC: GN82

This signature kept blank due to the CDA.

Site Security Handbook, 1991

Network Working Group                                        P. Holbrook
Request for Comments: 1244                                        CICNet
FYI: 8                                                       J. Reynolds
                                                                     ISI
                                                                 Editors
                                                               July 1991

Site Security Handbook

Status of this Memo

This handbook is the product of the Site Security Policy Handbook
Working Group (SSPHWG), a combined effort of the Security Area and
User Services Area of the Internet Engineering Task Force (IETF).
This FYI RFC provides information for the Internet community. It
does not specify an Internet standard. Distribution of this memo is
unlimited.

Contributing Authors

The following are the authors of the Site Security Handbook. Without
their dedication, this handbook would not have been possible.

Dave Curry (Purdue University), Sean Kirkpatrick (Unisys), Tom
Longstaff (LLNL), Greg Hollingsworth (Johns Hopkins University),
Jeffrey Carpenter (University of Pittsburgh), Barbara Fraser (CERT),
Fred Ostapik (SRI NISC), Allen Sturtevant (LLNL), Dan Long (BBN), Jim
Duncan (Pennsylvania State University), and Frank Byrum (DEC).

Editors’ Note

This FYI RFC is a first attempt at providing Internet users guidance
on how to deal with security issues in the Internet. As such, this
document is necessarily incomplete. There are some clear shortfalls;
for example, this document focuses mostly on resources available in
the United States. In the spirit of the Internet’s “Request for
Comments” series of notes, we encourage feedback from users of this
handbook, in particular from those who utilize this document to craft
their own policies and procedures.

This handbook is meant to be a starting place for further research
and should be viewed as a useful resource, but not the final
authority. Different organizations and jurisdictions will have
different resources and rules. Talk to your local organizations,
consult an informed lawyer, or consult with local and national law
enforcement. These groups can help fill in the gaps that this
document cannot hope to cover.

Finally, we intend for this FYI RFC to grow and evolve. Please send
comments and suggestions to: ssphwg@cert.sei.cmu.edu.

Table of Contents

1. Introduction…………………………………………….. 3
1.1 Purpose of this Work…………………………………….. 3
1.2 Audience……………………………………………….. 3
1.3 Definitions…………………………………………….. 4
1.4 Related Work……………………………………………. 4
1.5 Scope………………………………………………….. 4
1.6 Why Do We Need Security Policies and Procedures?……………. 5
1.7 Basic Approach………………………………………….. 7
1.8 Organization of this Document…………………………….. 7
2. Establishing Official Site Policy on Computer Security……….. 9
2.1 Brief Overview………………………………………….. 9
2.2 Risk Assessment…………………………………………. 10
2.3 Policy Issues…………………………………………… 13
2.4 What Happens When the Policy Is Violated…………………… 19
2.5 Locking In or Out……………………………………….. 21
2.6 Interpreting the Policy………………………………….. 23
2.7 Publicizing the Policy…………………………………… 23
3. Establishing Procedures to Prevent Security Problems…………. 24
3.1 Security Policy Defines What Needs to be Protected………….. 24
3.2 Identifying Possible Problems……………………………… 24
3.3 Choose Controls to Protect Assets in a Cost-Effective Way……. 26
3.4 Use Multiple Strategies to Protect Assets………………….. 26
3.5 Physical Security……………………………………….. 27
3.6 Procedures to Recognize Unauthorized Activity………………. 27
3.7 Define Actions to Take When Unauthorized Activity is Suspected.. 29
3.8 Communicating Security Policy…………………………….. 30
3.9 Resources to Prevent Security Breaches…………………….. 34
4. Types of Security Procedures………………………………. 56
4.1 System Security Audits…………………………………… 56
4.2 Account Management Procedures…………………………….. 57
4.3 Password Management Procedures……………………………. 57
4.4 Configuration Management Procedures……………………….. 60
5. Incident Handling………………………………………… 61
5.1 Overview……………………………………………….. 61
5.2 Evaluation……………………………………………… 65
5.3 Possible Types of Notification……………………………. 67
5.4 Response……………………………………………….. 71
5.5 Legal/Investigative……………………………………… 73
5.6 Documentation Logs………………………………………. 77
6. Establishing Post-Incident Procedures………………………. 78
6.1 Overview……………………………………………….. 78
6.2 Removing Vulnerabilities…………………………………. 78
6.3 Capturing Lessons Learned………………………………… 80
6.4 Upgrading Policies and Procedures…………………………. 81
7. References………………………………………………. 81
8. Annotated Bibliography……………………………………. 83
8.1 Computer Law……………………………………………. 84
8.2 Computer Security……………………………………….. 85
8.3 Ethics…………………………………………………. 91
8.4 The Internet Worm……………………………………….. 93
8.5 National Computer Security Center (NCSC)…………………… 95
8.6 Security Checklists……………………………………… 99
8.7 Additional Publications………………………………….. 99
9. Acknowledgements…………………………………………101
10. Security Considerations…………………………………..101
11. Authors’ Addresses……………………………………….101

1. Introduction

1.1 Purpose of this Work

This handbook is a guide to setting computer security policies and
procedures for sites that have systems on the Internet. This guide
lists issues and factors that a site must consider when setting their
own policies. It makes some recommendations and gives discussions of
relevant areas.

This guide is only a framework for setting security policies and
procedures. In order to have an effective set of policies and
procedures, a site will have to make many decisions, gain agreement,
and then communicate and implement the policies.

1.2 Audience

The audience for this work are system administrators and decision
makers (who are more traditionally called “administrators” or “middle
management”) at sites. This document is not directed at programmers
or those trying to create secure programs or systems. The focus of
this document is on the policies and procedures that need to be in
place to support any technical security features that a site may be
implementing.

The primary audience for this work are sites that are members of the
Internet community. However, this document should be useful to any
site that allows communication with other sites. As a general guide
to security policies, this document may also be useful to sites with
isolated systems.

1.3 Definitions

For the purposes of this guide, a “site” is any organization that
owns computers or network-related resources. These resources may
include host computers that users use, routers, terminal servers,
PC’s or other devices that have access to the Internet. A site may
be an end user of Internet services or a service provider such as a
regional network. However, most of the focus of this guide is on
those end users of Internet services.

We assume that the site has the ability to set policies and
procedures for itself with the concurrence and support from those who
actually own the resources.

The “Internet” is the set of networks and machines that use the
TCP/IP protocol suite, connected through gateways, and sharing
common name and address spaces [1].

The term “system administrator” is used to cover all those who are
responsible for the day-to-day operation of resources. This may be a
number of individuals or an organization.

The term “decision maker” refers to those people at a site who set or
approve policy. These are often (but not always) the people who own
the resources.

1.4 Related Work

The IETF Security Policy Working Group (SPWG) is working on a set of
recommended security policy guidelines for the Internet [23]. These
guidelines may be adopted as policy by regional networks or owners of
other resources. This handbook should be a useful tool to help sites
implement those policies as desired or required. However, even
implementing the proposed policies isn’t enough to secure a site.
The proposed Internet policies deal only with network access
security. It says nothing about how sites should deal with local
security issues.

1.5 Scope

This document covers issues about what a computer security policy
should contain, what kinds of procedures are needed to enforce
security, and some recommendations about how to deal with the
problem. When developing a security policy, close attention should
be paid not only to the security needs and requirements of the local
network, but also to the security needs and requirements of the other
interconnected networks.

This is not a cookbook for computer security. Each site has
different needs; the security needs of a corporation might well be
different than the security needs of an academic institution. Any
security plan has to conform to the needs and culture of the site.

This handbook does not cover details of how to do risk assessment,
contingency planning, or physical security. These things are
essential in setting and implementing effective security policy, but
this document leaves treatment of those issues to other documents.
We will try to provide some pointers in that direction.

This document also doesn’t talk about how to design or implement
secure systems or programs.

1.6 Why Do We Need Security Policies and Procedures?

For most sites, the interest in computer security is proportional to
the perception of risk and threats.

The world of computers has changed dramatically over the past
twenty-five years. Twenty-five years ago, most computers were
centralized and managed by data centers. Computers were kept in
locked rooms and staffs of people made sure they were carefully
managed and physically secured. Links outside a site were unusual.
Computer security threats were rare, and were basically concerned
with insiders: authorized users misusing accounts, theft and
vandalism, and so forth. These threats were well understood and
dealt with using standard techniques: computers behind locked doors,
and accounting for all resources.

Computing in the 1990’s is radically different. Many systems are in
private offices and labs, often managed by individuals or persons
employed outside a computer center. Many systems are connected into
the Internet, and from there around the world: the United States,
Europe, Asia, and Australia are all connected together.

Security threats are different today. The time honored advice says
“don’t write your password down and put it in your desk” lest someone
find it. With world-wide Internet connections, someone could get
into your system from the other side of the world and steal your
password in the middle of the night when your building is locked up.
Viruses and worms can be passed from machine to machine. The
Internet allows the electronic equivalent of the thief who looks for
open windows and doors; now a person can check hundreds of machines
for vulnerabilities in a few hours.

System administrators and decision makers have to understand the
security threats that exist, what the risk and cost of a problem
would be, and what kind of action they want to take (if any) to
prevent and respond to security threats.

As an illustration of some of the issues that need to be dealt with
in security problems, consider the following scenarios (thanks to
Russell Brand [2, BRAND] for these):

– A system programmer gets a call reporting that a
major underground cracker newsletter is being
distributed from the administrative machine at his
center to five thousand sites in the US and
Western Europe.

Eight weeks later, the authorities call to inform
you the information in one of these newsletters
was used to disable “911” in a major city for
five hours.

– A user calls in to report that he can’t login to his
account at 3 o’clock in the morning on a Saturday. The
system staffer can’t login either. After rebooting to
single user mode, he finds that the password file is empty.
By Monday morning, your staff determines that a number
of privileged file transfers took place between this
machine and a local university.

Tuesday morning a copy of the deleted password file is
found on the university machine along with password
files for a dozen other machines.

A week later you find that your system initialization
files had been altered in a hostile fashion.

– You receive a call saying that a breakin to a government
lab occurred from one of your center’s machines. You
are requested to provide accounting files to help
track down the attacker.

A week later you are given a list of machines at your
site that have been broken into.

– A reporter calls up asking about the breakin at your
center. You haven’t heard of any such breakin.

Three days later, you learn that there was a breakin.
The center director had his wife’s name as a password.

– A change in system binaries is detected.

The day that it is corrected, they again are changed.
This repeats itself for some weeks.

– If an intruder is found on your system, should you
leave the system open to monitor the situation or should
you close down the holes and open them up again later?

– If an intruder is using your site, should you call law
enforcement? Who makes that decision? If law enforcement asks
you to leave your site open, who makes that decision?

– What steps should be taken if another site calls you and says
they see activity coming from an account on your system? What
if the account is owned by a local manager?

1.7 Basic Approach

Setting security policies and procedures really means developing a
plan for how to deal with computer security. One way to approach
this task is suggested by Fites, et al. [3, FITES]:

– Look at what you are trying to protect.
– Look at what you need to protect it from.
– Determine how likely the threats are.
– Implement measures which will protect your assets in a
cost-effective manner.
– Review the process continuously, and improve things every time
a weakness is found.

This handbook will concentrate mostly on the last two steps, but the
first three are critically important to making effective decisions
about security. One old truism in security is that the cost of
protecting yourself against a threat should be less than the cost
of recovering if the threat were to strike you. Without reasonable
knowledge of what you are protecting and what the likely threats are,
following this rule could be difficult.

1.8 Organization of this Document

This document is organized into seven parts in addition to this
introduction.

The basic form of each section is to discuss issues that a site might
want to consider in creating a computer security policy and setting
procedures to implement that policy. In some cases, possible options
are discussed along with some of the ramifications of those
choices. As far as possible, this document tries not to dictate the
choices a site should make, since these depend on local
circumstances. Some of the issues brought up may not apply to all
sites. Nonetheless, all sites should at least consider the issues
brought up here to ensure that they do not miss some important area.

The overall flow of the document is to discuss policy issues followed
by the issues that come up in creating procedures to implement the
policies.

Section 2 discusses setting official site policies for access to
computing resources. It also goes into the issue of what happens
when the policy is violated. The policies will drive the procedures
that need to be created, so decision makers will need to make choices
about policies before many of the procedural issues in following
sections can be dealt with. A key part of creating policies is doing
some kind of risk assessment to decide what really needs to be
protected and the level of resources that should be applied to
protect them.

Once policies are in place, procedures to prevent future security
problems should be established. Section 3 defines and suggests
actions to take when unauthorized activity is suspected. Resources
to prevent security breaches are also discussed.

Section 4 discusses types of procedures to prevent security problems.
Prevention is a key to security; as an example, the Computer
Emergency Response Team/Coordination Center (CERT/CC) at Carnegie-
Mellon University (CMU) estimates that 80% or more of the problems
they see have to do with poorly chosen passwords.

Section 5 discusses incident handling: what kinds of issues a site
faces when someone violates the security policy. Many decisions will
have to be made on the spot as the incident occurs, but many of the
options and issues can be discussed in advance. At the very least,
responsibilities and methods of communication can be established
before an incident. Again, the choices here are influenced by the
policies discussed in section 2.

Section 6 deals with what happens after a security violation has been
dealt with. Security planning is an on-going cycle; just after an
incident has occurred is an excellent opportunity to improve policies
and procedures.

The rest of the document provides references and an annotated
bibliography.

2. Establishing Official Site Policy on Computer Security

2.1 Brief Overview

2.1.1 Organization Issues

The goal in developing an official site policy on computer
security is to define the organization’s expectations of proper
computer and network use and to define procedures to prevent and
respond to security incidents. In order to do this, aspects of
the particular organization must be considered.

First, the goals and direction of the organization should be
considered. For example, a military base may have very different
security concerns from those of a university.

Second, the site security policy developed must conform to
existing policies, rules, regulations and laws that the
organization is subject to. Therefore it will be necessary to
identify these and take them into consideration while developing
the policy.

Third, unless the local network is completely isolated and
standalone, it is necessary to consider security implications in a
more global context. The policy should address the issues when
local security problems develop as a result of a remote site as
well as when problems occur on remote systems as a result of a
local host or user.

2.1.2 Who Makes the Policy?

Policy creation must be a joint effort by technical personnel, who
understand the full ramifications of the proposed policy and the
implementation of the policy, and by decision makers who have the
power to enforce the policy. A policy which is neither
implementable nor enforceable is useless.

Since a computer security policy can affect everyone in an
organization, it is worth taking some care to make sure you have
the right level of authority in on the policy decisions. Though a
particular group (such as a campus information services group) may
have responsibility for enforcing a policy, an even higher group
may have to support and approve the policy.

2.1.3 Who is Involved?

Establishing a site policy has the potential for involving every
computer user at the site in a variety of ways. Computer users
may be responsible for personal password administration. Systems
managers are obligated to fix security holes and to oversee the
system.

It is critical to get the right set of people involved at the
start of the process. There may already be groups concerned with
security who would consider a computer security policy to be their
area. Some of the types of groups that might be involved include
auditing/control, organizations that deal with physical security,
campus information systems groups, and so forth. Asking these
types of groups to “buy in” from the start can help facilitate the
acceptance of the policy.

2.1.4 Responsibilities

A key element of a computer security policy is making sure
everyone knows their own responsibility for maintaining security.
A computer security policy cannot anticipate all possibilities;
however, it can ensure that each kind of problem does have someone
assigned to deal with it.

There may be levels of responsibility associated with a policy on
computer security. At one level, each user of a computing
resource may have a responsibility to protect his account. A user
who allows his account to be compromised increases the chances of
compromising other accounts or resources.

System managers may form another responsibility level: they must
help to ensure the security of the computer system. Network
managers may reside at yet another level.

2.2 Risk Assessment

2.2.1 General Discussion

One of the most important reasons for creating a computer security
policy is to ensure that efforts spent on security yield cost
effective benefits. Although this may seem obvious, it is
possible to be misled about where the effort is needed. As an
example, there is a great deal of publicity about intruders on
computer systems; yet most surveys of computer security show that
for most organizations, the actual loss from “insiders” is much
greater.

Risk analysis involves determining what you need to protect, what
you need to protect it from, and how to protect it. It is the
process of examining all of your risks, and ranking those risks by
level of severity. This process involves making cost-effective
decisions on what you want to protect. The old security adage
says that you should not spend more to protect something than it
is actually worth.

A full treatment of risk analysis is outside the scope of this
document. [3, FITES] and [16, PFLEEGER] provide introductions to
this topic. However, there are two elements of a risk analysis
that will be briefly covered in the next two sections:

1. Identifying the assets
2. Identifying the threats

For each asset, the basic goals of security are availability,
confidentiality, and integrity. Each threat should be examined
with an eye to how the threat could affect these areas.

2.2.2 Identifying the Assets

One step in a risk analysis is to identify all the things that
need to be protected. Some things are obvious, like all the
various pieces of hardware, but some are overlooked, such as the
people who actually use the systems. The essential point is to
list all things that could be affected by a security problem.

One list of categories is suggested by Pfleeger [16, PFLEEGER,
page 459]; this list is adapted from that source:

1. Hardware: cpus, boards, keyboards, terminals,
workstations, personal computers, printers, disk
drives, communication lines, terminal servers, routers.

2. Software: source programs, object programs,
utilities, diagnostic programs, operating systems,
communication programs.

3. Data: during execution, stored on-line, archived off-line,
backups, audit logs, databases, in transit over
communication media.

4. People: users, people needed to run systems.

5. Documentation: on programs, hardware, systems, local
administrative procedures.

6. Supplies: paper, forms, ribbons, magnetic media.

2.2.3 Identifying the Threats

Once the assets requiring protection are identified, it is
necessary to identify threats to those assets. The threats can
then be examined to determine what potential for loss exists. It
helps to consider from what threats you are trying to protect your
assets.

The following sections describe a few of the possible threats.

2.2.3.1 Unauthorized Access

A common threat that concerns many sites is unauthorized access
to computing facilities. Unauthorized access takes many forms.
One means of unauthorized access is the use of another user’s
account to gain access to a system. The use of any computer
resource without prior permission may be considered
unauthorized access to computing facilities.

The seriousness of an unauthorized access will vary from site
to site. For some sites, the mere act of granting access to an
unauthorized user may cause irreparable harm by negative media
coverage. For other sites, an unauthorized access opens the
door to other security threats. In addition, some sites may be
more frequent targets than others; hence the risk from
unauthorized access will vary from site to site. The Computer
Emergency Response Team (CERT – see section 3.9.7.3.1) has
observed that well-known universities, government sites, and
military sites seem to attract more intruders.

2.2.3.2 Disclosure of Information

Another common threat is disclosure of information. Determine
the value or sensitivity of the information stored on your
computers. Disclosure of a password file might allow for
future unauthorized accesses. A glimpse of a proposal may give
a competitor an unfair advantage. A technical paper may
contain years of valuable research.

2.2.3.3 Denial of Service

Computers and networks provide valuable services to their
users. Many people rely on these services in order to perform
their jobs efficiently. When these services are not available
when called upon, a loss in productivity results.

Denial of service comes in many forms and might affect users in
a number of ways. A network may be rendered unusable by a
rogue packet, jamming, or by a disabled network component. A
virus might slow down or cripple a computer system. Each site
should determine which services are essential, and for each of
these services determine the effect on the site if that service
were to become disabled.

2.3 Policy Issues

There are a number of issues that must be addressed when developing a
security policy. These are:

1. Who is allowed to use the resources?
2. What is the proper use of the resources?
3. Who is authorized to grant access and approve usage?
4. Who may have system administration privileges?
5. What are the user’s rights and responsibilities?
6. What are the rights and responsibilities of the
system administrator vs. those of the user?
7. What do you do with sensitive information?

These issues will be discussed below. In addition you may wish to
include a section in your policy concerning ethical use of computing
resources. Parker, Swope and Baker [17, PARKER90] and Forester and
Morrison [18, FORESTER] are two useful references that address
ethical issues.

2.3.1 Who is Allowed to use the Resources?

One step you must take in developing your security policy is
defining who is allowed to use your system and services. The
policy should explicitly state who is authorized to use what
resources.

2.3.2 What is the Proper Use of the Resources?

After determining who is allowed access to system resources it is
necessary to provide guidelines for the acceptable use of the
resources. You may have different guidelines for different types
of users (e.g., students, faculty, external users). The policy
should state what is acceptable use as well as unacceptable use.
It should also include types of use that may be restricted.

Define limits to access and authority. You will need to consider
the level of access various users will have and what resources
will be available or restricted to various groups of people.

Your acceptable use policy should clearly state that individual
users are responsible for their actions. Their responsibility
exists regardless of the security mechanisms that are in place.
It should be clearly stated that breaking into accounts or
bypassing security is not permitted.

The following points should be covered when developing an
acceptable use policy:

o Is breaking into accounts permitted?
o Is cracking passwords permitted?
o Is disrupting service permitted?
o Should users assume that a file being world-readable
grants them the authorization to read it?
o Should users be permitted to modify files that are
not their own even if they happen to have write
permission?
o Should users share accounts?

The answer to most of these questions will be “no”.

You may wish to incorporate a statement in your policies
concerning copyrighted and licensed software. Licensing
agreements with vendors may require some sort of effort on your
part to ensure that the license is not violated. In addition, you
may wish to inform users that the copying of copyrighted software
may be a violation of the copyright laws, and is not permitted.

Specifically concerning copyrighted and/or licensed software, you
may wish to include the following information:

o Copyrighted and licensed software may not be duplicated
unless it is explicitly stated that you may do so.
o Methods of conveying information on the
copyright/licensed status of software.
o When in doubt, DON’T COPY.

Your acceptable use policy is very important. A policy which does
not clearly state what is not permitted may leave you unable to
prove that a user violated policy.

There are exception cases, such as tiger teams and users or
administrators who ask for a “license to hack”: you may face the
situation where users want to “hack” on your services for
security research purposes. You should develop a policy that will
determine whether you will permit this type of research on your
services and if so, what your guidelines for such research will
be.

Points you may wish to cover in this area:

o Whether it is permitted at all.
o What type of activity is permitted: breaking in, releasing
worms, releasing viruses, etc..
o What type of controls must be in place to ensure that it
does not get out of control (e.g., separate a segment of
your network for these tests).
o How you will protect other users from being victims of
these activities, including external users and networks.
o The process for obtaining permission to conduct these
tests.

In cases where you do permit these activities, you should isolate
the portions of the network that are being tested from your main
network. Worms and viruses should never be released on a live
network.

You may also wish to employ, contract, or otherwise solicit one or
more people or organizations to evaluate the security of your
services, which may include “hacking”. You may wish to provide
for this in your policy.

2.3.3 Who Is Authorized to Grant Access and Approve Usage?

Your policy should state who is authorized to grant access to your
services. Further, it must be determined what type of access they
are permitted to give. If you do not have control over who is
granted access to your system, you will not have control over who
is using your system. Controlling who has the authorization to
grant access will also enable you to know who was or was not
granting access if problems develop later.

There are many schemes that can be developed to control the
distribution of access to your services. The following are the
factors that you must consider when determining who will
distribute access to your services:

o Will you be distributing access from a centralized
point or at various points?

You can have a centralized distribution point to a distributed
system where various sites or departments independently authorize
access. The trade off is between security and convenience. The
more centralized, the easier to secure.

o What methods will you use for creating accounts and
terminating access?

From a security standpoint, you need to examine the mechanism that
you will be using to create accounts. In the least restrictive
case, the people who are authorized to grant access would be able
to go into the system directly and create an account by hand or
through vendor supplied mechanisms. Generally, these mechanisms
place a great deal of trust in the person running them, and the
person running them usually has a large amount of privileges. If
this is the choice you make, you need to select someone who is
trustworthy to perform this task. The opposite solution is to
have an integrated system that the people authorized to create
accounts run, or the users themselves may actually run. Be aware
that even the restrictive case of having a mechanized facility
to create accounts does not remove the potential for abuse.

You should have specific procedures developed for the creation of
accounts. These procedures should be well documented to prevent
confusion and reduce mistakes. A security vulnerability in the
account authorization process is not only possible through abuse,
but is also possible if a mistake is made. Having clear and well
documented procedures will help ensure that these mistakes won’t
happen. You should also be sure that the people who will be
following these procedures understand them.

The moment at which access is granted to a user is one of the most
vulnerable times. You should ensure that the selection of an initial
password cannot be easily guessed. You should avoid using an
initial password that is a function of the username, is part of
the user’s name, or some algorithmically generated password that
can easily be guessed. In addition, you should not permit users
to continue to use the initial password indefinitely. If
possible, you should force users to change the initial password
the first time they log in. Consider that some users may never
even log in, leaving their password vulnerable indefinitely. Some
sites choose to disable accounts that have never been accessed,
and force the owner to reauthorize opening the account.
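
One way to avoid guessable initial passwords is to generate them at
random rather than deriving them from the username. The following is
a minimal Python sketch using the standard “secrets” module; the
length and character set are arbitrary choices made for illustration,
not requirements of this document.

    # make_initial_password.py -- generate a random initial password
    # that is not a function of the username or any other user attribute.
    import secrets
    import string

    def initial_password(length=12):
        # Letters and digits only, to avoid characters that some older
        # terminals or applications handle badly; adjust to local policy.
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        print(initial_password())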

2.3.4 Who May Have System Administration Privileges?

One security decision that needs to be made very carefully is who
will have access to system administrator privileges and passwords
for your services. Obviously, the system administrators will need
access, but inevitably other users will request special
privileges. The policy should address this issue. Restricting
privileges is one way to deal with threats from local users. The
challenge is to balance restricting these privileges in order to
protect security against giving the people who need them enough
access to perform their tasks. One approach that can be taken
is to grant only enough privilege to accomplish the necessary
tasks.

Additionally, people holding special privileges should be
accountable to some authority and this should also be identified
within the site’s security policy. If the people you grant
privileges to are not accountable, you run the risk of losing
control of your system and will have difficulty managing a
compromise in security.

2.3.5 What Are The Users’ Rights and Responsibilities?

The policy should incorporate a statement on the users’ rights and
responsibilities concerning the use of the site’s computer systems
and services. It should be clearly stated that users are
responsible for understanding and respecting the security rules of
the systems they are using. The following is a list of topics
that you may wish to cover in this area of the policy:

o What guidelines you have regarding resource consumption
(whether users are restricted, and if so, what the
restrictions are).
o What might constitute abuse in terms of system performance.
o Whether users are permitted to share accounts or let others
use their accounts.
o How “secret” users should keep their passwords.
o How often users should change their passwords and any other
password restrictions or requirements.
o Whether you provide backups or expect the users to create
their own.
o Disclosure of information that may be proprietary.
o Statement on Electronic Mail Privacy (Electronic
Communications Privacy Act).
o Your policy concerning controversial mail or postings to
mailing lists or discussion groups (obscenity, harassment,
etc.).
o Policy on electronic communications: mail forging, etc.

The Electronic Mail Association sponsored a white paper on the
privacy of electronic mail in companies [4]. Their basic
recommendation is that every site should have a policy on the
protection of employee privacy. They also recommend that
organizations establish privacy policies that deal with all media,
rather than singling out electronic mail.

They suggest five criteria for evaluating any policy:

1. Does the policy comply with law and with duties to
third parties?

2. Does the policy unnecessarily compromise the interest of
the employee, the employer or third parties?

3. Is the policy workable as a practical matter and likely to
be enforced?

4. Does the policy deal appropriately with all different
forms of communications and record keeping within the office?

5. Has the policy been announced in advance and agreed to by
all concerned?

2.3.6 What Are The Rights and Responsibilities of System
Administrators Versus Rights of Users

There is a tradeoff between a user’s right to absolute privacy and
the need of system administrators to gather sufficient information
to diagnose problems. There is also a distinction between a
system administrator’s need to gather information to diagnose
problems and investigating security violations. The policy should
specify to what degree system administrators can examine user
files to diagnose problems or for other purposes, and what rights
you grant to the users. You may also wish to make a statement
concerning system administrators’ obligation to maintain the
privacy of information viewed under these circumstances. A few
questions that should be answered are:

o Can an administrator monitor or read a user’s files
for any reason?
o What are the liabilities?
o Do network administrators have the right to examine
network or host traffic?

2.3.7 What To Do With Sensitive Information

Before granting users access to your services, you need to
determine at what level you will provide for the security of data
on your systems. By determining this, you are determining the
level of sensitivity of data that users should store on your
systems. You do not want users to store very sensitive
information on a system that you are not going to secure very
well. You need to tell users who might store sensitive
information what services, if any, are appropriate for the storage
of sensitive information. This part should include storing of
data in different ways (disk, magnetic tape, file servers, etc.).
Your policy in this area needs to be coordinated with the policy
concerning the rights of system administrators versus users (see
section 2.3.6).

2.4 What Happens When the Policy is Violated

It is obvious that when any type of official policy is defined, be it
related to computer security or not, it will eventually be broken.
The violation may occur due to an individual’s negligence, accidental
mistake, having not been properly informed of the current policy, or
not understanding the current policy. It is equally possible that an
individual (or group of individuals) may knowingly perform an act
that is in direct violation of the defined policy.

When a policy violation has been detected, the immediate course of
action should be pre-defined to ensure prompt and proper enforcement.
An investigation should be performed to determine how and why the
violation occurred. Then the appropriate corrective action should be
executed. The type and severity of action taken varies depending on
the type of violation that occurred.

2.4.1 Determining the Response to Policy Violations

Violations to policy may be committed by a wide variety of users.
Some may be local users and others may be from outside the local
environment. Sites may find it helpful to define what they
consider “insiders” and “outsiders” based upon administrative,
legal or political boundaries. These boundaries imply what type of
action must be taken to correct the offending party, ranging from a
written reprimand to pressing legal charges. So, not only do you
need to define actions based on the type of violation, you also
need to have a clearly defined series of actions based on the kind
of user violating your computer security policy. This all seems
rather complicated, but should be addressed long before it becomes
necessary as the result of a violation.

One point to remember about your policy is that proper education
is your best defense. For the outsiders who are using your
computer legally, it is your responsibility to verify that these
individuals are aware of the policies that you have set forth.
Having this proof may assist you in the future if legal action
becomes necessary.

As for users who are using your computer illegally, the problem is
basically the same. What type of user violated the policy and how
and why did they do it? Depending on the results of your
investigation, you may just prefer to “plug” the hole in your
computer security and chalk it up to experience. Or if a
significant amount of loss was incurred, you may wish to take more
drastic action.

2.4.2 What to do When Local Users Violate the Policy of a Remote
Site

In the event that a local user violates the security policy of a
remote site, the local site should have a clearly defined set of
administrative actions to take concerning that local user. The
site should also be prepared to protect itself against possible
actions by the remote site. These situations involve legal issues
which should be addressed when forming the security policy.

2.4.3 Defining Contacts and Responsibilities to Outside
Organizations

The local security policy should include procedures for
interaction with outside organizations. These include law
enforcement agencies, other sites, external response team
organizations (e.g., the CERT, CIAC) and various press agencies.
The procedure should state who is authorized to make such contact
and how it should be handled. Some questions to be answered
include:

o Who may talk to the press?
o When do you contact law enforcement and investigative agencies?
o If a connection is made from a remote site, is the
system manager authorized to contact that site?
o Can data be released? What kind?

Detailed contact information should be readily available along
with clearly defined procedures to follow.

2.4.4 What are the Responsibilities to our Neighbors and Other
Internet Sites?

The Security Policy Working Group within the IETF is working on a
document entitled, “Policy Guidelines for the Secure Operation of
the Internet” [23]. It addresses the issue that the Internet is a
cooperative venture and that sites are expected to provide mutual
security assistance. This should be addressed when developing a
site’s policy. The major issue to be determined is how much
information should be released. This will vary from site to site
according to the type of site (e.g., military, education,
commercial) as well as the type of security violation that
occurred.

2.4.5 Issues for Incident Handling Procedures

Along with statements of policy, the document being prepared
should include procedures for incident handling. This is covered
in detail in the next chapter. There should be procedures
available that cover all facets of policy violation.

2.5 Locking In or Out

Whenever a site suffers an incident which may compromise computer
security, the strategies for reacting may be influenced by two
opposing pressures.

If management fears that the site is sufficiently vulnerable, it may
choose a “Protect and Proceed” strategy. This approach will have as
its primary goal the protection and preservation of the site
facilities and a quick return to normal operation for its users.
Attempts will be made to actively interfere with the
intruder’s processes, prevent further access and begin immediate
damage assessment and recovery. This process may involve shutting
down the facilities, closing off access to the network, or other
drastic measures. The drawback is that unless the intruder is
identified directly, they may come back into the site via a different
path, or may attack another site.

The alternate approach, “Pursue and Prosecute”, adopts the opposite
philosophy and goals. The primary goal is to allow intruders to
continue their activities at the site until the site can identify the
responsible persons. This approach is endorsed by law enforcement
agencies and prosecutors. The drawback is that the agencies cannot
exempt a site from possible user lawsuits if damage is done to their
systems and data.

Prosecution is not the only outcome possible if the intruder is
identified. If the culprit is an employee or a student, the
organization may choose to take disciplinary actions. The computer
security policy needs to spell out the choices and how they will be
selected if an intruder is caught.

Careful consideration must be made by site management regarding their
approach to this issue before the problem occurs. The strategy
adopted might depend upon each circumstance. Or there may be a
global policy which mandates one approach in all circumstances. The
pros and cons must be examined thoroughly and the users of the
facilities must be made aware of the policy so that they understand
their vulnerabilities no matter which approach is taken.

The following are checklists to help a site determine which strategy
to adopt: “Protect and Proceed” or “Pursue and Prosecute”.

Protect and Proceed

1. If assets are not well protected.

2. If continued penetration could result in great
financial risk.

3. If the possibility or willingness to prosecute
is not present.

4. If user base is unknown.

5. If users are unsophisticated and their work is
vulnerable.

6. If the site is vulnerable to lawsuits from users, e.g.,
if their resources are undermined.

Pursue and Prosecute

1. If assets and systems are well protected.

2. If good backups are available.

3. If the risk to the assets is outweighed by the
disruption caused by the present and possibly future
penetrations.

4. If this is a concentrated attack occurring with great
frequency and intensity.

5. If the site is a natural attraction to intruders, and
consequently is attacked on a regular basis.

6. If the site is willing to incur the financial (or other)
risk to assets by allowing the penetrator to continue.

7. If intruder access can be controlled.

8. If the monitoring tools are sufficiently well-developed
to make the pursuit worthwhile.

9. If the support staff is sufficiently clever and knowledgeable
about the operating system, related utilities, and systems
to make the pursuit worthwhile.

10. If there is willingness on the part of management to
prosecute.

11. If the system administrators know in general what kind of
evidence would lead to prosecution.

12. If there is established contact with knowledgeable law
enforcement.

13. If there is a site representative versed in the relevant
legal issues.

14. If the site is prepared for possible legal action from
its own users if their data or systems become compromised
during the pursuit.

2.6 Interpreting the Policy

It is important to define who will interpret the policy. This could
be an individual or a committee. No matter how well written, the
policy will require interpretation from time to time and this body
would serve to review, interpret, and revise the policy as needed.

2.7 Publicizing the Policy

Once the site security policy has been written and established, a
vigorous process should be engaged to ensure that the policy
statement is widely and thoroughly disseminated and discussed. A
mailing of the policy should not be considered sufficient. A period
for comments should be allowed before the policy becomes effective to
ensure that all affected users have a chance to state their reactions
and discuss any unforeseen ramifications. Ideally, the policy should
strike a balance between protection and productivity.

Meetings should be held to elicit these comments, and also to ensure
that the policy is correctly understood. (Policy promulgators are
not necessarily noted for their skill with the language.) These
meetings should involve higher management as well as line employees.
Security is a collective effort.

In addition to the initial efforts to publicize the policy, it is
essential for the site to maintain a continual awareness of its
computer security policy. Current users may need periodic reminders.
New users should have the policy included as part of their site
introduction packet. As a condition for using the site facilities,
it may be advisable to have them sign a statement that they have read
and understood the policy. Should any of these users require legal
action for serious policy violations, this signed statement might
prove to be a valuable aid.

3. Establishing Procedures to Prevent Security Problems

The security policy defines what needs to be protected. This section
discusses security procedures which specify what steps will be used
to carry out the security policy.

3.1 Security Policy Defines What Needs to be Protected

The security policy defines the WHAT’s: what needs to be protected,
what is most important, what the priorities are, and what the general
approach to dealing with security problems should be.

The security policy by itself doesn’t say HOW things are protected.
That is the role of security procedures, which this section
discusses. The security policy should be a high level document,
giving general strategy. The security procedures need to set out, in
detail, the precise steps your site will take to protect itself.

The security policy should include a general risk assessment of the
types of threats a site is most likely to face and the consequences
of those threats (see section 2.2). Part of doing a risk assessment
will include creating a general list of assets that should be
protected (section 2.2.2). This information is critical in devising
cost-effective procedures.

It is often tempting to start creating security procedures by
deciding on different mechanisms first: “our site should have logging
on all hosts, call-back modems, and smart cards for all users.” This
approach could lead to some areas that have too much protection for
the risk they face, and other areas that aren’t protected enough.
Starting with the security policy and the risks it outlines should
ensure that the procedures provide the right level of protection for all
assets.

3.2 Identifying Possible Problems

To determine risk, vulnerabilities must be identified. Part of the
purpose of the policy is to aid in shoring up the vulnerabilities and
thus to decrease the risk in as many areas as possible. Several of
the more popular problem areas are presented in sections below. This
list is by no means complete. In addition, each site is likely to
have a few unique vulnerabilities.

3.2.1 Access Points

Access points are typically used for entry by unauthorized users.
Having many access points increases the risk of access to an
organization’s computer and network facilities.

Network links to networks outside the organization allow access
into the organization for all others connected to that external
network. A network link typically provides access to a large
number of network services, and each service has a potential to be
compromised.

Dialup lines, depending on their configuration, may provide access
merely to a login port of a single system. If connected to a
terminal server, the dialup line may give access to the entire
network.

Terminal servers themselves can be a source of problems. Many
terminal servers do not require any kind of authentication.
Intruders often use terminal servers to disguise their actions,
dialing in on a local phone and then using the terminal server to
go out to the local network. Some terminal servers are configured
so that intruders can TELNET [19] in from outside the network, and
then TELNET back out again, which likewise makes it difficult to
trace them.

3.2.2 Misconfigured Systems

Misconfigured systems form a large percentage of security holes.
Today’s operating systems and their associated software have
become so complex that understanding how the system works has
become a full-time job. Often, systems managers will be non-
specialists chosen from the current organization’s staff.

Vendors are also partly responsible for misconfigured systems. To
make the system installation process easier, vendors occasionally
choose initial configurations that are not secure in all
environments.

3.2.3 Software Bugs

Software will never be bug free. Publicly known security bugs are
common methods of unauthorized entry. Part of the solution to
this problem is to be aware of the security problems and to update
the software when problems are detected. When bugs are found,
they should be reported to the vendor so that a solution to the
problem can be implemented and distributed.

3.2.4 “Insider” Threats

An insider to the organization may be a considerable threat to the
security of the computer systems. Insiders often have direct
access to the computer and network hardware components. The
ability to access the components of a system makes most systems
easier to compromise. Most desktop workstations can be easily
manipulated so that they grant privileged access. Access to a
local area network provides the ability to view possibly sensitive
data traversing the network.

3.3 Choose Controls to Protect Assets in a Cost-Effective Way

After establishing what is to be protected, and assessing the risks
these assets face, it is necessary to decide how to implement the
controls which protect these assets. The controls and protection
mechanisms should be selected in a way so as to adequately counter
the threats found during risk assessment, and to implement those
controls in a cost effective manner. It makes little sense to spend
an exorbitant sum of money and overly constrict the user base if the
risk of exposure is very small.

3.3.1 Choose the Right Set of Controls

The controls that are selected represent the physical embodiment
of your security policy. They are the first and primary line of
defense in the protection of your assets. It is therefore most
important to ensure that the controls that you select are the
right set of controls. If the major threat to your system is
outside penetrators, it probably doesn’t make much sense to use
biometric devices to authenticate your regular system users. On
the other hand, if the major threat is unauthorized use of
computing resources by regular system users, you’ll probably want
to establish very rigorous automated accounting procedures.

3.3.2 Use Common Sense

Common sense is the most appropriate tool that can be used to
establish your security policy. Elaborate security schemes and
mechanisms are impressive, and they do have their place, yet there
is little point in investing money and time on an elaborate
implementation scheme if the simple controls are forgotten. For
example, no matter how elaborate a system you put into place on
top of existing security controls, a single user with a poor
password can still leave your system open to attack.

3.4 Use Multiple Strategies to Protect Assets

Another method of protecting assets is to use multiple strategies.
In this way, if one strategy fails or is circumvented, another
strategy comes into play to continue protecting the asset. By using
several simpler strategies, a system can often be made more secure
than if one very sophisticated method were used in its place. For
example, dial-back modems can be used in conjunction with traditional
logon mechanisms. Many similar approaches could be devised that
provide several levels of protection for assets. However, it’s very
easy to go overboard with extra mechanisms. One must keep in mind
exactly what it is that needs to be protected.

3.5 Physical Security

It is a given in computer security that if the system itself is not
physically secure, nothing else about the system can be considered
secure. With physical access to a machine, an intruder can halt the
machine, bring it back up in privileged mode, replace or alter the
disk, plant Trojan horse programs (see section 2.13.9.2), or take any
number of other undesirable (and hard to prevent) actions.

Critical communications links, important servers, and other key
machines should be located in physically secure areas. Some security
systems (such as Kerberos) require that the machine be physically
secure.

If you cannot physically secure machines, care should be taken about
trusting those machines. Sites should consider limiting access from
non-secure machines to more secure machines. In particular, allowing
trusted access (e.g., the BSD Unix remote commands such as rsh) from
these kinds of hosts is particularly risky.

For machines that seem or are intended to be physically secure, care
should be taken about who has access to the machines. Remember that
custodial and maintenance staff often have keys to rooms.

3.6 Procedures to Recognize Unauthorized Activity

Several simple procedures can be used to detect most unauthorized
uses of a computer system. These procedures use tools provided with
the operating system by the vendor, or tools publicly available from
other sources.

3.6.1 Monitoring System Use

System monitoring can be done either by a system administrator, or
by software written for the purpose. Monitoring a system involves
looking at several parts of the system and searching for anything
unusual. Some of the easier ways to do this are described in this
section.

The most important thing about monitoring system use is that it be
done on a regular basis. Picking one day out of the month to
monitor the system is pointless, since a security breach can be
isolated to a matter of hours. Only by maintaining a constant
vigil can you expect to detect security violations in time to
react to them.

3.6.2 Tools for Monitoring the System

This section describes tools and methods for monitoring a system
against unauthorized access and use.

3.6.2.1 Logging

Most operating systems store numerous bits of information in
log files. Examination of these log files on a regular basis
is often the first line of defense in detecting unauthorized
use of the system.

– Compare lists of currently logged in users and past
login histories. Most users typically log in and out
at roughly the same time each day. An account logged
in outside the “normal” time for the account may be in
use by an intruder.

– Many systems maintain accounting records for billing
purposes. These records can also be used to determine
usage patterns for the system; unusual accounting records
may indicate unauthorized use of the system.

– System logging facilities, such as the UNIX “syslog”
utility, should be checked for unusual error messages
from system software. For example, a large number of
failed login attempts in a short period of time may
indicate someone trying to guess passwords (a sketch of such
a check follows this list).

– Operating system commands which list currently executing
processes can be used to detect users running programs
they are not authorized to use, as well as to detect
unauthorized programs which have been started by an
intruder.
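
As a sketch of the syslog check mentioned above, the following Python
fragment counts failed login messages per user/host pair. The log
file path and the “Failed password” message text are assumptions;
both vary from vendor to vendor, so adjust them to match your own
system’s logging.

    # failed_logins.py -- rough count of failed login attempts recorded
    # in a syslog-style file.  Path and message text are assumptions.
    import collections
    import re
    import sys

    log_file = sys.argv[1] if len(sys.argv) > 1 else "/var/log/auth.log"
    pattern = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

    counts = collections.Counter()
    with open(log_file, errors="replace") as f:
        for line in f:
            match = pattern.search(line)
            if match:
                counts[match.groups()] += 1

    # A large count for one user/host pair in a short period may
    # indicate someone trying to guess passwords.
    for (user, host), n in counts.most_common(10):
        print("%6d  %s from %s" % (n, user, host))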

3.6.2.2 Monitoring Software

Other monitoring tools can easily be constructed using standard
operating system software, by using several, often unrelated,
programs together. For example, checklists of file ownerships
and permission settings can be constructed (for example, with
“ls” and “find” on UNIX) and stored off-line. These lists can
then be reconstructed periodically and compared against the
master checklist (on UNIX, by using the “diff” utility).
Differences may indicate that unauthorized modifications have
been made to the system.
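
A rough Python equivalent of the procedure just described is sketched
below. The master-list file name and the directory are assumptions
chosen for the example; as noted above, the master copy should still
be stored off-line.

    # perm_checklist.py -- record owner, group, mode, and size for each
    # file under a directory tree and compare against a saved master list.
    import json
    import os
    import stat
    import sys

    def snapshot(root):
        entries = {}
        for dirpath, _subdirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue
                entries[path] = [st.st_uid, st.st_gid,
                                 stat.filemode(st.st_mode), st.st_size]
        return entries

    if __name__ == "__main__":
        root, master = sys.argv[1], sys.argv[2]   # e.g. /etc master.json
        current = snapshot(root)
        if os.path.exists(master):
            with open(master) as f:
                old = json.load(f)
            for path in sorted(set(old) | set(current)):
                if old.get(path) != current.get(path):
                    print("CHANGED:", path, old.get(path), "->", current.get(path))
        with open(master, "w") as f:
            json.dump(current, f)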

Still other tools are available from third-party vendors and
public software distribution sites. Section 3.9.9 lists
several sources from which you can learn what tools are
available and how to get them.

3.6.2.3 Other Tools

Other tools can also be used to monitor systems for security
violations, although this is not their primary purpose. For
example, network monitors can be used to detect and log
connections from unknown sites.

3.6.3 Vary the Monitoring Schedule

The task of system monitoring is not as daunting as it may seem.
System administrators can execute many of the commands used for
monitoring periodically throughout the day during idle moments
(e.g., while talking on the telephone), rather than spending fixed
periods of each day monitoring the system. By executing the
commands frequently, you will rapidly become used to seeing
“normal” output, and will easily spot things which are out of the
ordinary. In addition, by running various monitoring commands at
different times throughout the day, you make it hard for an
intruder to predict your actions. For example, if an intruder
knows that each day at 5:00 p.m. the system is checked to see that
everyone has logged off, he will simply wait until after the check
has completed before logging in. But the intruder cannot guess
when a system administrator might type a command to display all
logged-in users, and thus he runs a much greater risk of
detection.

Despite the advantages that regular system monitoring provides,
some intruders will be aware of the standard logging mechanisms in
use on systems they are attacking. They will actively pursue and
attempt to disable monitoring mechanisms. Regular monitoring
therefore is useful in detecting intruders, but does not provide
any guarantee that your system is secure, nor should monitoring be
considered an infallible method of detecting unauthorized use.

3.7 Define Actions to Take When Unauthorized Activity is Suspected

Sections 2.4 and 2.5 discussed the course of action a site should
take when it suspects its systems are being abused. The computer
security policy should state the general approach towards dealing
with these problems.

The procedures for dealing with these types of problems should be
written down. Who has authority to decide what actions will be
taken? Should law enforcement be involved? Should your
organization cooperate with other sites in trying to track down an
intruder? Answers to all the questions in section 2.4 should be
part of the incident handling procedures.

Whether you decide to lock out or pursue intruders, you should
have tools and procedures ready to apply. It is best to work up
these tools and procedures before you need them. Don’t wait until
an intruder is on your system to figure out how to track the
intruder’s actions; you will be busy enough if an intruder
strikes.

3.8 Communicating Security Policy

Security policies, in order to be effective, must be communicated to
both the users of the system and the system maintainers. This
section describes what these people should be told, and how to tell
them.

3.8.1 Educating the Users

Users should be made aware of how the computer systems are
expected to be used, and how to protect themselves from
unauthorized users.

3.8.1.1 Proper Account/Workstation Use

All users should be informed about what is considered the
“proper” use of their account or workstation (“proper” use is
discussed in section 2.3.2). This can most easily be done at
the time a user receives their account, by giving them a policy
statement. Proper use policies typically dictate things such
as whether or not the account or workstation may be used for
personal activities (such as checkbook balancing or letter
writing), whether profit-making activities are allowed, whether
game playing is permitted, and so on. These policy statements
may also be used to summarize how the computer facility is
licensed and what software licenses are held by the
institution; for example, many universities have educational
licenses which explicitly prohibit commercial uses of the
system. A more complete list of items to consider when writing
a policy statement is given in section 2.3.

3.8.1.2 Account/Workstation Management Procedures

Each user should be told how to properly manage their account
and workstation. This includes explaining how to protect files
stored on the system, how to log out or lock the terminal or
workstation, and so on. Much of this information is typically
covered in the “beginning user” documentation provided by the
operating system vendor, although many sites elect to
supplement this material with local information.

If your site offers dial-up modem access to the computer
systems, special care must be taken to inform users of the
security problems inherent in providing this access. Issues
such as making sure to log out before hanging up the modem
should be covered when the user is initially given dial-up
access.

Likewise, access to the systems via local and wide-area
networks presents its own set of security problems which users
should be made aware of. Files which grant “trusted host” or
“trusted user” status to remote systems and users should be
carefully explained.

3.8.1.3 Determining Account Misuse

Users should be told how to detect unauthorized access to their
account. If the system prints the last login time when a user
logs in, he or she should be told to check that time and note
whether or not it agrees with the last time he or she actually
logged in.

Command interpreters on some systems (e.g., the UNIX C shell)
maintain histories of the last several commands executed.
Users should check these histories to be sure someone has not
executed other commands with their account.
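
A small Python sketch of such a check is shown below; the history
file name is an assumption (it differs between command interpreters),
and the number of lines shown is arbitrary.

    # check_history.py -- show when the shell history file last changed
    # and its most recent entries, so a user can look for commands they
    # did not run.
    import os
    import time

    history = os.path.expanduser("~/.history")   # csh; other shells differ

    try:
        print("history last modified:", time.ctime(os.path.getmtime(history)))
        with open(history, errors="replace") as f:
            for line in f.readlines()[-20:]:
                print("   ", line.rstrip())
    except OSError as err:
        print("cannot read history file:", err)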

3.8.1.4 Problem Reporting Procedures

A procedure should be developed to enable users to report
suspected misuse of their accounts or other misuse they may
have noticed. This can be done either by providing the name
and telephone number of a system administrator who manages
security of the computer system, or by creating an electronic
mail address (e.g., “security”) to which users can address
their problems.

3.8.2 Educating the Host Administrators

In many organizations, computer systems are administered by a wide
variety of people. These administrators must know how to protect
their own systems from attack and unauthorized use, as well as how
to communicate successful penetration of their systems to other
administrators as a warning.

3.8.2.1 Account Management Procedures

Care must be taken when installing accounts on the system in
order to make them secure. When installing a system from
distribution media, the password file should be examined for
“standard” accounts provided by the vendor. Many vendors
provide accounts for use by system services or field service
personnel. These accounts typically have either no password or
one which is common knowledge. These accounts should be given
new passwords if they are needed, or disabled or deleted from
the system if they are not.

Accounts without passwords are generally very dangerous since
they allow anyone to access the system. Even accounts which do
not execute a command interpreter (e.g., accounts which exist
only to see who is logged in to the system) can be compromised
if set up incorrectly. A related concept, that of “anonymous”
file transfer (FTP) [20], allows users from all over the
network to access your system to retrieve files from (usually)
a protected disk area. You should carefully weigh the benefits
that an account without a password provides against the
security risks of providing such access to your system.
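
As an illustration of this kind of audit, the following Python sketch
flags entries in a traditional seven-field /etc/passwd file whose
password field is empty. On systems using a shadow password file
(discussed below) the field usually contains “x” or “*” instead, and
the shadow file itself would need the same examination.

    # find_open_accounts.py -- flag passwd entries with an empty
    # password field.  Assumes the traditional colon-separated format.
    import sys

    passwd_file = sys.argv[1] if len(sys.argv) > 1 else "/etc/passwd"

    with open(passwd_file) as f:
        for line in f:
            fields = line.rstrip("\n").split(":")
            if len(fields) >= 2 and fields[1] == "":
                print("NO PASSWORD:", fields[0])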

If the operating system provides a “shadow” password facility
which stores passwords in a separate file accessible only to
privileged users, this facility should be used. System V UNIX,
SunOS 4.0 and above, and versions of Berkeley UNIX after 4.3BSD
Tahoe, as well as others, provide this feature. It protects
passwords by hiding their encrypted values from unprivileged
users. This prevents an attacker from copying your password
file to his or her machine and then attempting to break the
passwords at his or her leisure.

Keep track of who has access to privileged user accounts (e.g.,
“root” on UNIX or “MAINT” on VMS). Whenever a privileged user
leaves the organization or no longer has need of the privileged
account, the passwords on all privileged accounts should be
changed.

3.8.2.2 Configuration Management Procedures

When installing a system from the distribution media or when
installing third-party software, it is important to check the
installation carefully. Many installation procedures assume a
“trusted” site, and hence will install files with world write
permission enabled, or otherwise compromise the security of
files.

Network services should also be examined carefully when first
installed. Many vendors provide default network permission
files which imply that all outside hosts are to be “trusted”,
which is rarely the case when connected to wide-area networks
such as the Internet.

Many intruders collect information on the vulnerabilities of
particular system versions. The older a system, the more
likely it is that there are security problems in that version
which have since been fixed by the vendor in a later release.
For this reason, it is important to weigh the risks of not
upgrading to a new operating system release (thus leaving
security holes unplugged) against the cost of upgrading to the
new software (possibly breaking third-party software, etc.).
Bug fixes from the vendor should be weighed in a similar
fashion, with the added note that “security” fixes from a
vendor usually address fairly serious security problems.

Other bug fixes, received via network mailing lists and the
like, should usually be installed, but not without careful
examination. Never install a bug fix unless you’re sure you
know what the consequences of the fix are – there’s always the
possibility that an intruder has suggested a “fix” which
actually gives him or her access to your system.

3.8.2.3 Recovery Procedures – Backups

It is impossible to overemphasize the need for a good backup
strategy. File system backups not only protect you in the
event of hardware failure or accidental deletions, but they
also protect you against unauthorized changes made by an
intruder. Without a copy of your data the way it’s “supposed”
to be, it can be difficult to undo something an attacker has
done.

Backups, especially if run daily, can also be useful in
providing a history of an intruder’s activities. Looking
through old backups can establish when your system was first
penetrated. Intruders may leave files around which, although
deleted later, are captured on the backup tapes. Backups can
also be used to document an intruder’s activities to law
enforcement agencies if necessary.

A good backup strategy will dump the entire system to tape at
least once a month. Partial (or “incremental”) dumps should be
done at least twice a week, and ideally they should be done
daily. Commands specifically designed for performing file
system backups (e.g., UNIX “dump” or VMS “BACKUP”) should be
used in preference to other file copying commands, since these
tools are designed with the express intent of restoring a
system to a known state.
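
The vendor tools named above are the right choice for real backups.
Purely to illustrate the incremental idea, the following Python
sketch archives only the files that changed since the previous
recorded run; the directory and file names are assumptions.

    # incremental_backup.py -- illustrative only; real backups should
    # use the vendor's dump/restore tools.  Archives files under ROOT
    # that changed since the last recorded run.
    import os
    import tarfile
    import time

    ROOT = "/home"                      # assumption: tree to back up
    STAMP = "/var/backups/last_run"     # assumption: timestamp file
    ARCHIVE = time.strftime("/var/backups/incr-%Y%m%d.tar.gz")

    last_run = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0.0

    with tarfile.open(ARCHIVE, "w:gz") as tar:
        for dirpath, _subdirs, files in os.walk(ROOT):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    if os.path.getmtime(path) > last_run:
                        tar.add(path)
                except OSError:
                    continue

    # Record this run so the next one picks up where we left off.
    with open(STAMP, "w") as f:
        f.write(time.ctime())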

3.8.2.4 Problem Reporting Procedures

As with users, system administrators should have a defined
procedure for reporting security problems. In large
installations, this is often done by creating an electronic
mail alias which contains the names of all system
administrators in the organization. Other methods include
setting up some sort of response team similar to the CERT, or
establishing a “hotline” serviced by an existing support group.

3.9 Resources to Prevent Security Breaches

This section discusses software, hardware, and procedural resources
that can be used to support your site security policy.

3.9.1 Network Connections and Firewalls

A “firewall” is put in place in a building to provide a point of
resistance to the entry of flames into another area. Similarly, a
secretary’s desk and reception area provide a point of control
over access to other office spaces. This same technique
can be applied to a computer site, particularly as it pertains to
network connections.

Some sites will be connected only to other sites within the same
organization and will not have the ability to connect to other
networks. Sites such as these are less susceptible to threats
from outside their own organization, although intrusions may still
occur via paths such as dial-up modems. On the other hand, many
other organizations will be connected to other sites via much
larger networks, such as the Internet. These sites are
susceptible to the entire range of threats associated with a
networked environment.

The risks of connecting to outside networks must be weighed
against the benefits. It may be desirable to limit connection to
outside networks to those hosts which do not store sensitive
material, keeping “vital” machines (such as those which maintain
company payroll or inventory systems) isolated. If there is a
need to participate in a Wide Area Network (WAN), consider
restricting all access to your local network through a single
system. That is, all access to or from your own local network
must be made through a single host computer that acts as a
firewall between you and the outside world. This firewall system
should be rigorously controlled and password protected, and
external users accessing it should also be constrained by
restricting the functionality available to remote users. By using
this approach, your site could relax some of the internal security
controls on your local net, but still be afforded the protection
of a rigorously controlled host front end.

Note that even with a firewall system, compromise of the firewall
could result in compromise of the network behind the firewall.
Work has been done in some areas to construct a firewall which,
even when compromised, still protects the local network [6,
CHESWICK].

3.9.2 Confidentiality

Confidentiality, the act of keeping things hidden or secret, is
one of the primary goals of computer security practitioners.
Several mechanisms are provided by most modern operating systems
to enable users to control the dissemination of information.
Depending upon where you work, you may have a site where
everything is protected, or a site where all information is
usually regarded as public, or something in-between. Most sites
lean toward the in-between, at least until some penetration has
occurred.

Generally, there are three instances in which information is
vulnerable to disclosure: when the information is stored on a
computer system, when the information is in transit to another
system (on the network), and when the information is stored on
backup tapes.

The first of these cases is controlled by file permissions, access
control lists, and other similar mechanisms. The last can be
controlled by restricting access to the backup tapes (by locking
them in a safe, for example). All three cases can be helped by
using encryption mechanisms.

3.9.2.1 Encryption (hardware and software)

Encryption is the process of taking information that exists in
some readable form and converting it into a non-readable form.
There are several types of commercially available encryption
packages in both hardware and software forms. Hardware
encryption engines have the advantage that they are much faster
than the software equivalent, yet because they are faster, they
are of greater potential benefit to an attacker who wants to
execute a brute-force attack on your encrypted information.

The advantage of using encryption is that, even if other access
control mechanisms (passwords, file permissions, etc.) are
compromised by an intruder, the data is still unusable.
Naturally, encryption keys and the like should be protected at
least as well as account passwords.

Information in transit (over a network) may be vulnerable to
interception as well. Several solutions to this exist, ranging
from simply encrypting files before transferring them (end-to-
end encryption) to special network hardware which encrypts
everything it sends without user intervention (secure links).
The Internet as a whole does not use secure links, thus end-
to-end encryption must be used if encryption is desired across
the Internet.

3.9.2.1.1 Data Encryption Standard (DES)

DES is perhaps the most widely used data encryption
mechanism today. Many hardware and software implementations
exist, and some commercial computers are provided with a
software version. DES transforms plain text information
into encrypted data (or ciphertext) by means of a special
algorithm and “seed” value called a key. So long as the key
is retained (or remembered) by the original user, the
ciphertext can be restored to the original plain text.

One of the pitfalls of all encryption systems is the need to
remember the key under which a thing was encrypted (this is
not unlike the password problem discussed elsewhere in this
document). If the key is written down, it becomes less
secure. If forgotten, there is little (if any) hope of
recovering the original data.

Most UNIX systems provide a DES command that enables a user
to encrypt data using the DES algorithm.
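
The Python standard library does not include DES; purely to
illustrate the key-based encrypt/decrypt workflow, the following
sketch uses the Fernet construction from the third-party
“cryptography” package (a modern symmetric cipher, not DES). The
same warning applies: the key must be guarded at least as carefully
as a password.

    # encrypt_data.py -- symmetric encryption sketch (Fernet, not DES);
    # requires the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # guard this as carefully as a password
    cipher = Fernet(key)

    plaintext = b"quarterly payroll figures"
    ciphertext = cipher.encrypt(plaintext)

    # Only a holder of the same key can recover the original text.
    assert Fernet(key).decrypt(ciphertext) == plaintext
    print("ciphertext:", ciphertext.decode())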

3.9.2.1.2 Crypt

Similar to the DES command, the UNIX “crypt” command allows
a user to encrypt data. Unfortunately, the algorithm used
by “crypt” is very insecure (based on the World War II
“Enigma” device), and files encrypted with this command can
be decrypted easily in a matter of a few hours. Generally,
use of the “crypt” command should be avoided for any but the
most trivial encryption tasks.

3.9.2.2 Privacy Enhanced Mail

Electronic mail normally transits the network in the clear
(i.e., anyone can read it). This is obviously not the optimal
solution. Privacy enhanced mail provides a means to
automatically encrypt electronic mail messages so that a person
eavesdropping at a mail distribution node is not (easily)
capable of reading them. Several privacy enhanced mail
packages are currently being developed and deployed on the
Internet.

The Internet Activities Board Privacy Task Force has defined a
draft standard, elective protocol for use in implementing
privacy enhanced mail. This protocol is defined in RFCs 1113,
1114, and 1115 [7,8,9]. Please refer to the current edition of
the “IAB Official Protocol Standards” (currently, RFC 1200
[21]) for the standardization state and status of these
protocols.

3.9.3 Origin Authentication

We mostly take it on faith that the header of an electronic mail
message truly indicates the originator of a message. However, it
is easy to “spoof”, or forge, the source of a mail message. Origin
authentication provides a means to be certain of the originator of
a message or other object in the same way that a Notary Public
assures a signature on a legal document. This is done by means of
a “Public Key” cryptosystem.

A public key cryptosystem differs from a private key cryptosystem
in several ways. First, a public key system uses two keys, a
Public Key that anyone can use (hence the name) and a Private Key
that only the originator of a message uses. The originator uses
the private key to encrypt the message (as in DES). The receiver,
who has obtained the public key for the originator, may then
decrypt the message.

In this scheme, the public key is used to authenticate the
originator’s use of his or her private key, and hence the identity
of the originator is more rigorously proven. The most widely
known implementation of a public key cryptosystem is the RSA
system [26]. The Internet standard for privacy enhanced mail
makes use of the RSA system.
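
A rough sketch of the sign-and-verify idea follows, using the
third-party Python "cryptography" package as a modern stand-in; it
shows the same principle the text describes (only the private key
holder can produce a signature that the public key will verify), but it
is not the Privacy Enhanced Mail protocol itself.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
    public_key = private_key.public_key()

    message = b"This note really came from me."
    # Only the holder of the private key can produce this signature.
    signature = private_key.sign(message, padding.PKCS1v15(),
                                 hashes.SHA256())

    # Anyone holding the public key can check the origin; verify()
    # raises an exception if the message or signature was altered.
    public_key.verify(signature, message, padding.PKCS1v15(),
                      hashes.SHA256())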

3.9.4 Information Integrity

Information integrity refers to the state of information such that
it is complete, correct, and unchanged from the last time in which
it was verified to be in an “integral” state. The value of
information integrity to a site will vary. For example, it is
more important for military and government installations to
prevent the “disclosure” of classified information, whether it is
right or wrong. A bank, on the other hand, is far more concerned
with whether the account information maintained for its customers
is complete and accurate.

Numerous computer system mechanisms, as well as procedural
controls, have an influence on the integrity of system
information. Traditional access control mechanisms maintain
controls over who can access system information. These mechanisms
alone are not sufficient in some cases to provide the degree of
integrity required. Some other mechanisms are briefly discussed
below.

It should be noted that there are other aspects to maintaining
system integrity besides these mechanisms, such as two-person
controls, and integrity validation procedures. These are beyond
the scope of this document.

3.9.4.1 Checksums

Easily the simplest mechanism, a simple checksum routine can
compute a value for a system file and compare it with the last
known value. If the two are equal, the file is probably
unchanged. If not, the file has been changed by some unknown
means.

Though it is the easiest to implement, the checksum scheme
suffers from a serious failing in that it is not very
sophisticated and a determined attacker could easily add enough
characters to the file to eventually obtain the correct value.

A specific type of checksum, called a CRC checksum, is
considerably more robust than a simple checksum. It is only
slightly more difficult to implement and provides a better
degree of error detection. It too, however, suffers from the
possibility of compromise by an attacker.

Checksums may be used to detect the altering of information.
However, they do not actively guard against changes being made.
For this, other mechanisms such as access controls and
encryption should be used.
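
A minimal Python sketch of the two schemes discussed above is shown
here; the file name is arbitrary and neither routine is proposed as a
real integrity tool.

    import zlib

    def simple_checksum(data):
        # Naive additive checksum; an attacker can pad the file to
        # forge any desired value.
        return sum(data) % 65536

    def crc_checksum(data):
        # CRC catches more accidental changes, but can still be
        # defeated by a determined attacker.
        return zlib.crc32(data)

    data = open("/etc/hosts", "rb").read()
    print(simple_checksum(data), crc_checksum(data))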

3.9.4.2 Cryptographic Checksums

Cryptographic checksums (also called cryptosealing) involve
breaking a file up into smaller chunks, calculating a (CRC)
checksum for each chunk, and adding the CRCs together.
Depending upon the exact algorithm used, this can result in a
nearly unbreakable method of determining whether a file has
been changed. This mechanism suffers from the fact that it is
sometimes computationally intensive and may be prohibitive
except in cases where the utmost integrity protection is
desired.

Another related mechanism, called a one-way hash function (or a
Manipulation Detection Code (MDC)) can also be used to uniquely
identify a file. The idea behind these functions is that it is
computationally infeasible to find two different inputs that
produce the same output; thus, a modified file will not have the
same hash value as the original. One-way hash functions can be
implemented efficiently on a wide variety of systems, making very
strong integrity checks possible. (Snefru, a one-way hash
function available via USENET as well as the Internet is just
one example of an efficient one-way hash function.) [10]
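
For example, a one-way hash of a file can be computed and recorded
with a few lines of Python; SHA-256 is used below purely as a
convenient modern stand-in for Snefru or another MDC, and the file
name is only an example.

    import hashlib

    def file_hash(path):
        h = hashlib.sha256()
        with open(path, "rb") as fp:
            for block in iter(lambda: fp.read(8192), b""):
                h.update(block)
        return h.hexdigest()

    # Record this value off-line; any later change to the file
    # changes the hash.
    print(file_hash("/bin/login"))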

3.9.5 Limiting Network Access

The dominant network protocols in use on the Internet, IP (RFC
791) [11], TCP (RFC 793) [12], and UDP (RFC 768) [13], carry
certain control information which can be used to restrict access
to certain hosts or networks within an organization.

The IP packet header contains the network addresses of both the
sender and recipient of the packet. Further, the TCP and UDP
protocols provide the notion of a “port”, which identifies the
endpoint (usually a network server) of a communications path. In
some instances, it may be desirable to deny access to a specific
TCP or UDP port, or even to certain hosts and networks altogether.

3.9.5.1 Gateway Routing Tables

One of the simplest approaches to preventing unwanted network
connections is to simply remove certain networks from a
gateway’s routing tables. This makes it “impossible” for a
host to send packets to these networks. (Most protocols
require bidirectional packet flow even for unidirectional data
flow, thus breaking one side of the route is usually
sufficient.)

This approach is commonly taken in “firewall” systems by
preventing the firewall from advertising local routes to the
outside world. The approach is deficient in that it often
prevents “too much” (e.g., in order to prevent access to one
system on the network, access to all systems on the network is
disabled).

3.9.5.2 Router Packet Filtering

Many commercially available gateway systems (more correctly
called routers) provide the ability to filter packets based not
only on sources or destinations, but also on source-destination
combinations. This mechanism can be used to deny access to a
specific host, network, or subnet from any other host, network,
or subnet.

Gateway systems from some vendors (e.g., cisco Systems) support
an even more complex scheme, allowing finer control over source
and destination addresses. Via the use of address masks, one
can deny access to all but one host on a particular network.
cisco Systems routers also allow packet screening based on IP
protocol type and TCP or UDP port numbers [14].

This can also be circumvented by “source routing” packets
destined for the “secret” network. Source routed packets may
be filtered out by gateways, but this may restrict other
legitimate activities, such as diagnosing routing problems.
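
The logic of such a packet filter can be sketched in a few lines of
Python; the addresses and the rule set below are hypothetical, and
real routers express the same idea in their own configuration
languages.

    import ipaddress

    # Each rule: (source prefix, destination prefix, protocol,
    #             destination port, action).
    RULES = [
        ("0.0.0.0/0", "192.0.2.10/32", "tcp", 23,   "deny"),
        ("0.0.0.0/0", "0.0.0.0/0",     "any", None, "permit"),
    ]

    def decide(src, dst, proto, port):
        for rsrc, rdst, rproto, rport, action in RULES:
            if (ipaddress.ip_address(src) in ipaddress.ip_network(rsrc)
                    and ipaddress.ip_address(dst) in
                        ipaddress.ip_network(rdst)
                    and rproto in ("any", proto)
                    and rport in (None, port)):
                return action
        return "deny"   # whatever is not explicitly permitted is denied

    print(decide("198.51.100.7", "192.0.2.10", "tcp", 23))   # -> deny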

3.9.6 Authentication Systems

Authentication refers to the process of proving a claimed identity
to the satisfaction of some permission-granting authority.
Authentication systems are hardware, software, or procedural
mechanisms that enable a user to obtain access to computing
resources. At the simplest level, the system administrator who
adds new user accounts to the system is part of the system
authentication mechanism. At the other end of the spectrum,
fingerprint readers or retinal scanners provide a very high-tech
solution to establishing a potential user’s identity. Without
establishing and proving a user’s identity prior to establishing a
session, your site’s computers are vulnerable to any sort of
attack.

Typically, a user authenticates himself or herself to the system
by entering a password in response to a prompt.
Challenge/Response mechanisms improve upon passwords by prompting
the user for some piece of information shared by both the computer
and the user (such as mother’s maiden name, etc.).

3.9.6.1 Kerberos

Kerberos, named after the dog who in mythology is said to stand
at the gates of Hades, is a collection of software used in a
large network to establish a user’s claimed identity.
Developed at the Massachusetts Institute of Technology (MIT),
it uses a combination of encryption and distributed databases
so that a user at a campus facility can login and start a
session from any computer located on the campus. This has
clear advantages in certain environments where there are a
large number of potential users who may establish a connection
from any one of a large number of workstations. Some vendors
are now incorporating Kerberos into their systems.

It should be noted that while Kerberos makes several advances
in the area of authentication, some security weaknesses in the
protocol still remain [15].

3.9.6.2 Smart Cards

Several systems use “smart cards” (a small calculator-like
device) to help authenticate users. These systems depend on
the user having an object in his or her possession. One such
system involves a new password procedure that requires a user to enter
a value obtained from a “smart card” when asked for a password
by the computer. Typically, the host machine will give the
user some piece of information that is entered into the
keyboard of the smart card. The smart card will display a
response which must then be entered into the computer before
the session will be established. Another such system involves
a smart card which displays a number which changes over time,
but which is synchronized with the authentication software on
the computer.
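
The time-synchronized variety can be sketched as follows: the card and
the host each run something like this Python routine over a shared
secret, and the host accepts the session only if the two values agree.
The interval and code length here are arbitrary choices for
illustration, not those of any particular product.

    import hashlib, hmac, struct, time

    def time_code(shared_secret, interval=60):
        # Both sides derive a short code from the secret and the
        # current time window; the code changes every `interval`
        # seconds.
        counter = int(time.time()) // interval
        digest = hmac.new(shared_secret, struct.pack(">Q", counter),
                          hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        value = struct.unpack(">I",
                              digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return "%06d" % (value % 1000000)

    print(time_code(b"example shared secret"))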

This is a better way of dealing with authentication than with
the traditional password approach. On the other hand, some say
it’s inconvenient to carry the smart card. Start-up costs are
likely to be high as well.

3.9.7 Books, Lists, and Informational Sources

There are many good sources for information regarding computer
security. The annotated bibliography at the end of this document
can provide you with a good start. In addition, information can
be obtained from a variety of other sources, some of which are
described in this section.

3.9.7.1 Security Mailing Lists

The UNIX Security mailing list exists to notify system
administrators of security problems before they become common
knowledge, and to provide security enhancement information. It
is a restricted-access list, open only to people who can be
verified as being principal systems people at a site. Requests
to join the list must be sent by either the site contact listed
in the Defense Data Network’s Network Information Center’s (DDN
NIC) WHOIS database, or from the “root” account on one of the
major site machines. You must include the destination address
you want on the list, an indication of whether you want to be
on the mail reflector list or receive weekly digests, the
electronic mail address and voice telephone number of the site
contact if it isn’t you, and the name, address, and telephone
number of your organization. This information should be sent
to SECURITY-REQUEST@CPD.COM.

The RISKS digest is a component of the ACM Committee on
Computers and Public Policy, moderated by Peter G. Neumann. It
is a discussion forum on risks to the public in computers and
related systems, and along with discussing computer security
and privacy issues, has discussed such subjects as the Stark
incident, the shooting down of the Iranian airliner in the
Persian Gulf (as it relates to the computerized weapons
systems), problems in air and railroad traffic control systems,
software engineering, and so on. To join the mailing list,
send a message to RISKS-REQUEST@CSL.SRI.COM. This list is also
available in the USENET newsgroup “comp.risks”.

The VIRUS-L list is a forum for the discussion of computer
virus experiences, protection software, and related topics.
The list is open to the public, and is implemented as a
moderated digest. Most of the information is related to
personal computers, although some of it may be applicable to
larger systems. To subscribe, send the line:

SUB VIRUS-L your full name

to the address LISTSERV%LEHIIBM1.BITNET@MITVMA.MIT.EDU. This
list is also available via the USENET newsgroup “comp.virus”.

The Computer Underground Digest “is an open forum dedicated to
sharing information among computerists and to the presentation
and debate of diverse views.” While not directly a security
list, it does contain discussions about privacy and other
security related topics. The list can be read on USENET as
alt.society.cu-digest, or to join the mailing list, send mail
to Gordon Myer (TK0JUT2%NIU.bitnet@mitvma.mit.edu).
Submissions may be mailed to: cud@chinacat.unicom.com.

3.9.7.2 Networking Mailing Lists

The TCP-IP mailing list is intended to act as a discussion
forum for developers and maintainers of implementations of the
TCP/IP protocol suite. It also discusses network-related
security problems when they involve programs providing network
services, such as “Sendmail”. To join the TCP-IP list, send a
message to TCP-IP-REQUEST@NISC.SRI.COM. This list is also
available in the USENET newsgroup “comp.protocols.tcp-ip”.

SUN-NETS is a discussion list for items pertaining to
networking on Sun systems. Much of the discussion is related
to NFS, NIS (formerly Yellow Pages), and name servers. To
subscribe, send a message to SUN-NETS-REQUEST@UMIACS.UMD.EDU.

The USENET groups misc.security and alt.security also discuss
security issues. misc.security is a moderated group and also
includes discussions of physical security and locks.
alt.security is unmoderated.

3.9.7.3 Response Teams

Several organizations have formed special groups of people to
deal with computer security problems. These teams collect
information about possible security holes and disseminate it to
the proper people, track intruders, and assist in recovery from
security violations. The teams typically have both electronic
mail distribution lists as well as a special telephone number
which can be called for information or to report a problem.
Many of these teams are members of the CERT System, which is
coordinated by the National Institute of Standards and
Technology (NIST), and exists to facilitate the exchange of
information between the various teams.

3.9.7.3.1 DARPA Computer Emergency Response Team

The Computer Emergency Response Team/Coordination Center
(CERT/CC) was established in December 1988 by the Defense
Advanced Research Projects Agency (DARPA) to address
computer security concerns of research users of the
Internet. It is operated by the Software Engineering
Institute (SEI) at Carnegie-Mellon University (CMU). The
CERT can immediately confer with experts to diagnose and
solve security problems, and also establish and maintain
communications with the affected computer users and
government authorities as appropriate.

The CERT/CC serves as a clearing house for the
identification and repair of security vulnerabilities,
informal assessments of existing systems, improvement of
emergency response capability, and both vendor and user
security awareness. In addition, the team works with
vendors of various systems in order to coordinate the fixes
for security problems.

The CERT/CC sends out security advisories to the CERT-
ADVISORY mailing list whenever appropriate. They also
operate a 24-hour hotline that can be called to report
security problems (e.g., someone breaking into your system),
as well as to obtain current (and accurate) information
about rumored security problems.

To join the CERT-ADVISORY mailing list, send a message to
CERT@CERT.SEI.CMU.EDU and ask to be added to the mailing
list. The material sent to this list also appears in the
USENET newsgroup “comp.security.announce”. Past advisories
are available for anonymous FTP from the host
CERT.SEI.CMU.EDU. The 24-hour hotline number is (412) 268-
7090.

The CERT/CC also maintains a CERT-TOOLS list to encourage
the exchange of information on tools and techniques that
increase the secure operation of Internet systems. The
CERT/CC does not review or endorse the tools described on
the list. To subscribe, send a message to CERT-TOOLS-
REQUEST@CERT.SEI.CMU.EDU and ask to be added to the mailing
list.

The CERT/CC maintains other generally useful security
information for anonymous FTP from CERT.SEI.CMU.EDU. Get
the README file for a list of what is available.

For more information, contact:

CERT
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213-3890

(412) 268-7090
cert@cert.sei.cmu.edu.

3.9.7.3.2 DDN Security Coordination Center

For DDN users, the Security Coordination Center (SCC) serves
a function similar to CERT. The SCC is the DDN’s clearing-
house for host/user security problems and fixes, and works
with the DDN Network Security Officer. The SCC also
distributes the DDN Security Bulletin, which communicates
information on network and host security exposures, fixes,
and concerns to security and management personnel at DDN
facilities. It is available online, via kermit or anonymous
FTP, from the host NIC.DDN.MIL, in SCC:DDN-SECURITY-yy-
nn.TXT (where “yy” is the year and “nn” is the bulletin
number). The SCC provides immediate assistance with DDN-
related host security problems; call (800) 235-3155 (6:00
a.m. to 5:00 p.m. Pacific Time) or send email to
SCC@NIC.DDN.MIL. For 24 hour coverage, call the MILNET
Trouble Desk (800) 451-7413 or AUTOVON 231-1713.

3.9.7.3.3 NIST Computer Security Resource and Response Center

The National Institute of Standards and Technology (NIST)
has responsibility within the U.S. Federal Government for
computer science and technology activities. NIST has played
a strong role in organizing the CERT System and is now
serving as the CERT System Secretariat. NIST also operates
a Computer Security Resource and Response Center (CSRC) to
provide help and information regarding computer security
events and incidents, as well as to raise awareness about
computer security vulnerabilities.

The CSRC team operates a 24-hour hotline, at (301) 975-5200.
For individuals with access to the Internet, on-line
publications and computer security information can be
obtained via anonymous FTP from the host CSRC.NCSL.NIST.GOV
(129.6.48.87). NIST also operates a personal computer
bulletin board that contains information regarding computer
viruses as well as other aspects of computer security. To
access this board, set your modem to 300/1200/2400 BPS, 1
stop bit, no parity, and 8-bit characters, and call (301)
948-5717. All users are given full access to the board
immediately upon registering.

NIST has produced several special publications related to
computer security and computer viruses in particular; some
of these publications are downloadable. For further
information, contact NIST at the following address:

Computer Security Resource and Response Center
A-216 Technology
Gaithersburg, MD 20899
Telephone: (301) 975-3359
Electronic Mail: CSRC@nist.gov

3.9.7.3.4 DOE Computer Incident Advisory Capability (CIAC)

CIAC is the Department of Energy’s (DOE’s) Computer Incident
Advisory Capability. CIAC is a four-person team of computer
scientists from Lawrence Livermore National Laboratory
(LLNL) charged with the primary responsibility of assisting
DOE sites faced with computer security incidents (e.g.,
intruder attacks, virus infections, worm attacks, etc.).
This capability is available to DOE sites on a 24-hour-a-day
basis.

CIAC was formed to provide a centralized response capability
(including technical assistance), to keep sites informed of
current events, to deal proactively with computer security
issues, and to maintain liaisons with other response teams
and agencies. CIAC’s charter is to assist sites (through
direct technical assistance, providing information, or
referring inquiries to other technical experts), serve as a
clearinghouse for information about threats/known
incidents/vulnerabilities, develop guidelines for incident
handling, develop software for responding to
events/incidents, analyze events and trends, conduct
training and awareness activities, and alert and advise
sites about vulnerabilities and potential attacks.

CIAC’s business hours phone number is (415) 422-8193 or FTS
532-8193. CIAC’s e-mail address is CIAC@TIGER.LLNL.GOV.

3.9.7.3.5 NASA Ames Computer Network Security Response Team

The Computer Network Security Response Team (CNSRT) is NASA
Ames Research Center’s local version of the DARPA CERT.
Formed in August of 1989, the team has a constituency that
is primarily Ames users, but it is also involved in
assisting other NASA Centers and federal agencies. CNSRT
maintains liaisons with the DOE’s CIAC team and the DARPA
CERT. It is also a charter member of the CERT System. The
team may be reached by 24 hour pager at (415) 694-0571, or
by electronic mail to CNSRT@AMES.ARC.NASA.GOV.

3.9.7.4 DDN Management Bulletins

The DDN Management Bulletin is distributed electronically by
the DDN NIC under contract to the Defense Communications Agency
(DCA). It is a means of communicating official policy,
procedures, and other information of concern to management
personnel at DDN facilities.

The DDN Security Bulletin is distributed electronically by the
DDN SCC, also under contract to DCA, as a means of
communicating information on network and host security
exposures, fixes, and concerns to security and management
personnel at DDN facilities.

Anyone may join the mailing lists for these two bulletins by
sending a message to NIC@NIC.DDN.MIL and asking to be placed on
the mailing lists. These messages are also posted to the
USENET newsgroup “ddn.mgt-bulletin”. For additional
information, see section 8.7.

3.9.7.5 System Administration List

The SYSADM-LIST is a list pertaining exclusively to UNIX system
administration. Mail requests to be added to the list to
SYSADM-LIST-REQUEST@SYSADMIN.COM.

3.9.7.6 Vendor Specific System Lists

The SUN-SPOTS and SUN-MANAGERS lists are discussion groups for
users and administrators of systems supplied by Sun
Microsystems. SUN-SPOTS is a fairly general list, discussing
everything from hardware configurations to simple UNIX
questions. To subscribe, send a message to SUN-SPOTS-
REQUEST@RICE.EDU. This list is also available in the USENET
newsgroup “comp.sys.sun”. SUN-MANAGERS is a discussion list
for Sun system administrators and covers all aspects of Sun
system administration. To subscribe, send a message to SUN-
MANAGERS-REQUEST@EECS.NWU.EDU.

The APOLLO list discusses the HP/Apollo system and its
software. To subscribe, send a message to APOLLO-
REQUEST@UMIX.CC.UMICH.EDU. APOLLO-L is a similar list which
can be subscribed to by sending

SUB APOLLO-L your full name

to LISTSERV%UMRVMB.BITNET@VM1.NODAK.EDU.

HPMINI-L pertains to the Hewlett-Packard 9000 series and HP/UX
operating system. To subscribe, send

SUB HPMINI-L your full name

to LISTSERV%UAFSYSB.BITNET@VM1.NODAK.EDU.

INFO-IBMPC discusses IBM PCs and compatibles, as well as MS-
DOS. To subscribe, send a note to INFO-IBMPC-REQUEST@WSMR-
SIMTEL20.ARMY.MIL.

There are numerous other mailing lists for nearly every popular
computer or workstation in use today. For a complete list,
obtain the file “netinfo/interest-groups” via anonymous FTP
from the host FTP.NISC.SRI.COM.

3.9.7.7 Professional Societies and Journals

The IEEE Technical Committee on Security & Privacy publishes a
quarterly magazine, “CIPHER”.

IEEE Computer Society,
1730 Massachusetts Ave. N.W.
Washington, DC 20036-1903

The ACM SigSAC (Special Interest Group on Security, Audit, and
Controls) publishes a quarterly magazine, “SIGSAC Review”.

Association for Computing Machinery
11 West 42nd St.
New York, N.Y. 10036

The Information Systems Security Association publishes a
quarterly magazine called “ISSA Access”.

Information Systems Security Association
P.O. Box 9457
Newport Beach, CA 92658

“Computers and Security” is an “international journal for the
professional involved with computer security, audit and
control, and data integrity.”

$266/year, 8 issues (1990)

Elsevier Advanced Technology
Journal Information Center
655 Avenue of the Americas
New York, NY 10010

The “Data Security Letter” is published “to help data security
professionals by providing inside information and knowledgable
analysis of developments in computer and communications
security.”

$690/year, 9 issues (1990)

Data Security Letter
P.O. Box 1593
Palo Alto, CA 94302

3.9.8 Problem Reporting Tools

3.9.8.1 Auditing

Auditing is an important tool that can be used to enhance the
security of your installation. Not only does it give you a
means of identifying who has accessed your system (and may have
done something to it) but it also gives you an indication of
how your system is being used (or abused) by authorized users
and attackers alike. In addition, the audit trail
traditionally kept by computer systems can become an invaluable
piece of evidence should your system be penetrated.

3.9.8.1.1 Verify Security

An audit trail shows how the system is being used from day
to day. Depending upon how your site audit log is
configured, your log files should show a range of access
attempts that can show what normal system usage should look
like. Deviation from that normal usage could be the result
of penetration from an outside source using an old or stale
user account. Observing a deviation in logins, for example,
could be your first indication that something unusual is
happening.
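
As an illustration only, a short Python script could scan a login log
and flag entries outside an expected usage window; the log format,
file name, and "normal hours" below are hypothetical and would have to
be adapted to whatever your system actually records.

    from datetime import datetime

    EXPECTED_HOURS = range(7, 19)      # assumed "normal" working hours

    def unusual_logins(path="logins.txt"):
        # Assumed record format: "username YYYY-MM-DD HH:MM" per line.
        with open(path) as fp:
            for line in fp:
                user, stamp = line.split(None, 1)
                when = datetime.strptime(stamp.strip(),
                                         "%Y-%m-%d %H:%M")
                if when.hour not in EXPECTED_HOURS:
                    print("unusual login:", user, when)

    unusual_logins()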

3.9.8.1.2 Verify Software Configurations

One of the ruses used by attackers to gain access to a
system is by the insertion of a so-called Trojan Horse
program. A Trojan Horse program can be a program that does
something useful, or merely something interesting. It
always does something unexpected, like steal passwords or
copy files without your knowledge [25]. Imagine a Trojan
login program that prompts for username and password in the
usual way, but also writes that information to a special
file that the attacker can come back and read at will.
Imagine a Trojan Editor program that, despite the file
permissions you have given your files, makes copies of
everything in your directory space without you knowing about
it.

This points out the need for configuration management of the
software that runs on a system, not as it is being
developed, but as it is in actual operation. Techniques for
doing this range from checking each command every time it is
executed against some criterion (such as a cryptoseal,
described above) to merely checking the date and time stamp
of the executable. Another technique might be to check each
command in batch mode at midnight.
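
One very small sketch of this idea, in Python, keeps a baseline of
cryptographic digests for a few monitored programs and later reports
any that have changed. The file names are examples only, and a real
tool would also have to protect the baseline itself from tampering.

    import hashlib, json

    BASELINE = "baseline.json"              # keep this somewhere safe
    MONITORED = ["/bin/login", "/bin/su", "/usr/bin/passwd"]

    def digest(path):
        with open(path, "rb") as fp:
            return hashlib.sha256(fp.read()).hexdigest()

    def record_baseline():
        with open(BASELINE, "w") as fp:
            json.dump({p: digest(p) for p in MONITORED}, fp)

    def verify():
        with open(BASELINE) as fp:
            known = json.load(fp)
        for path, good in known.items():
            if digest(path) != good:
                print("WARNING: %s has changed since the baseline"
                      % path)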

3.9.8.2 Tools

COPS is a security tool for system administrators that checks
for numerous common security problems on UNIX systems [27].
COPS is a collection of shell scripts and C programs that can
easily be run on almost any UNIX variant. Among other things,
it checks the following items and sends the results to the
system administrator:

– Checks “/dev/kmem” and other devices for world
read/writability.

– Checks special or important files and directories for
“bad” modes (world writable, etc.).

– Checks for easily-guessed passwords.

– Checks for duplicate user ids, invalid fields in the
password file, etc.

– Checks for duplicate group ids, invalid fields in the
group file, etc.

– Checks all users’ home directories and their “.cshrc”,
“.login”, “.profile”, and “.rhosts” files for security
problems.

– Checks all commands in the “/etc/rc” files and “cron”
files for world writability.

– Checks for bad “root” paths, NFS file systems exported
to the world, etc.

– Includes an expert system that checks to see if a given
user (usually “root”) can be compromised, given that
certain rules are true.

– Checks for changes in the setuid status of programs on the
system.

The COPS package is available from the “comp.sources.unix”
archive on “ftp.uu.net”, and also from the UNIX-SW repository
on the MILNET host “wsmr-simtel20.army.mil”.
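
The flavor of the checks COPS performs can be seen in a small Python
fragment like the following, which looks for world-writable files.
This is only a sketch of one of the items listed above, not part of
COPS itself, and the paths are examples.

    import os, stat

    def report_world_writable(paths):
        for p in paths:
            try:
                mode = os.stat(p).st_mode
            except OSError:
                continue                 # skip files that do not exist
            if mode & stat.S_IWOTH:
                print("world-writable:", p)

    report_world_writable(["/etc/passwd", "/etc/rc", "/dev/kmem"])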

3.9.9 Communication Among Administrators

3.9.9.1 Secure Operating Systems

The following list of products and vendors is adapted from the
National Computer Security Center’s (NCSC) Evaluated Products
List. They represent those companies who have either received
an evaluation from the NCSC or are in the process of a product
evaluation. This list is not complete, but it is
representative of those operating systems and add on components
available in the commercial marketplace.

For a more detailed listing of the current products appearing
in the NCSC EPL, contact the NCSC at:

National Computer Security Center
9800 Savage Road
Fort George G. Meade, MD 20755-6000
(301) 859-4458

                                                       Version      Evaluation
Evaluated Product            Vendor                    Evaluated    Class
-----------------------------------------------------------------------
Secure Communications        Honeywell Information     2.1          A1
Processor (SCOMP)            Systems, Inc.

Multics                      Honeywell Information     MR11.0       B2
                             Systems, Inc.

System V/MLS 1.1.2 on UNIX   AT&T                      1.1.2        B1
System V 3.1.1 on AT&T
3B2/500 and 3B2/600

OS 1100                      Unisys Corp.              Security     B1
                                                       Release 1

MPE V/E                      Hewlett-Packard Computer  G.03.04      C2
                             Systems Division

AOS/VS on MV/ECLIPSE series  Data General Corp.        7.60         C2

VM/SP or VM/SP HPO with CMS, IBM Corp.                 5            C2
RACF, DIRMAINT, VMTAPE-MS,
ISPF

MVS/XA with RACF             IBM Corp.                 2.2,2.3      C2

VAX/VMS                      Digital Equipment Corp.   4.3          C2

NOS                          Control Data Corp.        NOS          C2
                                                       Security
                                                       Eval Product

TOP SECRET                   CGA Software Products     3.0/163      C2
                             Group, Inc.

Access Control Facility 2    SKK, Inc.                 3.1.3        C2

UTX/32S                      Gould, Inc. Computer      1.0          C2
                             Systems Division

A Series MCP/AS with         Unisys Corp.              3.7          C2
InfoGuard Security
Enhancements

Primos                       Prime Computer, Inc.      21.0.1DODC2A C2

Resource Access Control      IBM Corp.                 1.5          C1
Facility (RACF)

                                                       Version      Candidate
Candidate Product            Vendor                    Evaluated    Class
-----------------------------------------------------------------------
Boeing MLS LAN               Boeing Aerospace                       A1 M1

Trusted XENIX                Trusted Information                    B2
                             Systems, Inc.

VSLAN                        VERDIX Corp.                           B2

System V/MLS                 AT&T                                   B1

VM/SP with RACF              IBM Corp.                 5/1.8.2      C2

Wang SVS/OS with CAP         Wang Laboratories, Inc.   1.0          C2

3.9.9.2 Obtaining Fixes for Known Problems

It goes without saying that computer systems have bugs. Even
operating systems, upon which we depend for protection of our
data, have bugs. And since there are bugs, things can be
broken, both maliciously and accidentally. It is important
that whenever bugs are discovered, a fix should be identified
and implemented as soon as possible. This should minimize any
exposure caused by the bug in the first place.

A corollary to the bug problem is: from whom do I obtain the
fixes? Most systems have some support from the manufacturer or
supplier. Fixes coming from that source tend to be implemented
quickly after receipt. Fixes for some problems are often
posted on the network and are left to the system administrators
to incorporate as they can. The problem is that one wants to
have faith that the fix will close the hole and not introduce
any others. We will tend to trust that the manufacturer’s
fixes are better than those that are posted on the net.

3.9.9.3 Sun Customer Warning System

Sun Microsystems has established a Customer Warning System
(CWS) for handling security incidents. This is a formal
process which includes:

– Having a well advertised point of contact in Sun
for reporting security problems.
– Pro-actively alerting customers of worms, viruses,
or other security holes that could affect their systems.
– Distributing the patch (or work-around) as quickly
as possible.

They have created an electronic mail address, SECURITY-
ALERT@SUN.COM, which will enable customers to report security
problems. A voice-mail backup is available at (415) 688-9081.
A “Security Contact” can be designated by each customer site;
this person will be contacted by Sun in case of any new
security problems. For more information, contact your Sun
representative.

3.9.9.4 Trusted Archive Servers

Several sites on the Internet maintain large repositories of
public-domain and freely distributable software, and make this
material available for anonymous FTP. This section describes
some of the larger repositories. Note that none of these
servers implements secure checksums or anything else
guaranteeing the integrity of their data. Thus, the notion of
“trust” here should be understood in a somewhat limited sense.

3.9.9.4.1 Sun Fixes on UUNET

Sun Microsystems has contracted with UUNET Communications
Services, Inc., to make fixes for bugs in Sun software
available via anonymous FTP. You can access these fixes by
using the “ftp” command to connect to the host FTP.UU.NET.
Then change into the directory “sun-dist/security”, and
obtain a directory listing. The file “README” contains a
brief description of what each file in this directory
contains, and what is required to install the fix.
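
For illustration, the session described above could also be scripted
with Python's standard ftplib module, as sketched below; the host and
directory names are the ones given in the text, and the server may of
course no longer offer this service.

    from ftplib import FTP

    ftp = FTP("ftp.uu.net")
    ftp.login()                        # anonymous login
    ftp.cwd("sun-dist/security")
    ftp.retrlines("LIST")              # obtain a directory listing
    with open("README", "wb") as fp:
        ftp.retrbinary("RETR README", fp.write)
    ftp.quit()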

3.9.9.4.2 Berkeley Fixes

The University of California at Berkeley also makes fixes
available via anonymous FTP; these fixes pertain primarily
to the current release of BSD UNIX (currently, release 4.3).
However, even if you are not running their software, these
fixes are still important, since many vendors (Sun, DEC,
Sequent, etc.) base their software on the Berkeley releases.

The Berkeley fixes are available for anonymous FTP from the
host UCBARPA.BERKELEY.EDU in the directory “4.3/ucb-fixes”.
The file “INDEX” in this directory describes what each file
contains. They are also available from UUNET (see section
3.9.9.4.3).

Berkeley also distributes new versions of “sendmail” and
“named” from this machine. New versions of these commands
are stored in the “4.3” directory, usually in the files
“sendmail.tar.Z” and “bind.tar.Z”, respectively.

3.9.9.4.3 Simtel-20 and UUNET

The two largest general-purpose software repositories on the
Internet are the hosts WSMR-SIMTEL20.ARMY.MIL and
FTP.UU.NET.

WSMR-SIMTEL20.ARMY.MIL is a TOPS-20 machine operated by the
U.S. Army at White Sands Missile Range (WSMR), New Mexico.
The directory “pd2:” contains a large amount of UNIX
software, primarily taken from the “comp.sources”
newsgroups. The directories “pd1:” and “pd2:” contain
software for IBM PC systems, and “pd3:” contains software for
the Apple Macintosh.

FTP.UU.NET is operated by UUNET Communications Services,
Inc. in Falls Church, Virginia. This company sells Internet
and USENET access to sites all over the country (and
internationally). The software posted to the following
USENET source newsgroups is stored here, in directories of
the same name:

comp.sources.games
comp.sources.misc
comp.sources.sun
comp.sources.unix
comp.sources.x

Numerous other distributions, such as all the freely
distributable Berkeley UNIX source code, Internet Request
for Comments (RFCs), and so on are also stored on this
system.

3.9.9.4.4 Vendors

Many vendors make fixes for bugs in their software available
electronically, either via mailing lists or via anonymous
FTP. You should contact your vendor to find out if they
offer this service, and if so, how to access it. Some
vendors that offer these services include Sun Microsystems
(see above), Digital Equipment Corporation (DEC), the
University of California at Berkeley (see above), and Apple
Computer [5, CURRY].

4. Types of Security Procedures

4.1 System Security Audits

Most businesses undergo some sort of annual financial auditing as a
regular part of their business life. Security audits are an
important part of running any computing environment. Part of the
security audit should be a review of any policies that concern system
security, as well as the mechanisms that are put in place to enforce
them.

4.1.1 Organize Scheduled Drills

Although not something that would be done each day or week,
scheduled drills may be conducted to determine if the procedures
defined are adequate for the threat to be countered. If your
major threat is one of natural disaster, then a drill would be
conducted to verify your backup and recovery mechanisms. On the
other hand, if your greatest threat is from external intruders
attempting to penetrate your system, a drill might be conducted to
actually try a penetration to observe the effect of the policies.

Drills are a valuable way to test that your policies and
procedures are effective. On the other hand, drills can be time-
consuming and disruptive to normal operations. It is important to
weigh the benefits of the drills against the possible time loss
which may be associated with them.

4.1.2 Test Procedures

If the choice is made not to use scheduled drills to examine
your entire security procedure at one time, it is important to
test individual procedures frequently. Examine your backup
procedure to make sure you can recover data from the tapes. Check
log files to be sure that the information which is supposed to be
logged to them actually is being logged, and so on.

When a security audit is mandated, great care should be used in
devising tests of the security policy. It is important to clearly
identify what is being tested, how the test will be conducted, and
results expected from the test. This should all be documented and
included in or as an adjunct to the security policy document
itself.

It is important to test all aspects of the security policy, both
procedural and automated, with a particular emphasis on the
automated mechanisms used to enforce the policy. Tests should be
defined to ensure a comprehensive examination of policy features,
that is, if a test is defined to examine the user logon process,
it should be explicitly stated that both valid and invalid user
names and passwords will be used to demonstrate proper operation
of the logon program.

Keep in mind that there is a limit to the reasonableness of tests.
The purpose of testing is to ensure confidence that the security
policy is being correctly enforced, and not to “prove” the
absoluteness of the system or policy. The goal should be to
obtain some assurance that the reasonable and credible controls
imposed by your security policy are adequate.

4.2 Account Management Procedures

Procedures to manage accounts are important in preventing
unauthorized access to your system. It is necessary to decide
several things: Who may have an account on the system? How long may
someone have an account without renewing his or her request? How do
old accounts get removed from the system? The answers to all these
questions should be explicitly set out in the policy.

In addition to deciding who may use a system, it may be important to
determine what each user may use the system for (is personal use
allowed, for example). If you are connected to an outside network,
your site or the network management may have rules about what the
network may be used for. Therefore, it is important for any security
policy to define an adequate account management procedure for both
administrators and users. Typically, the system administrator would
be responsible for creating and deleting user accounts and generally
maintaining overall control of system use. To some degree, account
management is also the responsibility of each system user in the
sense that the user should observe any system messages and events
that may be indicative of a policy violation. For example, a message
at logon that indicates the date and time of the last logon should be
reported by the user if it indicates an unreasonable time of last
logon.

4.3 Password Management Procedures

A policy on password management may be important if your site wishes
to enforce secure passwords. These procedures may range from asking
or forcing users to change their passwords occasionally to actively
attempting to break users’ passwords and then informing the user of
how easy it was to do. Another part of password management policy
covers who may distribute passwords – can users give their passwords
to other users?

Section 2.3 discusses some of the policy issues that need to be
decided for proper password management. Regardless of the policies,
password management procedures need to be carefully setup to avoid
disclosing passwords. The choice of initial passwords for accounts
is critical. In some cases, users may never login to activate an
account; thus, the choice of the initial password should not be
easily guessed. Default passwords should never be assigned to
accounts: always create new passwords for each user. If there are
any printed lists of passwords, these should be kept off-line in
secure locations; better yet, don’t list passwords.

4.3.1 Password Selection

Perhaps the most vulnerable part of any computer system is the
account password. Any computer system, no matter how secure it is
from network or dial-up attack, Trojan horse programs, and so on,
can be fully exploited by an intruder if he or she can gain access
via a poorly chosen password. It is important to define a good
set of rules for password selection, and distribute these rules to
all users. If possible, the software which sets user passwords
should be modified to enforce as many of the rules as possible.

A sample set of guidelines for password selection is shown below:

– DON’T use your login name in any form (as-is,
reversed, capitalized, doubled, etc.).

– DON’T use your first, middle, or last name in any form.

– DON’T use your spouse’s or child’s name.

– DON’T use other information easily obtained about you.
This includes license plate numbers, telephone numbers,
social security numbers, the make of your automobile,
the name of the street you live on, etc.

– DON’T use a password of all digits, or all the same
letter.

– DON’T use a word contained in English or foreign
language dictionaries, spelling lists, or other
lists of words.

– DON’T use a password shorter than six characters.

– DO use a password with mixed-case alphabetics.

– DO use a password with non-alphabetic characters (digits
or punctuation).

– DO use a password that is easy to remember, so you don’t
have to write it down.

– DO use a password that you can type quickly, without
having to look at the keyboard.

Methods of selecting a password which adheres to these guidelines
include:

– Choose a line or two from a song or poem, and use the
first letter of each word.

– Alternate between one consonant and one or two vowels, up
to seven or eight characters. This provides nonsense
words which are usually pronounceable, and thus easily
remembered.

– Choose two short words and concatenate them together with
a punctuation character between them.
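
If the password-setting software can be modified, several of the
guidelines above are easy to enforce mechanically. The following
Python function is a minimal sketch: the six-character minimum and the
individual tests mirror the rules above, while the dictionary argument
and everything else about the function are illustrative assumptions.

    import string

    def acceptable(password, login_name, dictionary=()):
        pw = password.lower()
        if len(password) < 6:                       # too short
            return False
        if login_name.lower() in pw:                # contains login name
            return False
        if password.isdigit() or len(set(password)) == 1:
            return False                            # all digits / one char
        if pw in (w.lower() for w in dictionary):   # plain dictionary word
            return False
        mixed_case = (any(c.islower() for c in password)
                      and any(c.isupper() for c in password))
        non_alpha = any(c in string.digits + string.punctuation
                        for c in password)
        return mixed_case and non_alpha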

Users should also be told to change their password periodically,
usually every three to six months. This makes sure that an
intruder who has guessed a password will eventually lose access,
as well as invalidating any list of passwords he/she may have
obtained. Many systems enable the system administrator to force
users to change their passwords after an expiration period; this
software should be enabled if your system supports it [5, CURRY].

Some systems provide software which forces users to change their
passwords on a regular basis. Many of these systems also include
password generators which provide the user with a set of passwords
to choose from. The user is not permitted to make up his or her
own password. There are arguments both for and against systems
such as these. On the one hand, by using generated passwords,
users are prevented from selecting insecure passwords. On the
other hand, unless the generator is good at making up easy to
remember passwords, users will begin writing them down in order to
remember them.

4.3.2 Procedures for Changing Passwords

How password changes are handled is important to keeping passwords
secure. Ideally, users should be able to change their own
passwords on-line. (Note that password changing programs are a
favorite target of intruders. See section 4.4 on configuration
management for further information.)

However, there are exception cases which must be handled
carefully. Users may forget passwords and not be able to get onto
the system. The standard procedure is to assign the user a new
password. Care should be taken to make sure that the real person
is requesting the change and gets the new password. One common
trick used by intruders is to call or send a message to a system
administrator and request a new password. Some external form of
verification should be used before the password is assigned. At
some sites, users are required to show up in person with ID.

There may also be times when many passwords need to be changed.
If a system is compromised by an intruder, the intruder may be
able to steal a password file and take it off the system. Under
these circumstances, one course of action is to change all
passwords on the system. Your site should have procedures for how
this can be done quickly and efficiently. What course you choose
may depend on the urgency of the problem. In the case of a known
attack with damage, you may choose to forcibly disable all
accounts and assign users new passwords before they come back onto
the system. In some places, users are sent a message telling them
that they should change their passwords, perhaps within a certain
time period. If the password isn’t changed before the time period
expires, the account is locked.

Users should be aware of what the standard procedure is for
passwords when a security event has occurred. One well-known
spoof reported by the Computer Emergency Response Team (CERT)
involved messages sent to users, supposedly from local system
administrators, requesting them to immediately change their
password to a new value provided in the message [24]. These
messages were not from the administrators, but from intruders
trying to steal accounts. Users should be warned to immediately
report any suspicious requests such as this to site
administrators.

4.4 Configuration Management Procedures

Configuration management is generally applied to the software
development process. However, it is certainly applicable in an
operational sense as well. Consider that, since many of the
system level programs are intended to enforce the security policy, it
is important that these be “known” as correct. That is, one should
not allow system level programs (such as the operating system, etc.)
to be changed arbitrarily. At very least, the procedures should
state who is authorized to make changes to systems, under what
circumstances, and how the changes should be documented.

In some environments, configuration management is also desirable as
applied to physical configuration of equipment. Maintaining valid
and authorized hardware configuration should be given due
consideration in your security policy.

4.4.1 Non-Standard Configurations

Occasionally, it may be beneficial to have a slightly non-standard
configuration in order to thwart the “standard” attacks used by
some intruders. The non-standard parts of the configuration might
include different password encryption algorithms, different
configuration file locations, and rewritten or functionally
limited system commands.

Non-standard configurations, however, also have their drawbacks.
By changing the “standard” system, these modifications make
software maintenance more difficult by requiring extra
documentation to be written, software modification after operating
system upgrades, and, usually, someone with special knowledge of
the changes.

Because of the drawbacks of non-standard configurations, they are
often only used in environments with a “firewall” machine (see
section 3.9.1). The firewall machine is modified in non-standard
ways since it is susceptible to attack, while internal systems
behind the firewall are left in their standard configurations.

5. Incident Handling

5.1 Overview

This section of the document will supply some guidance to be applied
when a computer security event is in progress on a machine, network,
site, or multi-site environment. The operative philosophy in the
event of a breach of computer security, whether it be an external
intruder attack or a disgruntled employee, is to plan for adverse
events in advance. There is no substitute for creating contingency
plans for the types of events described above.

Traditional computer security, while quite important in the overall
site security plan, usually falls heavily on protecting systems from
attack, and perhaps monitoring systems to detect attacks. Little
attention is usually paid to how to actually handle the attack when
it occurs. The result is that when an attack is in progress, many
decisions are made in haste and can be damaging to tracking down the
source of the incident, collecting evidence to be used in prosecution
efforts, preparing for the recovery of the system, and protecting the
valuable data contained on the system.

5.1.1 Have a Plan to Follow in Case of an Incident

Part of handling an incident is being prepared to respond before
the incident occurs. This includes establishing a suitable level
of protections, so that if the incident becomes severe, the damage
which can occur is limited. Protection includes preparing
incident handling guidelines or a contingency response plan for
your organization or site. Having written plans eliminates much
of the ambiguity which occurs during an incident, and will lead to
a more appropriate and thorough set of responses. Second, part of
protection is preparing a method of notification, so you will know
who to call and the relevant phone numbers. It is important, for
example, to conduct “dry runs,” in which your computer security
personnel, system administrators, and managers simulate handling
an incident.

Learning to respond efficiently to an incident is important for
numerous reasons. The most important benefit is directly to human
beings–preventing loss of human life. Some computing systems are
life critical systems, systems on which human life depends (e.g.,
by controlling some aspect of life-support in a hospital or
assisting air traffic controllers).

An important but often overlooked benefit is an economic one.
Having both technical and managerial personnel respond to an
incident requires considerable resources, resources which could be
utilized more profitably if an incident did not require their
services. If these personnel are trained to handle an incident
efficiently, less of their time is required to deal with that
incident.

A third benefit is protecting classified, sensitive, or
proprietary information. One of the major dangers of a computer
security incident is that information may be irrecoverable.
Efficient incident handling minimizes this danger. When
classified information is involved, other government regulations
may apply and must be integrated into any plan for incident
handling.

A fourth benefit is related to public relations. News about
computer security incidents tends to be damaging to an
organization’s stature among current or potential clients.
Efficient incident handling minimizes the potential for negative
exposure.

A final benefit of efficient incident handling is related to legal
issues. It is possible that in the near future organizations may
be sued because one of their nodes was used to launch a network
attack. In a similar vein, people who develop patches or
workarounds may be sued if the patches or workarounds are
ineffective, resulting in damage to systems, or if the patches or
workarounds themselves damage systems. Knowing about operating
system vulnerabilities and patterns of attacks and then taking
appropriate measures is critical to circumventing possible legal
problems.

5.1.2 Order of Discussion in this Section Suggests an Order for
a Plan

This chapter is arranged such that a list may be generated from
the Table of Contents to provide a starting point for creating a
policy for handling ongoing incidents. The main points to be
included in a policy for handling incidents are:

o Overview (what are the goals and objectives in handling the
incident).
o Evaluation (how serious is the incident).
o Notification (who should be notified about the incident).
o Response (what should the response to the incident be).
o Legal/Investigative (what are the legal and prosecutorial
implications of the incident).
o Documentation Logs (what records should be kept from before,
during, and after the incident).

Each of these points is important in an overall plan for handling
incidents. The remainder of this chapter will detail the issues
involved in each of these topics, and provide some guidance as to
what should be included in a site policy for handling incidents.

5.1.3 Possible Goals and Incentives for Efficient Incident
Handling

As in any set of pre-planned procedures, attention must be placed
on a set of goals to be obtained in handling an incident. These
goals will be placed in order of importance depending on the site,
but one such set of goals might be:

Assure integrity of (life) critical systems.
Maintain and restore data.
Maintain and restore service.
Figure out how it happened.
Avoid escalation and further incidents.
Avoid negative publicity.
Find out who did it.
Punish the attackers.

It is important to prioritize actions to be taken during an
incident well in advance of the time an incident occurs.
Sometimes an incident may be so complex that it is impossible to
do everything at once to respond to it; priorities are essential.
Although priorities will vary from institution-to-institution, the
following suggested priorities serve as a starting point for
defining an organization’s response:

o Priority one — protect human life and people’s
safety; human life always has precedence over all
other considerations.

o Priority two — protect classified and/or sensitive
data (as regulated by your site or by government
regulations).

o Priority three — protect other data, including
proprietary, scientific, managerial and other data,
because loss of data is costly in terms of resources.

o Priority four — prevent damage to systems (e.g., loss
or alteration of system files, damage to disk drives,
etc.); damage to systems can result in costly down
time and recovery.

o Priority five — minimize disruption of computing
resources; it is better in many cases to shut a system
down or disconnect from a network than to risk damage
to data or systems.

An important implication for defining priorities is that once
human life and national security considerations have been
addressed, it is generally more important to save data than system
software and hardware. Although it is undesirable to have any
damage or loss during an incident, systems can be replaced; the
loss or compromise of data (especially classified data), however,
is usually not an acceptable outcome under any circumstances.

Part of handling an incident is being prepared to respond before
the incident occurs. This includes establishing a suitable level
of protection so that if the incident becomes severe, the damage
which can occur is limited. First, protection includes preparing
incident handling guidelines or a contingency response plan for
your organization or site. Written plans eliminate much of the
ambiguity which occurs during an incident, and will lead to a more
appropriate and thorough set of responses. Second, part of
protection is preparing a method of notification so you will know
who to call and how to contact them. For example, every member of
the Department of Energy’s CIAC Team carries a card with every
other team member’s work and home phone numbers, as well as pager
numbers. Third, your organization or site should establish backup
procedures for every machine and system. Having backups
eliminates much of the threat of even a severe incident, since
backups preclude serious data loss. Fourth, you should set up
secure systems. This involves eliminating vulnerabilities,
establishing an effective password policy, and other procedures,
all of which will be explained later in this document. Finally,
conducting training activities is part of protection. It is
important, for example, to conduct “dry runs,” in which your
computer security personnel, system administrators, and managers
simulate handling an incident.

5.1.4 Local Policies and Regulations Providing Guidance

Any plan for responding to security incidents should be guided by
local policies and regulations. Government and private sites that
deal with classified material have specific rules that they must
follow.

The policies your site makes about how it responds to incidents
(as discussed in sections 2.4 and 2.5) will shape your response.
For example, it may make little sense to create mechanisms to
monitor and trace intruders if your site does not plan to take
action against the intruders if they are caught. Other
organizations may have policies that affect your plans. Telephone
companies often release information about telephone traces only to
law enforcement agencies.

Section 5.5 also notes that if any legal action is planned, there
are specific guidelines that must be followed to make sure that
any information collected can be used as evidence.

5.2 Evaluation

5.2.1 Is It Real?

This stage involves determining the exact problem. Of course
many, if not most, signs often associated with virus infections,
system intrusions, etc., are simply anomalies such as hardware
failures. To assist in identifying whether there really is an
incident, it is usually helpful to obtain and use any detection
software which may be available. For example, widely available
software packages can greatly assist someone who thinks there may
be a virus in a Macintosh computer. Audit information is also
extremely useful, especially in determining whether there is a
network attack. It is extremely important to obtain a system
snapshot as soon as one suspects that something is wrong. Many
incidents cause a dynamic chain of events to occur, and an initial
system snapshot may do more good in identifying the problem and
any source of attack than most other actions which can be taken at
this stage. Finally, it is important to start a log book.
Recording system events, telephone conversations, time stamps,
etc., can lead to a more rapid and systematic identification of
the problem, and is the basis for subsequent stages of incident
handling.

There are certain indications or “symptoms” of an incident which
deserve special attention:

o System crashes.
o New user accounts (e.g., the account RUMPLESTILTSKIN
has been created without explanation), or high activity
on an account that has had virtually no activity for
months.
o New files (usually with novel or strange file names,
such as data.xx or k).
o Accounting discrepancies (e.g., in a UNIX system you
might notice that the accounting file called
/usr/admin/lastlog has shrunk, something that should
make you very suspicious that there may be an
intruder).
o Changes in file lengths or dates (e.g., a user should
be suspicious if he/she observes that the .EXE files in
an MS DOS computer have grown by over 1800 bytes for no
apparent reason).
o Attempts to write to system files (e.g., a system manager
notices that a privileged user in a VMS system is
attempting to alter RIGHTSLIST.DAT).
o Data modification or deletion (e.g., files start to
disappear).
o Denial of service (e.g., a system manager and all
other users become locked out of a UNIX system, which
has been changed to single user mode).
o Unexplained, poor system performance (e.g., system
response time becomes unusually slow).
o Anomalies (e.g., “GOTCHA” is displayed on a display
terminal or there are frequent unexplained “beeps”).
o Suspicious probes (e.g., there are numerous
unsuccessful login attempts from another node).
o Suspicious browsing (e.g., someone becomes a root user
on a UNIX system and accesses file after file in one
user’s account, then another’s).

None of these indications is absolute “proof” that an incident is
occurring, nor are all of these indications normally observed when
an incident occurs. If you observe any of these indications,
however, it is important to suspect that an incident might be
occurring, and act accordingly. There is no formula for
determining with 100 percent accuracy that an incident is
occurring (possible exception: when a virus detection package
indicates that your machine has the nVIR virus and you confirm
this by examining contents of the nVIR resource in your Macintosh
computer, you can be very certain that your machine is infected).
It is best at this point to collaborate with other technical and
computer security personnel to make a decision as a group about
whether an incident is occurring.
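
Several of the symptoms above, notably changes in file lengths or
dates, can be spotted by comparing the current state of key files
against a previously recorded snapshot. The following Python
fragment is a minimal sketch of that idea; the baseline location
and the list of watched files are illustrative assumptions, not
recommendations from this handbook, and such a script complements
rather than replaces the detection software mentioned above.

    import json, os, time

    BASELINE = "/var/local/baseline.json"    # assumed snapshot location
    WATCHED = ["/etc/passwd", "/bin/login"]  # illustrative watch list

    def snapshot(paths):
        """Record size and modification time for each watched file."""
        info = {}
        for path in paths:
            try:
                st = os.stat(path)
            except OSError:
                info[path] = None        # a missing file is worth noting
                continue
            info[path] = {"size": st.st_size, "mtime": st.st_mtime}
        return info

    def compare(old, new):
        """Report files whose size or mtime differs from the baseline."""
        for path, current in new.items():
            previous = old.get(path)
            if previous != current:
                print("CHANGED:", path, previous, "->", current)

    if __name__ == "__main__":
        current = snapshot(WATCHED)
        if os.path.exists(BASELINE):
            with open(BASELINE) as f:
                compare(json.load(f), current)
        with open(BASELINE, "w") as f:
            json.dump(current, f)
        print("snapshot recorded at", time.ctime())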

5.2.2 Scope

Along with the identification of the incident is the evaluation of
the scope and impact of the problem. It is important to correctly
identify the boundaries of the incident in order to effectively
deal with it. In addition, the impact of an incident will
determine its priority in allocating resources to deal with the
event. Without an indication of the scope and impact of the
event, it is difficult to determine a correct response.

In order to identify the scope and impact, a set of criteria
should be defined which is appropriate to the site and to the type
of connections available. Some of the issues are:

o Is this a multi-site incident?
o Are many computers at your site affected by this
incident?
o Is sensitive information involved?
o What is the entry point of the incident (network,
phone line, local terminal, etc.)?
o Is the press involved?
o What is the potential damage of the incident?
o What is the estimated time to close out the incident?
o What resources could be required
to handle the incident?
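
One way to make such criteria concrete is to turn a few of the
questions above into a simple scoring checklist that yields a
rough severity level. The short Python sketch below illustrates
the idea; the particular questions, weights, and thresholds are
invented for the example and would have to be chosen by each site.

    # Illustrative weights for some of the scope questions above.
    WEIGHTS = {
        "multi_site": 3,
        "many_local_computers": 2,
        "sensitive_information": 4,
        "press_involved": 2,
    }

    def severity(answers):
        """Sum the weights of the questions answered 'yes'."""
        return sum(WEIGHTS[q] for q, yes in answers.items() if yes)

    if __name__ == "__main__":
        score = severity({
            "multi_site": True,
            "many_local_computers": False,
            "sensitive_information": True,
            "press_involved": False,
        })
        level = "high" if score >= 5 else "medium" if score >= 3 else "low"
        print("severity score:", score, "->", level)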

5.3 Possible Types of Notification

When you have confirmed that an incident is occurring, the
appropriate personnel must be notified. Who is notified, and how,
is very important in keeping the event under control both from a
technical and an emotional standpoint.

5.3.1 Explicit

First of all, any notification to either local or off-site
personnel must be explicit. This requires that any statement (be
it an electronic mail message, phone call, or fax) provides
information about the incident that is clear, concise, and fully
qualified. When you are notifying others who will help you to
handle an event, a “smoke screen” will only divide the effort and
create confusion. If a division of labor is suggested, it is
helpful to provide information to each section about what is being
accomplished in other efforts. This will not only reduce
duplication of effort, but allow people working on parts of the
problem to know where to obtain other information that would help
them resolve a part of the incident.

5.3.2 Factual

Another important consideration when communicating about the
incident is to be factual. Attempting to hide aspects of the
incident by providing false or incomplete information may not only
prevent a successful resolution to the incident, but may even
worsen the situation. This is especially true when the press is
involved. When an incident severe enough to gain press attention
is ongoing, it is likely that any false information you provide
will not be substantiated by other sources. This will reflect
badly on the site and may create enough ill-will between the site
and the press to damage the site’s public relations.

5.3.3 Choice of Language

The choice of language used when notifying people about the
incident can have a profound effect on the way that information is
received. When you use emotional or inflammatory terms, you raise
the expectations of damage and negative outcomes of the incident.
It is important to remain calm both in written and spoken
notifications.

Another issue associated with the choice of language is the
notification to non-technical or off-site personnel. It is
important to accurately describe the incident without undue alarm
or confusing messages. While it is more difficult to describe the
incident to a non-technical audience, it is often more important.
A non-technical description may be required for upper-level
management, the press, or law enforcement liaisons. The
importance of these notifications cannot be overestimated and may
make the difference between handling the incident properly and
escalating to some higher level of damage.

5.3.4 Notification of Individuals

o Point of Contact (POC) people (Technical, Administrative,
Response Teams, Investigative, Legal, Vendors, Service
providers), and which POCs are visible to whom.
o Wider community (users).
o Other sites that might be affected.

Finally, there is the question of who should be notified during
and after the incident. There are several classes of individuals
that need to be considered for notification. These are the
technical personnel, administration, appropriate response teams
(such as CERT or CIAC), law enforcement, vendors, and other
service providers. These issues are important for the central
point of contact, since that is the person responsible for the
actual notification of others (see section 5.3.6 for further
information). A list of people in each of these categories is an
important time saver for the POC during an incident. It is much
more difficult to find an appropriate person during an incident
when many urgent events are ongoing.
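
The prepared list of contacts mentioned above can be as simple as
a small table keyed by category of notification. The Python
sketch below shows one possible shape for such a list; every name
and number in it is a fictitious placeholder.

    # Placeholder contact roster; every entry is fictitious.
    CONTACTS = {
        "technical":      [("A. Admin",   "x1234 (office), 555-0101 (home)")],
        "administration": [("B. Manager", "x2345 (office)")],
        "response team":  [("Example IRT", "555-0199 (24 hours)")],
        "legal":          [("C. Counsel", "x3456 (office)")],
    }

    def notify_list(category):
        """Return the people to call for one category of notification."""
        return CONTACTS.get(category, [])

    if __name__ == "__main__":
        for name, phone in notify_list("response team"):
            print(name, "-", phone)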

In addition to the people responsible for handling part of the
incident, there may be other sites affected by the incident (or
perhaps simply at risk from the incident). A wider community of
users may also benefit from knowledge of the incident. Often, a
report of the incident once it is closed out is appropriate for
publication to the wider user community.

5.3.5 Public Relations – Press Releases

One of the most important issues to consider is when, to whom, and
how much to release to the general public through the press.
There are many factors to weigh in making this decision.
First and foremost, if a public relations office exists for the
site, it is important to use this office as liaison to the press.
The public relations office is trained in the type and wording of
information released, and will help to assure that the image of
the site is protected during and after the incident (if possible).
A public relations office has the advantage that you can
communicate candidly with it, and that it provides a buffer
between the constant press attention and the POC’s need to
maintain control over the incident.

If a public relations office is not available, the information
released to the press must be carefully considered. If the
information is sensitive, it may be advantageous to provide only
minimal or overview information to the press. It is quite
possible that any information provided to the press will be
quickly reviewed by the perpetrator of the incident. As a
contrast to this consideration, it was discussed above that
misleading the press can often backfire and cause more damage than
releasing sensitive information.

While it is difficult to determine in advance what level of detail
to provide to the press, some guidelines to keep in mind are:

o Keep the technical level of detail low. Detailed
information about the incident may provide enough
information for copy-cat events or even damage the
site’s ability to prosecute once the event is over.
o Keep speculation out of press statements. Speculation
about who is causing the incident or about the motives is
very likely to be in error and may give an inflamed view
of the incident.
o Work with law enforcement professionals to assure that
evidence is protected. If prosecution is involved,
assure that the evidence collected is not divulged to
the press.
o Try not to be forced into a press interview before you are
prepared. The popular press is famous for the “2am”
interview, where the hope is to catch the interviewee off
guard and obtain information otherwise not available.
o Do not allow the press attention to detract from the
handling of the event. Always remember that the successful
closure of an incident is of primary importance.

5.3.6 Who Needs to Get Involved?

There now exist a number of incident response teams (IRTs) such
as the CERT and the CIAC. (See sections 3.9.7.3.1 and 3.9.7.3.4.)
Teams exist for many major government agencies and large
corporations. If such a team is available for your site, the
notification of this team should be of primary importance during
the early stages of an incident. These teams are responsible for
coordinating computer security incidents over a range of sites and
larger entities. Even if the incident is believed to be contained
to a single site, it is possible that the information available
through a response team could help in closing out the incident.

In setting up a site policy for incident handling, it may be
desirable to create an incident handling team (IHT), much like
those teams that already exist, that will be responsible for
handling computer security incidents for the site (or
organization). If such a team is created, it is essential that
communication lines be opened between this team and other IHTs.
Once an incident is under way, it is difficult to open a trusted
dialogue between other IHTs if none has existed before.

5.4 Response

A major topic still untouched here is how to actually respond to an
event. The response to an event will fall into the general
categories of containment, eradication, recovery, and follow-up.

Containment

The purpose of containment is to limit the extent of an attack.
For example, it is important to limit the spread of a worm attack
on a network as quickly as possible. An essential part of
containment is decision making (i.e., determining whether to shut
a system down, to disconnect from a network, to monitor system or
network activity, to set traps, to disable functions such as
remote file transfer on a UNIX system, etc.). Sometimes this
decision is trivial; shut the system down if the system is
classified or sensitive, or if proprietary information is at risk!
In other cases, it is worthwhile to risk having some damage to the
system if keeping the system up might enable you to identify an
intruder.

The third stage, containment, should involve carrying out
predetermined procedures. Your organization or site should, for
example, define acceptable risks in dealing with an incident, and
should prescribe specific actions and strategies accordingly.
Finally, notification of cognizant authorities should occur during
this stage.

Eradication

Once an incident has been detected, it is important to first think
about containing the incident. Once the incident has been
contained, it is now time to eradicate the cause. Software may be
available to help you in this effort. For example, eradication
software is available to eliminate most viruses which infect small
systems. If any bogus files have been created, it is time to
delete them at this point. In the case of virus infections, it is
important to clean and reformat any disks containing infected
files. Finally, ensure that all backups are clean. Many systems
infected with viruses become periodically reinfected simply
because people do not systematically eradicate the virus from
backups.

Recovery

Once the cause of an incident has been eradicated, the recovery
phase defines the next stage of action. The goal of recovery is
to return the system to normal. In the case of a network-based
attack, it is important to install patches for any operating
system vulnerability which was exploited.

Follow-up

One of the most important stages of responding to incidents is
also the most often omitted—the follow-up stage. This stage is
important because it helps those involved in handling the incident
develop a set of “lessons learned” (see section 6.3) to improve
future performance in such situations. This stage also provides
information which justifies an organization’s computer security
effort to management, and yields information which may be
essential in legal proceedings.

The most important element of the follow-up stage is performing a
postmortem analysis. Exactly what happened, and at what times?
How well did the staff involved with the incident perform? What
kind of information did the staff need quickly, and how could they
have gotten that information as soon as possible? What would the
staff do differently next time? A follow-up report is valuable
because it provides a reference to be used in case of other
similar incidents. Creating a formal chronology of events
(including time stamps) is also important for legal reasons.
Similarly, it is also important to obtain, as quickly as possible,
a monetary estimate of the amount of damage the incident caused in
terms of any loss of software and files, hardware damage, and
manpower costs to restore altered files, reconfigure affected
systems, and so forth. This estimate may become the basis for
subsequent prosecution activity by the FBI, the U.S. Attorney
General’s Office, etc.
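
Such a monetary estimate is usually little more than a sum over a
handful of cost categories. The short Python sketch below
illustrates one way of totalling such an estimate; the categories
and figures shown are invented for the example and carry no
significance.

    # Illustrative cost categories only; every figure is made up.
    damage_estimate = {
        "staff hours restoring altered files": 40 * 35.0,
        "staff hours reconfiguring systems":   16 * 35.0,
        "replacement of a damaged disk drive": 900.0,
        "estimated loss of software and data": 2500.0,
    }

    total = sum(damage_estimate.values())
    for item, cost in damage_estimate.items():
        print("%-40s $%10.2f" % (item, cost))
    print("%-40s $%10.2f" % ("TOTAL", total))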

5.4.1 What Will You Do?

o Restore control.
o Relation to policy.
o Which level of service is needed?
o Monitor activity.
o Constrain or shut down system.

5.4.2 Consider Designating a “Single Point of Contact”

When an incident is under way, a major issue is deciding who is in
charge of coordinating the activity of the multitude of players.
A major mistake that can be made is to have a number of “points of
contact” (POC) that are not pulling their efforts together. This
will only add to the confusion of the event, and will probably
lead to wasted or ineffective effort.

The single point of contact may or may not be the person “in
charge” of the incident. There are two distinct roles to fill
when deciding who shall be the point of contact and the person in
charge of the incident. The person in charge will make decisions
as to the interpretation of policy applied to the event. The
responsibility for the handling of the event falls onto this
person. In contrast, the point of contact must coordinate the
effort of all the parties involved with handling the event.

The point of contact must be a person with the technical expertise
to successfully coordinate the effort of the system managers and
users involved in monitoring and reacting to the attack. Often
the management structure of a site is such that the administrator
of a set of resources is not a technically competent person with
regard to handling the details of the operations of the computers,
but is ultimately responsible for the use of these resources.

Another important function of the POC is to maintain contact with
law enforcement and other external agencies (such as the CIA, DoD,
U.S. Army, or others) to assure that multi-agency involvement
occurs.

Finally, if legal action in the form of prosecution is involved,
the POC may be able to speak for the site in court. The
alternative is to have multiple witnesses that will be hard to
coordinate in a legal sense, and will weaken any case against the
attackers. A single POC may also be the single person in charge
of evidence collected, which will keep the number of people
accounting for evidence to a minimum. As a rule of thumb, the
more people that touch a potential piece of evidence, the greater
the possibility that it will be inadmissible in court. The
section below (Legal/Investigative) will provide more details for
consideration on this topic.

5.5 Legal/Investigative

5.5.1 Establishing Contacts with Investigative Agencies

It is important to establish contacts with personnel from
investigative agencies such as the FBI and Secret Service as soon
as possible, for several reasons. Local law enforcement and local
security offices or campus police organizations should also be
informed when appropriate. A primary reason is that once a major
attack is in progress, there is little time to call various
personnel in these agencies to determine exactly who the correct
point of contact is. Another reason is that it is important to
cooperate with these agencies in a manner that will foster a good
working relationship, and that will be in accordance with the
working procedures of these agencies. Knowing the working
procedures in advance and the expectations of your point of
contact is a big step in this direction. For example, it is
important to gather evidence that will be admissible in a court of
law. If you don’t know in advance how to gather admissible
evidence, your efforts to collect evidence during an incident are
likely to be of no value to the investigative agency with which
you deal. A final reason for establishing contacts as soon as
possible is that it is impossible to know the particular agency
that will assume jurisdiction in any given incident. Making
contacts and finding the proper channels early will make
responding to an incident go considerably more smoothly.

If your organization or site has a legal counsel, you need to
notify this office soon after you learn that an incident is in
progress. At a minimum, your legal counsel needs to be involved
to protect the legal and financial interests of your site or
organization. There are many legal and practical issues, a few of
which are:

1. Whether your site or organization is willing to risk
negative publicity or exposure to cooperate with legal
prosecution efforts.

2. Downstream liability–if you leave a compromised system
as is so it can be monitored and another computer is damaged
because the attack originated from your system, your site or
organization may be liable for damages incurred.

3. Distribution of information–if your site or organization
distributes information about an attack in which another
site or organization may be involved or the vulnerability
in a product that may affect ability to market that
product, your site or organization may again be liable
for any damages (including damage of reputation).

4. Liabilities due to monitoring–your site or organization
may be sued if users at your site or elsewhere discover
that your site is monitoring account activity without
informing users.

Unfortunately, there are no clear precedents yet on the
liabilities or responsibilities of organizations involved in a
security incident, or of those who might be involved in supporting
an investigative effort. Investigators will often encourage
organizations to help trace and monitor intruders — indeed, most
investigators cannot pursue computer intrusions without extensive
support from the organizations involved. However, investigators
cannot provide protection from liability claims, and these kinds
of efforts may drag out for months and may take lots of effort.

On the other side, an organization’s legal counsel may advise
extreme caution and suggest that tracing activities be halted and
an intruder shut out of the system. This in itself may not
provide protection from liability, and may prevent investigators
from identifying anyone.

The balance between supporting investigative activity and limiting
liability is tricky; you’ll need to consider the advice of your
council and the damage the intruder is causing (if any) in making
your decision about what to do during any particular incident.

Your legal counsel should also be involved in any decision to
contact investigative agencies when an incident occurs at your
site. The decision to coordinate efforts with investigative
agencies is most properly that of your site or organization.
Involving your legal counsel will also foster the multi-level
coordination between your site and the particular investigative
agency involved which in turn results in an efficient division of
labor. Another result is that you are likely to obtain guidance
that will help you avoid future legal mistakes.

Finally, your legal counsel should evaluate your site’s written
procedures for responding to incidents. It is essential to obtain
a “clean bill of health” from a legal perspective before you
actually carry out these procedures.

5.5.2 Formal and Informal Legal Procedures

One of the most important considerations in dealing with
investigative agencies is verifying that the person who calls
asking for information is a legitimate representative from the
agency in question. Unfortunately, many well intentioned people
have unknowingly leaked sensitive information about incidents,
allowed unauthorized people into their systems, etc., because a
caller has masqueraded as an FBI or Secret Service agent. A
similar consideration is using a secure means of communication.
Because many network attackers can easily reroute electronic mail,
avoid using electronic mail to communicate with other agencies (as
well as others dealing with the incident at hand). Non-secured
phone lines (e.g., the phones normally used in the business world)
are also frequent targets for tapping by network intruders, so be
careful!

There is no established set of rules for responding to an incident
when the U.S. Federal Government becomes involved. Except by
court order, no agency can force you to monitor, to disconnect
from the network, to avoid telephone contact with the suspected
attackers, etc. As discussed in section 5.5.1, you should
consult your legal counsel about the matter, especially before
taking an action that your organization has never taken. The
particular agency involved may ask you to leave an attacked
machine on and to monitor activity on this machine, for example.
Your complying with this request will ensure continued cooperation
of the agency–usually the best route towards finding the source
of the network attacks and, ultimately, terminating these attacks.
Additionally, you may need some information or a favor from the
agency involved in the incident. You are likely to get what you
need only if you have been cooperative. Of particular importance
is avoiding unnecessary or unauthorized disclosure of information
about the incident, including any information furnished by the
agency involved. The trust between your site and the agency
hinges upon your ability to avoid compromising the case the agency
will build; keeping “tight lipped” is imperative.

Sometimes your needs and the needs of an investigative agency will
differ. Your site may want to get back to normal business by
closing an attack route, but the investigative agency may want you
to keep this route open. Similarly, your site may want to close a
compromised system down to avoid the possibility of negative
publicity, but again the investigative agency may want you to
continue monitoring. When there is such a conflict, there may be
a complex set of tradeoffs (e.g., interests of your site’s
management, amount of resources you can devote to the problem,
jurisdictional boundaries, etc.). An important guiding principle
is related to what might be called “Internet citizenship” [22,
IAB89, 23] and its responsibilities. Your site can shut a system
down, and this will relieve you of the stress, resource demands,
and danger of negative exposure. The attacker, however, is likely
to simply move on to another system, temporarily leaving others
blind to the attacker’s intention and actions until another path
of attack can be detected. Providing that there is no damage to
your systems and others, the most responsible course of action is
to cooperate with the participating agency by leaving your
compromised system on. This will allow monitoring (and,
ultimately, the possibility of terminating the source of the
threat to systems just like yours). On the other hand, if there
is damage to computers illegally accessed through your system, the
choice is more complicated: shutting down the intruder may prevent
further damage to systems, but might make it impossible to track
down the intruder. If there has been damage, the decision about
whether it is important to leave systems up to catch the intruder
should involve all the organizations affected. Further
complicating the issue of network responsibility is the
consideration that if you do not cooperate with the agency
involved, you will be less likely to receive help from that agency
in the future.

5.6 Documentation Logs

When you respond to an incident, document all details related to the
incident. This will provide valuable information to yourself and
others as you try to unravel the course of events. Documenting all
details will ultimately save you time. If you don’t document every
relevant phone call, for example, you are likely to forget a good
portion of information you obtain, requiring you to contact the
source of information once again. This wastes your and others’
time, something you can ill afford. At the same time, recording
details will provide evidence for prosecution efforts, provided the
case moves in this direction. Documenting an incident will also help
you perform a final assessment of damage (something your management
as well as law enforcement officers will want to know), and will
provide the basis for a follow-up analysis in which you can engage in
a valuable “lessons learned” exercise.

During the initial stages of an incident, it is often infeasible to
determine whether prosecution is viable, so you should document as if
you are gathering evidence for a court case. At a minimum, you
should record:

o All system events (audit records).
o All actions you take (time tagged).
o All phone conversations (including the person with whom
you talked, the date and time, and the content of the
conversation).

The most straightforward way to maintain documentation is keeping a
log book. This allows you to go to a centralized, chronological
source of information when you need it, instead of requiring you to
page through individual sheets of paper. Much of this information is
potential evidence in a court of law. Thus, when you initially
suspect that an incident will result in prosecution or when an
investigative agency becomes involved, you need to regularly (e.g.,
every day) turn in photocopied, signed copies of your logbook (as
well as media you use to record system events) to a document
custodian who can store these copied pages in a secure place (e.g., a
safe). When you submit information for storage, you should in return
receive a signed, dated receipt from the document custodian. Failure
to observe these procedures can result in invalidation of any
evidence you obtain in a court of law.
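
The same discipline applies if part of the log book is kept
electronically: every entry should be time stamped, attributed,
and only ever appended. A minimal Python sketch of such an
append-only entry routine follows; the log file location is an
assumption for the example, and an electronic log does not remove
the need for the signed, dated copies described above.

    import time

    LOGBOOK = "/var/local/incident-logbook.txt"   # assumed location

    def record(author, text):
        """Append one time-stamped, attributed entry to the log book."""
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        with open(LOGBOOK, "a") as f:
            f.write("[%s] %s: %s\n" % (stamp, author, text))

    if __name__ == "__main__":
        record("jsmith", "Called vendor; call-back expected in an hour.")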

6. Establishing Post-Incident Procedures

6.1 Overview

In the wake of an incident, several actions should take place. These
actions can be summarized as follows:

1. An inventory should be taken of the systems’ assets,
i.e., a careful examination should determine how the
system was affected by the incident,

2. The lessons learned as a result of the incident
should be included in a revised security plan to
prevent the incident from recurring,

3. A new risk analysis should be developed in light of the
incident,

4. An investigation and prosecution of the individuals
who caused the incident should commence, if it is
deemed desirable.

All four steps should provide feedback to the site security policy
committee, leading to prompt re-evaluation and amendment of the
current policy.

6.2 Removing Vulnerabilities

Removing all vulnerabilities once an incident has occurred is
difficult. The key to removing vulnerabilities is knowledge and
understanding of the breach. In some cases, it is prudent to remove
all access or functionality as soon as possible, and then restore
normal operation in limited stages. Bear in mind that removing all
access while an incident is in progress will obviously notify all
users, including the alleged problem users, that the administrators
are aware of a problem; this may have a deleterious effect on an
investigation. However, allowing an incident to continue may also
open the likelihood of greater damage, loss, aggravation, or
liability (civil or criminal).

If it is determined that the breach occurred due to a flaw in the
systems’ hardware or software, the vendor (or supplier) and the CERT
should be notified as soon as possible. Including relevant telephone
numbers (also electronic mail addresses and fax numbers) in the site
security policy is strongly recommended. To aid prompt
acknowledgment and understanding of the problem, the flaw should be
described in as much detail as possible, including details about how
to exploit the flaw.

As soon as the breach has occurred, the entire system and all its
components should be considered suspect. System software is the most
probable target. Preparation is key to recovering from a possibly
tainted system. This includes checksumming all tapes from the vendor
using a checksum algorithm which (hopefully) is resistant to
tampering [10]. (See sections 3.9.4.1, 3.9.4.2.) Assuming original
vendor distribution tapes are available, an analysis of all system
files should commence, and any irregularities should be noted and
referred to all parties involved in handling the incident. It can be
very difficult, in some cases, to decide which backup tapes to
recover from; consider that the incident may have continued for
months or years before discovery, and that the suspect may be an
employee of the site, or otherwise have intimate knowledge or access
to the systems. In all cases, the pre-incident preparation will
determine what recovery is possible. In the worst case, restoration
from the original manufacturer’s media and a re-installation of the
systems will be the most prudent solution.
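
As a small illustration of the checksum comparison described
above, the following Python sketch computes a cryptographic digest
for each file named in a manifest and flags mismatches. The
manifest format, the command-line usage, and the choice of SHA-256
are assumptions made for the sketch; the handbook itself only asks
for a checksum algorithm that is resistant to tampering.

    import hashlib, sys

    def digest(path):
        """Return the hex SHA-256 digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(manifest):
        """Each manifest line: '<hex digest>  <path>'."""
        with open(manifest) as f:
            for line in f:
                if not line.strip():
                    continue
                expected, path = line.split(None, 1)
                path = path.strip()
                status = "OK" if digest(path) == expected else "MISMATCH"
                print(status, path)

    if __name__ == "__main__":
        verify(sys.argv[1])    # e.g., python verify.py vendor-manifest.txt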

Review the lessons learned from the incident and always update the
policy and procedures to reflect changes necessitated by the
incident.

6.2.1 Assessing Damage

Before cleanup can begin, the actual system damage must be
discerned. This can be quite time consuming, but it should provide
some insight into the nature of the incident and aid investigation
and prosecution. It is best to compare against previous backups or
original tapes when possible; advance preparation is the key. If
the system supports centralized logging (most do), go back over
the logs and look for abnormalities. If process accounting and
connect time accounting are enabled, look for patterns of system
usage. To a lesser extent, disk usage may shed light on the
incident. Accounting can provide much helpful information in an
analysis of an incident and subsequent prosecution.
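
As one small example of going back over centralized logs, the
sketch below counts failed login attempts per originating host in
a plain-text log. The log location and the line format matched by
the pattern are assumptions made for the example; real accounting
and log formats vary from system to system.

    import re
    from collections import Counter

    LOGFILE = "/var/log/authlog"                        # assumed location
    PATTERN = re.compile(r"failed login .* from (\S+)") # assumed format

    def failed_logins(path):
        """Tally failed login attempts by originating host."""
        counts = Counter()
        with open(path, errors="replace") as f:
            for line in f:
                match = PATTERN.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for host, n in failed_logins(LOGFILE).most_common(10):
            print("%5d  %s" % (n, host))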

6.2.2 Cleanup

Once the damage has been assessed, it is necessary to develop a
plan for system cleanup. In general, bringing up services in
order of demand, so as to minimize user inconvenience, is the
best practice. Understand that the proper recovery procedures for
the system are extremely important and should be specific to the
site.

It may be necessary to go back to the original distribution tapes
and recustomize the system. To facilitate this worst-case
scenario, a record of the original system setup and each
customization change should be kept current with each change to
the system.

6.2.3 Follow up

Once you believe that a system has been restored to a “safe”
state, it is still possible that holes and even traps could be
lurking in the system. In the follow-up stage, the system should
be monitored for items that may have been missed during the
cleanup stage. It would be prudent to utilize some of the tools
mentioned in section 3.9.8.2 (e.g., COPS) as a start. Remember,
these tools don’t replace continual system monitoring and good
systems administration procedures.

6.2.4 Keep a Security Log

As discussed in section 5.6, a security log can be most valuable
during this phase of removing vulnerabilities. There are two
considerations here; the first is to keep logs of the procedures
that have been used to make the system secure again. This should
include command procedures (e.g., shell scripts) that can be run
on a periodic basis to recheck the security. Second, keep logs of
important system events. These can be referenced when trying to
determine the extent of the damage of a given incident.
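
A minimal sketch of the kind of periodic recheck procedure
mentioned above, written here in Python rather than as a shell
script: it verifies that a few security-relevant files keep their
expected permissions and appends any findings to a site security
log. The file list, the expected modes, and the log location are
illustrative assumptions.

    import os, stat, time

    LOG = "/var/local/security.log"     # assumed security log location
    EXPECTED = {                        # illustrative files and modes
        "/etc/passwd": 0o644,
        "/etc/shadow": 0o600,
    }

    def log(message):
        """Append a time-stamped entry to the security log."""
        with open(LOG, "a") as f:
            f.write("%s %s\n" % (time.strftime("%Y-%m-%d %H:%M:%S"),
                                 message))

    def recheck():
        for path, expected_mode in EXPECTED.items():
            try:
                actual = stat.S_IMODE(os.stat(path).st_mode)
            except OSError as err:
                log("cannot stat %s: %s" % (path, err))
                continue
            if actual != expected_mode:
                log("permission drift on %s: %o (expected %o)"
                    % (path, actual, expected_mode))

    if __name__ == "__main__":
        recheck()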

6.3 Capturing Lessons Learned

6.3.1 Understand the Lesson

After an incident, it is prudent to write a report describing the
incident, method of discovery, correction procedure, monitoring
procedure, and a summary of lesson learned. This will aid in the
clear understanding of the problem. Remember, it is difficult to
learn from an incident if you don’t understand the source.

6.3.2 Resources

6.3.2.1 Other Security Devices, Methods

Security is a dynamic, not a static, process. Each site depends
on the nature of the security available to it and on the array
of devices and methods that help promote security. Keeping up
with the security area of the computer industry and its methods
will help a security manager take advantage of the latest
technology.

6.3.2.2 Repository of Books, Lists, Information Sources

Keep an on site collection of books, lists, information
sources, etc., as guides and references for securing the
system. Keep this collection up to date. Remember, as systems
change, so do security methods and problems.

6.3.2.3 Form a Subgroup

Form a subgroup of system administration personnel that will be
the core security staff. This will allow discussions of
security problems and multiple views of the site’s security
issues. This subgroup can also act to develop the site
security policy and make suggested changes as necessary to
ensure site security.

6.4 Upgrading Policies and Procedures

6.4.1 Establish Mechanisms for Updating Policies, Procedures,
and Tools

If an incident is based on poor policy, then unless the policy is
changed, one is doomed to repeat the past. Once a site has
recovered from an incident, site policy and procedures should be
reviewed to encompass changes to prevent similar incidents. Even
without an incident, it would be prudent to review policies and
procedures on a regular basis. Reviews are imperative due to
today’s changing computing environments.

6.4.2 Problem Reporting Procedures

A problem reporting procedure should be implemented to describe,
in detail, the incident and the solutions to the incident. Each
incident should be reviewed by the site security subgroup to allow
understanding of the incident with possible suggestions to the
site policy and procedures.

7. References

[1] Quarterman, J., “The Matrix: Computer Networks and Conferencing
Systems Worldwide”, Pg. 278, Digital Press, Bedford, MA, 1990.

[2] Brand, R., “Coping with the Threat of Computer Security
Incidents: A Primer from Prevention through Recovery”, R. Brand,
available on-line from: cert.sei.cmu.edu:/pub/info/primer, 8 June
1990.

[3] Fites, M., Kratz, P. and A. Brebner, “Control and Security of
Computer Information Systems”, Computer Science Press, 1989.

[4] Johnson, D., and J. Podesta, “Formulating a Company Policy on
Access to and Use and Disclosure of Electronic Mail on Company
Computer Systems”, Available from: The Electronic Mail
Association (EMA) 1555 Wilson Blvd, Suite 555, Arlington VA
22209, (703) 522-7111, 22 October 1990.

[5] Curry, D., “Improving the Security of Your UNIX System”, SRI
International Report ITSTD-721-FR-90-21, April 1990.

[6] Cheswick, B., “The Design of a Secure Internet Gateway”,
Proceedings of the Summer Usenix Conference, Anaheim, CA, June
1990.

[7] Linn, J., “Privacy Enhancement for Internet Electronic Mail: Part
I — Message Encipherment and Authentication Procedures”, RFC
1113, IAB Privacy Task Force, August 1989.

[8] Kent, S., and J. Linn, “Privacy Enhancement for Internet
Electronic Mail: Part II — Certificate-Based Key Management”,
RFC 1114, IAB Privacy Task Force, August 1989.

[9] Linn, J., “Privacy Enhancement for Internet Electronic Mail: Part
III — Algorithms, Modes, and Identifiers”, RFC 1115, IAB Privacy
Task Force, August 1989.

[10] Merkle, R., “A Fast Software One Way Hash Function”, Journal of
Cryptology, Vol. 3, No. 1.

[11] Postel, J., “Internet Protocol – DARPA Internet Program Protocol
Specification”, RFC 791, DARPA, September 1981.

[12] Postel, J., “Transmission Control Protocol – DARPA Internet
Program Protocol Specification”, RFC 793, DARPA, September 1981.

[13] Postel, J., “User Datagram Protocol”, RFC 768, USC/Information
Sciences Institute, 28 August 1980.

[14] Mogul, J., “Simple and Flexible Datagram Access Controls for
UNIX-based Gateways”, Digital Western Research Laboratory
Research Report 89/4, March 1989.

[15] Bellovin, S., and M. Merritt, “Limitations of the Kerberos
Authentication System”, Computer Communications Review, October
1990.

[16] Pfleeger, C., “Security in Computing”, Prentice-Hall, Englewood
Cliffs, N.J., 1989.

[17] Parker, D., Swope, S., and B. Baker, “Ethical Conflicts:
Information and Computer Science, Technology and Business”, QED
Information Sciences, Inc., Wellesley, MA.

[18] Forester, T., and P. Morrison, “Computer Ethics: Tales and
Ethical Dilemmas in Computing”, MIT Press, Cambridge, MA, 1990.

[19] Postel, J., and J. Reynolds, “Telnet Protocol Specification”, RFC
854, USC/Information Sciences Institute, May 1983.

[20] Postel, J., and J. Reynolds, “File Transfer Protocol”, RFC 959,
USC/Information Sciences Institute, October 1985.

[21] Postel, J., Editor, “IAB Official Protocol Standards”, RFC 1200,
IAB, April 1991.

[22] Internet Activities Board, “Ethics and the Internet”, RFC 1087,
Internet Activities Board, January 1989.

[23] Pethia, R., Crocker, S., and B. Fraser, “Policy Guidelines for
the Secure Operation of the Internet”, CERT, TIS, CERT, RFC in
preparation.

[24] Computer Emergency Response Team (CERT/CC), “Unauthorized
Password Change Requests”, CERT Advisory CA-91:03, April 1991.

[25] Computer Emergency Response Team (CERT/CC), “TELNET Breakin
Warning”, CERT Advisory CA-89:03, August 1989.

[26] CCITT, Recommendation X.509, “The Directory: Authentication
Framework”, Annex C.

[27] Farmer, D., and E. Spafford, “The COPS Security Checker System”,
Proceedings of the Summer 1990 USENIX Conference, Anaheim, CA,
Pgs. 165-170, June 1990.

8. Annotated Bibliography

The intent of this annotated bibliography is to offer a
representative collection of information resources that will help
the user of this handbook. It is meant to provide a starting point
further research in the security area. Included are references to
other sources of information for those who wish to pursue issues of
the computer security environment.

8.1 Computer Law

[ABA89]
American Bar Association, Section of Science and
Technology, “Guide to the Prosecution of Telecommunication
Fraud by the Use of Computer Crime Statutes”, American Bar
Association, 1989.

[BENDER]
Bender, D., “Computer Law: Evidence and Procedure”,
M. Bender, New York, NY, 1978-present.

Kept up to date with supplements. The years 1978-1984 focus
on computer law, evidence, and procedures; the years from 1984
to the present focus on general computer law. Bibliographical
references and an index are included.

[BLOOMBECKER]
Bloombecker, B., “Spectacular Computer Crimes”, Dow Jones-
Irwin, Homewood, IL. 1990.

[CCH]
Commerce Clearing House, “Guide to Computer Law”, (Topical
Law Reports), Chicago, IL., 1989.

Court cases and decisions rendered by federal and state
courts throughout the United States on federal and state
computer law. Includes Case Table and Topical Index.

[CONLY]
Conly, C., “Organizing for Computer Crime Investigation and
Prosecution”, U.S. Dept. of Justice, Office of Justice
Programs, Under Contract Number OJP-86-C-002, National
Institute of Justice, Washington, DC, July 1989.

[FENWICK]
Fenwick, W., Chair, “Computer Litigation, 1985: Trial
Tactics and Techniques”, Litigation Course Handbook
Series No. 280, Prepared for distribution at the
Computer Litigation, 1985: Trial Tactics and
Techniques Program, February-March 1985.

[GEMIGNANI]
Gemignani, M., “Viruses and Criminal Law”, Communications
of the ACM, Vol. 32, No. 6, Pgs. 669-671, June 1989.

[HUBAND]
Huband, F., and R. Shelton, Editors, “Protection of
Computer Systems and Software: New Approaches for Combating
Theft of Software and Unauthorized Intrusion”, Papers
presented at a workshop sponsored by the National Science
Foundation, 1986.

[MCEWEN]
McEwen, J., “Dedicated Computer Crime Units”, Report
Contributors: D. Fester and H. Nugent, Prepared for the
National Institute of Justice, U.S. Department of Justice,
by Institute for Law and Justice, Inc., under contract number
OJP-85-C-006, Washington, DC, 1989.

[PARKER]
Parker, D., “Computer Crime: Criminal Justice Resource
Manual”, U.S. Dept. of Justice, National Institute of Justice,
Office of Justice Programs, Under Contract Number
OJP-86-C-002, Washington, D.C., August 1989.

[SHAW]
Shaw, E., Jr., “Computer Fraud and Abuse Act of 1986,
Congressional Record (3 June 1986), Washington, D.C.,
3 June 1986.

[TRIBLE]
Trible, P., “The Computer Fraud and Abuse Act of 1986”,
U.S. Senate Committee on the Judiciary, 1986.

8.2 Computer Security

[CAELLI]
Caelli, W., Editor, “Computer Security in the Age of
Information”, Proceedings of the Fifth IFIP International
Conference on Computer Security, IFIP/Sec ’88.

[CARROLL]
Carroll, J., “Computer Security”, 2nd Edition, Butterworth
Publishers, Stoneham, MA, 1987.

[COOPER]
Cooper, J., “Computer and Communications Security:
Strategies for the 1990s”, McGraw-Hill, 1989.

[BRAND]
Brand, R., “Coping with the Threat of Computer Security
Incidents: A Primer from Prevention through Recovery”,
R. Brand, 8 June 1990.

As computer security becomes a more important issue in
modern society, it begins to warrant a systematic approach.
The vast majority of the computer security problems and the
costs associated with them can be prevented with simple
inexpensive measures. The most important and cost
effective of these measures are available in the prevention
and planning phases. These methods are presented in this
paper, followed by a simplified guide to incident
handling and recovery. Available on-line from:
cert.sei.cmu.edu:/pub/info/primer.

[CHESWICK]
Cheswick, B., “The Design of a Secure Internet Gateway”,
Proceedings of the Summer Usenix Conference, Anaheim, CA,
June 1990.

Brief abstract (slight paraphrase from the original
abstract): AT&T maintains a large internal Internet that
needs to be protected from outside attacks, while
providing useful services between the internal network
and the outside Internet.
This paper describes AT&T’s Internet gateway. This
gateway passes mail and many of the common Internet
services between AT&T internal machines and the Internet.
This is accomplished without IP connectivity using a pair
of machines: a trusted internal machine and an untrusted
external gateway. These are connected by a private link.
The internal machine provides a few carefully-guarded
services to the external gateway. This configuration
helps protect the internal internet even if the external
machine is fully compromised.

This is a very useful and interesting design. Most
firewall gateway systems rely on a system that, if
compromised, could allow access to the machines behind
the firewall. Also, most firewall systems require users
who want access to Internet services to have accounts on
the firewall machine. AT&T’s design allows AT&T internal
internet users access to the standard services of TELNET and
FTP from their own workstations without accounts on
the firewall machine. A very useful paper that shows
how to maintain some of the benefits of Internet
connectivity while still maintaining strong
security.

[CURRY]
Curry, D., “Improving the Security of Your UNIX System”,
SRI International Report ITSTD-721-FR-90-21, April 1990.

This paper describes measures that you, as a system
administrator can take to make your UNIX system(s) more
secure. Oriented primarily at SunOS 4.x, most of the
information covered applies equally well to any Berkeley
UNIX system with or without NFS and/or Yellow Pages (NIS).
Some of the information can also be applied to System V,
although this is not a primary focus of the paper. A very
useful reference, this is also available on the Internet in
various locations, including the directory
cert.sei.cmu.edu:/pub/info.

[FITES]
Fites, M., Kratz, P. and A. Brebner, “Control and
Security of Computer Information Systems”, Computer Science
Press, 1989.

This book serves as a good guide to the issues encountered
in forming computer security policies and procedures. The
book is designed as a textbook for an introductory course
in information systems security.

The book is divided into five sections: Risk Management (I),
Safeguards: security and control measures, organizational
and administrative (II), Safeguards: Security and Control
Measures, Technical (III), Legal Environment and
Professionalism (IV), and CICA Computer Control Guidelines
(V).

The book is particularly notable for its straight-forward
approach to security, emphasizing that common sense is the
first consideration in designing a security program. The
authors note that there is a tendency to look to more
technical solutions to security problems while overlooking
organizational controls which are often cheaper and much
more effective. 298 pages, including references and index.

[GARFINKEL]
Garfinkel, S., and E. Spafford, “Practical Unix Security”,
O’Reilly & Associates, ISBN 0-937175-72-2, May 1991.

Approx 450 pages, $29.95. Orders: 1-800-338-6887
(US & Canada), 1-707-829-0515 (Europe), email: nuts@ora.com

This is one of the most useful books available on Unix
security. The first part of the book covers standard Unix
and Unix security basics, with particular emphasis on
passwords. The second section covers enforcing security on
the system. Of particular interest to the Internet user are
the sections on network security, which address many
of the common security problems that afflict Internet Unix
users. Four chapters deal with handling security incidents,
and the book concludes with discussions of encryption,
physical security, and useful checklists and lists of
resources. The book lives up to its name; it is filled with
specific references to possible security holes, files to
check, and things to do to improve security. This
book is an excellent complement to this handbook.

[GREENIA90]
Greenia, M., “Computer Security Information Sourcebook”,
Lexikon Services, Sacramento, CA, 1989.

A manager’s guide to computer security. Contains a
sourcebook of key reference materials including
access control and computer crimes bibliographies.

[HOFFMAN]
Hoffman, L., “Rogue Programs: Viruses, Worms, and
Trojan Horses”, Van Nostrand Reinhold, NY, 1990.
(384 pages, includes bibliographical references and index.)

[JOHNSON]
Johnson, D., and J. Podesta, “Formulating A Company Policy
on Access to and Use and Disclosure of Electronic Mail on
Company Computer Systems”.

A white paper prepared for the EMA, written by two experts
in privacy law. Gives background on the issues, and presents
some policy options.

Available from: The Electronic Mail Association (EMA)
1555 Wilson Blvd, Suite 555, Arlington, VA, 22209.
(703) 522-7111.

[KENT]
Kent, Stephen, “E-Mail Privacy for the Internet: New Software
and Strict Registration Procedures will be Implemented this
Year”, Business Communications Review, Vol. 20, No. 1,
Pg. 55, 1 January 1990.

[LU]
Lu, W., and M. Sundareshan, “Secure Communication in
Internet Environments: A Hierachical Key Management Scheme
for End-to-End Encryption”, IEEE Transactions on
Communications, Vol. 37, No. 10, Pg. 1014, 1 October 1989.

[LU1]
Lu, W., and M. Sundareshan, “A Model for Multilevel Security
in Computer Networks”, IEEE Transactions on Software
Engineering, Vol. 16, No. 6, Page 647, 1 June 1990.

[NSA]
National Security Agency, “Information Systems Security
Products and Services Catalog”, NSA, Quarterly Publication.

NSA’s catalogue contains chapters on: Endorsed Cryptographic
Products List; NSA Endorsed Data Encryption Standard (DES)
Products List; Protected Services List; Evaluated Products
List; Preferred Products List; and Endorsed Tools List.

The catalogue is available from the Superintendent of
Documents, U.S. Government Printing Office, Washington,
D.C. One may place telephone orders by calling:
(202) 783-3238.

[OTA]
United States Congress, Office of Technology Assessment,
“Defending Secrets, Sharing Data: New Locks and Keys for
Electronic Information”, OTA-CIT-310, October 1987.

This report, prepared for a congressional committee considering
Federal policy on the protection of electronic information, is
interesting because of the issues it raises regarding the
impact of technology used to protect information. It also
serves as a reasonable introduction to the various encryption
and information protection mechanisms. 185 pages. Available
from the U.S. Government Printing Office.

[PALMER]
Palmer, I., and G. Potter, “Computer Security Risk
Management”, Van Nostrand Reinhold, NY, 1989.

[PFLEEGER]
Pfleeger, C., “Security in Computing”, Prentice-Hall,
Englewood Cliffs, NJ, 1989.

A general textbook in computer security, this book provides an
excellent and very readable introduction to classic computer
security problems and solutions, with a particular emphasis on
encryption. The encryption coverage serves as a good
introduction to the subject. Other topics covered include
building secure programs and systems, security of databases,
personal computer security, network and communications
security, physical security, risk analysis and security
planning, and legal and ethical issues. 538 pages including
index and bibliography.

[SHIREY]
Shirey, R., “Defense Data Network Security Architecture”,
Computer Communication Review, Vol. 20, No. 2, Page 66,
1 April 1990.

[SPAFFORD]
Spafford, E., Heaphy, K., and D. Ferbrache, “Computer
Viruses: Dealing with Electronic Vandalism and Programmed
Threats”, ADAPSO, 1989. (109 pages.)

This is a good general reference on computer viruses and
related concerns. In addition to describing viruses in
some detail, it also covers more general security issues,
legal recourse in case of security problems, and includes
lists of laws, journals focused on computer security,
and other security-related resources.

Available from: ADAPSO, 1300 N. 17th St, Suite 300,
Arlington VA 22209. (703) 522-5055.

[STOLL88]
Stoll, C., “Stalking the Wily Hacker”, Communications
of the ACM, Vol. 31, No. 5, Pgs. 484-497, ACM,
New York, NY, May 1988.

This article describes some of the technical means used
to trace the intruder that was later chronicled in
“Cuckoo’s Egg” (see below).

[STOLL89]
Stoll, C., “The Cuckoo’s Egg”, ISBN 0-385-24946-2,
Doubleday, 1989.

Clifford Stoll, an astronomer turned UNIX System
Administrator, recounts an exciting, true story of how he
tracked a computer intruder through the maze of American
military and research networks. This book is easy to
understand and can serve as an interesting introduction to
the world of networking. Jon Postel says in a book review,

“[this book] … is absolutely essential reading for anyone
that uses or operates any computer connected to the Internet
or any other computer network.”

[VALLA]
Vallabhaneni, S., “Auditing Computer Security: A Manual with
Case Studies”, Wiley, New York, NY, 1989.

8.3 Ethics

[CPSR89]
Computer Professionals for Social Responsibility, “CPSR
Statement on the Computer Virus”, CPSR, Communications of the
ACM, Vol. 32, No. 6, Pg. 699, June 1989.

This memo is a statement on the Internet Computer Virus
by the Computer Professionals for Social Responsibility
(CPSR).

[DENNING]
Denning, Peter J., Editor, “Computers Under Attack:
Intruders, Worms, and Viruses”, ACM Press, 1990.

A collection of 40 pieces divided into six sections: the
emergence of worldwide computer networks, electronic break-ins,
worms, viruses, counterculture (articles examining the world
of the “hacker”), and finally a section discussing social,
legal, and ethical considerations.

A thoughtful collection that addresses the phenomenon of
attacks on computers. This includes a number of previously
published articles and some new ones. The previously
published ones are well chosen, and include some references
that might be otherwise hard to obtain. This book is a key
reference to computer security threats that have generated
much of the concern over computer security in recent years.

[ERMANN]
Ermann, D., Williams, M., and C. Gutierrez, Editors,
“Computers, Ethics, and Society”, Oxford University Press,
NY, 1990. (376 pages, includes bibliographical references).

[FORESTER]
Forester, T., and P. Morrison, “Computer Ethics: Tales and
Ethical Dilemmas in Computing”, MIT Press, Cambridge, MA,
1990. (192 pages including index.)

From the preface: “The aim of this book is two-fold: (1) to
describe some of the problems created for society by computers,
and (2) to show how these problems present ethical dilemmas
for computer professionals and computer users.

The problems created by computers arise, in turn, from two
main sources: from hardware and software malfunctions and
from misuse by human beings. We argue that computer systems
by their very nature are insecure, unreliable, and
unpredictable — and that society has yet to come to terms
with the consequences. We also seek to show how society
has become newly vulnerable to human misuse of computers in
the form of computer crime, software theft, hacking, the
creation of viruses, invasions of privacy, and so on.”

The eight chapters include “Computer Crime”, “Software
Theft”, “Hacking and Viruses”, “Unreliable Computers”,
“The Invasion of Privacy”, “AI and Expert Systems”,
and “Computerizing the Workplace.” Includes extensive
notes on sources and an index.

[GOULD]
Gould, C., Editor, “The Information Web: Ethical and Social
Implications of Computer Networking”, Westview Press,
Boulder, CO, 1989.

[IAB89]
Internet Activities Board, “Ethics and the Internet”,
RFC 1087, IAB, January 1989. Also appears in the
Communications of the ACM, Vol. 32, No. 6, Pg. 710,
June 1989.

This memo is a statement of policy by the Internet
Activities Board (IAB) concerning the proper use of
the resources of the Internet. Available on-line on
host ftp.nisc.sri.com, directory rfc, filename rfc1087.txt.
Also available on host nis.nsf.net, directory RFC,
filename RFC1087.TXT-1.

[MARTIN]
Martin, M., and R. Schinzinger, “Ethics in Engineering”,
McGraw Hill, 2nd Edition, 1989.

[MIT89]
Massachusetts Institute of Technology, “Teaching Students
About Responsible Use of Computers”, MIT, 1985-1986. Also
reprinted in the Communications of the ACM, Vol. 32, No. 6,
Pg. 704, Athena Project, MIT, June 1989.

This memo is a statement of policy by the Massachusetts
Institute of Technology (MIT) on the responsible use
of computers.

[NIST]
National Institute of Standards and Technology, “Computer
Viruses and Related Threats: A Management Guide”, NIST
Special Publication 500-166, August 1989.

[NSF88]
National Science Foundation, “NSF Poses Code of Networking
Ethics”, Communications of the ACM, Vol. 32, No. 6, Pg. 688,
June 1989. Also appears in the minutes of the regular
meeting of the Division Advisory Panel for Networking and
Communications Research and Infrastructure, Dave Farber,
Chair, November 29-30, 1988.

This memo is a statement of policy by the National Science
Foundation (NSF) concerning the ethical use of the Internet.

[PARKER90]
Parker, D., Swope, S., and B. Baker, “Ethical Conflicts:
Information and Computer Science, Technology and Business”,
QED Information Sciences, Inc., Wellesley, MA. (245 pages).

Additional publications on Ethics:

The University of New Mexico (UNM)

The UNM has a collection of ethics documents. Included are
legislation from several states and policies from many
institutions.

Access is via FTP, IP address ariel.unm.edu. Look in the
directory /ethics.

8.4 The Internet Worm

[BROCK]
Brock, J., “November 1988 Internet Computer Virus and the
Vulnerability of National Telecommunications Networks to
Computer Viruses”, GAO/T-IMTEC-89-10, Washington, DC,
20 July 1989.

Testimonial statement of Jack L. Brock, Director, U. S.
Government Information before the Subcommittee on
Telecommunications and Finance, Committee on Energy and
Commerce, House of Representatives.

[EICHIN89]
Eichin, M., and J. Rochlis, “With Microscope and Tweezers:
An Analysis of the Internet Virus of November 1988”,
Massachusetts Institute of Technology, February 1989.

Provides a detailed dissection of the worm program. The
paper discusses the major points of the worm program, then
reviews strategies, chronology, lessons, open issues, and
acknowledgments; also included are a detailed appendix on
the worm program, subroutine by subroutine, an appendix on
the cast of characters, and a reference section.

[EISENBERG89]
Eisenberg, T., D. Gries, J. Hartmanis, D. Holcomb,
M. Lynn, and T. Santoro, “The Computer Worm”, Cornell
University, 6 February 1989.

A Cornell University Report presented to the Provost of the
University on 6 February 1989 on the Internet Worm.

[GAO]
U.S. General Accounting Office, “Computer Security – Virus
Highlights Need for Improved Internet Management”, United
States General Accounting Office, Washington, DC, 1989.

This 36 page report (GAO/IMTEC-89-57), by the U.S.
General Accounting Office, describes the Internet worm
and its effects. It gives a good overview of the various
U.S. agencies involved in the Internet today and their
concerns vis-a-vis computer security and networking.

Available on-line on host nnsc.nsf.net, directory
pub, filename GAO_RPT; and on nis.nsf.net, directory nsfnet,
filename GAO_RPT.TXT.

[REYNOLDS89]
Reynolds, J., “The Helminthiasis of the Internet”, RFC 1135,
USC/Information Sciences Institute, Marina del Rey,
CA, December 1989.

This report looks back at the helminthiasis (infestation
with, or disease caused by parasitic worms) of the
Internet that was unleashed the evening of 2 November 1988.
This document provides a glimpse at the infection, its
festering, and cure. The impact of the worm on the Internet
community, ethics statements, the role of the news media,
crime in the computer world, and future prevention are
discussed. A documentation review presents four publications
that describe in detail this particular parasitic computer
program. Reference and bibliography sections are also
included. Available on-line on host ftp.nisc.sri.com
directory rfc, filename rfc1135.txt. Also available on
host nis.nsf.net, directory RFC, filename RFC1135.TXT-1.

[SEELEY89]
Seeley, D., “A Tour of the Worm”, Proceedings of 1989
Winter USENIX Conference, Usenix Association, San Diego, CA,
February 1989.

Details are presented as a “walk thru” of this particular
worm program. The paper opens with an abstract and
introduction, followed by a detailed chronology of events upon
the discovery of the worm, an overview, the internals of the
worm, personal opinions, and a conclusion.

[SPAFFORD88]
Spafford, E., “The Internet Worm Program: An
Analysis”, Computer Communication Review, Vol. 19,
No. 1, ACM SIGCOM, January 1989. Also issued as Purdue
CS Technical Report CSD-TR-823, 28 November 1988.

Describes the infection of the Internet as a worm
program that exploited flaws in utility programs in
UNIX based systems. The report gives a detailed
description of the components of the worm program:
data and functions. Spafford focuses his study on two
completely independent reverse-compilations of the
worm and a version disassembled to VAX assembly language.

[SPAFFORD89]
Spafford, G., “An Analysis of the Internet Worm”,
Proceedings of the European Software Engineering
Conference 1989, Warwick, England, September 1989.
Proceedings published by Springer-Verlag as: Lecture
Notes in Computer Science #387. Also issued
as Purdue Technical Report #CSD-TR-933.

8.5 National Computer Security Center (NCSC)

All NCSC publications, approved for public release, are available
from the NCSC Superintendent of Documents.

NCSC = National Computer Security Center
9800 Savage Road
Ft Meade, MD 20755-6000

CSC = Computer Security Center:
an older name for the NCSC

NTISS = National Telecommunications and
Information Systems Security
NTISS Committee, National Security Agency
Ft Meade, MD 20755-6000

[CSC]
Department of Defense, “Password Management Guideline”,
CSC-STD-002-85, 12 April 1985, 31 pages.

The security provided by a password system depends on
the passwords being kept secret at all times. Thus, a
password is vulnerable to compromise whenever it is used,
stored, or even known. In a password-based authentication
mechanism implemented on an ADP system, passwords are
vulnerable to compromise due to five essential aspects
of the password system: 1) a password must be initially
assigned to a user when enrolled on the ADP system;
2) a user’s password must be changed periodically;
3) the ADP system must maintain a ‘password
database’; 4) users must remember their passwords; and
5) users must enter their passwords into the ADP system at
authentication time. This guideline prescribes steps to be
taken to minimize the vulnerability of passwords in each of
these circumstances.

[NCSC1]
NCSC, “A Guide to Understanding AUDIT in Trusted Systems”,
NCSC-TG-001, Version-2, 1 June 1988, 25 pages.

Audit trails are used to detect and deter penetration of
a computer system and to reveal usage that identifies
misuse. At the discretion of the auditor, audit trails
may be limited to specific events or may encompass all of
the activities on a system. Although not required by
the criteria, it should be possible for the target of the
audit mechanism to be either a subject or an object. That
is to say, the audit mechanism should be capable of
monitoring every time John accessed the system as well as
every time the nuclear reactor file was accessed; and
likewise every time John accessed the nuclear reactor
file.

[NCSC2]
NCSC, “A Guide to Understanding DISCRETIONARY ACCESS CONTROL
in Trusted Systems”, NCSC-TG-003, Version-1, 30 September
1987, 29 pages.

Discretionary control is the most common type of access
control mechanism implemented in computer systems today.
The basis of this kind of security is that an individual
user, or program operating on the user’s behalf, is
allowed to specify explicitly the types of access other
users (or programs executing on their behalf) may have to
information under the user’s control. […] Discretionary
controls are not a replacement for mandatory controls. In
any environment in which information is protected,
discretionary security provides for a finer granularity of
control within the overall constraints of the mandatory
policy.

[NCSC3]
NCSC, “A Guide to Understanding CONFIGURATION MANAGEMENT
in Trusted Systems”, NCSC-TG-006, Version-1, 28 March 1988,
31 pages.

Configuration management consists of four separate tasks:
identification, control, status accounting, and auditing.
For every change that is made to an automated data
processing (ADP) system, the design and requirements of the
changed version of the system should be identified. The
control task of configuration management is performed
by subjecting every change to documentation, hardware, and
software/firmware to review and approval by an authorized
authority. Configuration status accounting is responsible
for recording and reporting on the configuration of the
product throughout the change. Finally, through the process
of a configuration audit, the completed change can be
verified to be functionally correct, and for trusted
systems, consistent with the security policy of the system.

[NTISS]
NTISS, “Advisory Memorandum on Office Automation Security
Guideline”, NTISSAM COMPUSEC/1-87, 16 January 1987,
58 pages.

This document provides guidance to users, managers, security
officers, and procurement officers of Office Automation
Systems. Areas addressed include: physical security,
personnel security, procedural security, hardware/software
security, emanations security (TEMPEST), and communications
security for stand-alone OA Systems, OA Systems
used as terminals connected to mainframe computer systems,
and OA Systems used as hosts in a Local Area Network (LAN).
Differentiation is made between those Office Automation
Systems equipped with removable storage media only (e.g.,
floppy disks, cassette tapes, removable hard disks) and
those Office Automation Systems equipped with fixed media
(e.g., Winchester disks).

Additional NCSC Publications:

[NCSC4]
National Computer Security Center, “Glossary of Computer
Security Terms”, NCSC-TG-004, NCSC, 21 October 1988.

[NCSC5]
National Computer Security Center, “Trusted
Computer System Evaluation Criteria”, DoD 5200.28-STD,
CSC-STD-001-83, NCSC, December 1985.

[NCSC7]
National Computer Security Center, “Guidance for
Applying the Department of Defense Trusted Computer System
Evaluation Criteria in Specific Environments”,
CSC-STD-003-85, NCSC, 25 June 1985.

[NCSC8]
National Computer Security Center, “Technical Rationale
Behind CSC-STD-003-85: Computer Security Requirements”,
CSC-STD-004-85, NCSC, 25 June 85.

[NCSC9]
National Computer Security Center, “Magnetic Remanence
Security Guideline”, CSC-STD-005-85, NCSC, 15 November 1985.

This guideline is tagged as a “For Official Use Only”
exemption under Section 6, Public Law 86-36 (50 U.S. Code
402). Distribution authorized to U.S. Government agencies
and their contractors to protect unclassified technical,
operational, or administrative data relating to operations
of the National Security Agency.

[NCSC10]
National Computer Security Center, “Guidelines for Formal
Verification Systems”, Shipping list no.: 89-660-P, The
Center, Fort George G. Meade, MD, 1 April 1990.

[NCSC11]
National Computer Security Center, “Glossary of Computer
Security Terms”, Shipping list no.: 89-254-P, The Center,
Fort George G. Meade, MD, 21 October 1988.

[NCSC12]
National Computer Security Center, “Trusted UNIX Working
Group (TRUSIX) rationale for selecting access control
list features for the UNIX system”, Shipping list no.:
90-076-P, The Center, Fort George G. Meade, MD, 1990.

[NCSC13]
National Computer Security Center, “Trusted Network
Interpretation”, NCSC-TG-005, NCSC, 31 July 1987.

[NCSC14]
Tinto, M., “Computer Viruses: Prevention, Detection, and
Treatment”, National Computer Security Center C1
Technical Report C1-001-89, June 1989.

[NCSC15]
National Computer Security Conference, “12th National
Computer Security Conference: Baltimore Convention Center,
Baltimore, MD, 10-13 October, 1989: Information Systems
Security, Solutions for Today – Concepts for Tomorrow”,
National Institute of Standards and Technology and the
National Computer Security Center, 1989.

8.6 Security Checklists

[AUCOIN]
Aucoin, R., “Computer Viruses: Checklist for Recovery”,
Computers in Libraries, Vol. 9, No. 2, Pg. 4,
1 February 1989.

[WOOD]
Wood, C., Banks, W., Guarro, S., Garcia, A., Hampel, V.,
and H. Sartorio, “Computer Security: A Comprehensive Controls
Checklist”, John Wiley and Sons, Interscience Publication,
1987.

8.7 Additional Publications

Defense Data Network’s Network Information Center (DDN NIC)

The DDN NIC maintains DDN Security bulletins and DDN Management
bulletins online on the machine: NIC.DDN.MIL. They are available
via anonymous FTP. The DDN Security bulletins are in the
directory: SCC, and the DDN Management bulletins are in the
directory: DDN-NEWS.

For additional information, you may send a message to:
NIC@NIC.DDN.MIL, or call the DDN NIC at: 1-800-235-3155.

[DDN88]
Defense Data Network, “BSD 4.2 and 4.3 Software Problem
Resolution”, DDN MGT Bulletin #43, DDN Network Information
Center, 3 November 1988.

A Defense Data Network Management Bulletin announcement
on the 4.2bsd and 4.3bsd software fixes prompted by the
Internet worm.

[DDN89]
DCA DDN Defense Communications System, “DDN Security
Bulletin 03”, DDN Security Coordination Center,
17 October 1989.

IEEE Proceedings

[IEEE]
“Proceedings of the IEEE Symposium on Security
and Privacy”, published annually.

IEEE Proceedings are available from:

Computer Society of the IEEE
P.O. Box 80452
Worldway Postal Center
Los Angeles, CA 90080

Other Publications:

Computer Law and Tax Report
Computers and Security
Security Management Magazine
Journal of Information Systems Management
Data Processing & Communications Security
SIG Security, Audit & Control Review

9. Acknowledgments

Thanks to the SSPHWG’s illustrious “Outline Squad”, who assembled at
USC/Information Sciences Institute on 12-June-90: Ray Bates (ISI),
Frank Byrum (DEC), Michael A. Contino (PSU), Dave Dalva (Trusted
Information Systems, Inc.), Jim Duncan (Penn State Math Department),
Bruce Hamilton (Xerox), Sean Kirkpatrick (Unisys), Tom Longstaff
(CIAC/LLNL), Fred Ostapik (SRI/NIC), Keith Pilotti (SAIC), and Bjorn
Satdeva (/sys/admin, inc.).

Many thanks to Rich Pethia and the Computer Emergency Response Team
(CERT); much of the work by Paul Holbrook was done while he was
working for CERT. Rich also provided a very thorough review of this
document. Thanks also to Jon Postel and USC/Information Sciences
Institute for contributing facilities and moral support to this
effort.

Last, but NOT least, we would like to thank members of the SSPHWG and
Friends for their additional contributions: Vint Cerf (CNRI),
Dave Grisham (UNM), Nancy Lee Kirkpatrick (Typist Extraordinaire),
Chris McDonald (WSMR), H. Craig McKee (Mitre), Gene Spafford (Purdue),
and Aileen Yuan (Mitre).

10. Security Considerations

If security considerations had not been so widely ignored in the
Internet, this memo would not have been possible.

11. Authors’ Addresses

J. Paul Holbrook
CICNet, Inc.
2901 Hubbard
Ann Arbor, MI 48105

Phone: (313) 998-7680
EMail: holbrook@cic.net

Joyce K. Reynolds
University of Southern California
Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Phone: (213) 822-1511
EMail: JKREY@ISI.EDU


Defense Data Network Security Bulletin #4

**********************************************************************
DDN Security Bulletin 04         DCA DDN Defense Communications System
23 Oct 89               Published by: DDN Security Coordination Center
                                     (SCC@NIC.DDN.MIL)  (800) 235-3155

                        DEFENSE  DATA  NETWORK
                          SECURITY  BULLETIN

The DDN  SECURITY BULLETIN  is distributed  by the  DDN SCC  (Security
Coordination Center) under  DCA contract as  a means of  communicating
information on network and host security exposures, fixes, &  concerns
to security & management personnel at DDN facilities.  Back issues may
be  obtained  via  FTP  (or  Kermit)  from  NIC.DDN.MIL  [26.0.0.73 or
10.0.0.51] using login="anonymous" and password="guest".  The bulletin
pathname is SCC:DDN-SECURITY-nn (where "nn" is the bulletin number).

**********************************************************************

                     HALLOWEEN PRECAUTIONARY NOTE

Halloween is traditionally a time  for tricks of all kinds.   In order
to guard against possible benign or malevolent attempts to affect  the
normal operation of your host,  the DDN SCC staff suggests  taking the
following easy precautions:

   1. Write a set of emergency procedures for your site and keep it up
      to date.  Address such things as:

         - What would you do if you had an intruder (either a human or
           a computer virus)?

         - Who would you  call for help?  HINT:  Read the top  of this
           bulletin!  Also, for 24 hour assistance:

           MILNET Trouble Desk -- (A/V) 231-1713 or (800) 451-7413

         - Who is in charge of security at your site?

         - How would you apply a hardware/software fix if needed?

   2. Save your files regularly,  and make file  back-ups often.   Put
      the distribution copies of your  software in  a safe  place away
      from your computer room.  Don't forget where they're stored!

   3. Avoid trivial passwords and change them often.   (See the "Green
      Book"  (Department  of  Defense  Password Management Guideline),
      CSC-STD-002-85, for information on the use of passwords.)

   4. Check  to  make  sure  your  host  has no  unauthorized users or
      accounts.  Also check for obsolete accounts (a favorite path for
      intruders to gain access).

   5. Restrict system  ("superuser", "maint", etc.)  privileges to the
      minimum number of accounts you possibly can.

   6. Well publicized accounts including "root", "guest", etc. AND the
      personal account  for the  system administrator  should NOT have
      system privileges.   (Past experience  has shown  that these IDs
      are more susceptible to successful intruder attacks.)

   7. Keep your maintenance contracts active.

Of course,  these steps should be taken throughout the year as part of
your regular operating procedure.

**********************************************************************

The Info-VAX Monthly Posting Part 4 (May 27, 1993)

Archive-name:   info-vax/part04
Last-modified:  1993/05/27

[Changes since last posting: Add new site for CMUIP.  Minor tweaks because of
changes to dmc.com.]

           The Info-VAX Monthly Posting
           ----------------------------
           PART 4 -- How to find software.
           (Coordinated by Dick Munroe, written by many others)

(Part 1 is an introduction to Info-VAX.  Part 2 is Beginner Common Questions.
Part 3 is Advanced Common Questions.)

This is NOT an introduction to navigation on the Internet.  Nor is it intended
to supplant other official "how to find ..." FAQs.  It is intended to be a
collection of pointers to commonly used/requested VMS software.  Whenever
possible the pointers will be to the "official" support site.  Pointers to
widely known software archives will be included here from time to time.

In general, all of the software discussed here either has been or soon will be
available from DECUS either as a separate package or on the DECUS CDROMs.  If
all else fails, you can always get things through your local DECUS librarian or
[shudder] buy your own copy.

I'm also soliciting user reviews of any of the software discussed here.

Thanks,

Dick Munroe

Save this message for future reference!

Table of Contents:

,ANONYMOUS-UUCP -- Archive sites available via anonymous UUCP          30-Apr-93
        Dick Munroe <munroe@dmc.com>
,BBS	-- Is there a VAX based BBS available?		       updated 28-Oct-92
	Dick Munroe <munroe@dmc.com>
        Jay Whitney <jw@innovative.com>
        "Brendan Welch" <welchb@aspen.ulowell.edu>
,CMU-OpenVMS-IP -- Where to find CMU-OpenVMS-IP                        24-Feb-93
        Marc Shannon <SYNFUL@DRYCAS.CLUB.CC.CMU.EDU>
,FAQFINDING -- Sources for Frequently Asked Questions                  04-Aug-92
        Dick Munroe <munroe@dmc.com>
,FILESERV -- Addresses of various mail based file servers.     updated 28-Oct-92
        Dick Munroe <munroe@dmc.com>
,FTP    -- Addresses of various FTP sites.                     updated 05-Sep-92
        Dick Munroe <munroe@dmc.com>
	Ulli Horlacher <ORAKEL@rzmain.rz.uni-ulm.de>
,FTPMAIL -- How to access FTP without an Internet Connection	       02-Aug-92
	Dick Munroe <munroe@dmc.com>
,GCC	-- See ,GNUSOFT						       02-Aug-92
,GNUSOFT -- How to find GNU software				       02-Aug-92
	Dick Munroe <munroe@dmc.com>
,MX	-- How to get a copy of the Message Exchanger		       02-Aug-92
	Hunter Goatley <goathunter@WKUVX1.BITNET>
,NEWS   -- How to get a news reader.                           updated 03-Sep-92
	Billy Barron <billy@sol.acs.unt.edu>
        Rod Eldridge <gvrod@isuvax.iastate.edu>
        Hunter Goatley <goathunter@WKUVX1.BITNET>
        Bernd Onasch <ONASCH@ira.uka.de>
,SOFTWARE_LIST -- Pointer to lists of VAX Software                     03-Sep-92
        Ed Wilts <EWilts@Galaxy.Gov.BC.CA>
,UUCP	-- *how* to get decus uucp V2.0				       02-Aug-92
	Kent C. Brodie <brodie@fps.mcw.edu>
,VI     -- Where to get VI for VMS? (for those without POSIX)          28-Oct-92
        le9miiwa@cine88.cineca.it (Andrea Spinelli) and a cast of thousands.
,ZMODEM -- Where to find [sources for] ZMODEM for VMS.                 14-Sep-92
	Dick Munroe <munroe@dmc.com>
	Chuck Forsberg <caf@omen.com>
,ZOO	-- Where to get ZOO v2.10 for MS-DOS, Unix and VMS	       09-Aug-92
	Keith Petersen <w8sdz@tacom-emh1.army.mil>, The SIMTEL20 Archives

(the ",UUCP", etc are keywords.  If you search for that text (including the ",")
you will be brought to the beginning of that article.)

--------------------------------------------------------------------------------
,ANONYMOUS-UUCP

Some sites provide anonymous uucp access to VMSNET.SOURCES and a variety
of other software.

dmc.com         Telephone:      (508) 562-7186
                Modem:          WorldBlazer or equivalent
                Login:          ...
                                DMConnection> connect hulk<cr>
                                ...
                                Username: UANON<cr>
                                Password: anonymous<cr>

        Where ... is text that can be ignored and <cr> is carriage
        return.

        Make sure that your chat script sets parity to zero or you won't
        make it past the DECServer.  U*x boxes should try "" P_ZERO as
        the first two tokens of their chat scripts if they have trouble
        getting in.

        Further information can be obtained by transferring:

                ~/listings/README.

        which describes the contents of the archives and how to transfer
        things.  Briefly, you have access to the latest DECUS L&T CDROM,
        VMSNET.SOURCE, COMP.SOURCES.*, COMP.BINARIES.MAC, and any other
        software in use at dmc.com (some GNU products, MX, DECUS UUCP,
        etc.)  If you would like other things archived, send
        suggestions/requests to postmaster@dmc.com
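
        As a rough sketch only: once dmc.com is set up in your local UUCP
        configuration (the system name "dmc" below is an assumption; use
        whatever name you actually configured), the README can be fetched
        with a command along the lines of the uunet example shown in the
        ,UUCP article further on, e.g.:

                uucp "dmc!~/listings/README." uucp_public:README.

        Check your own UUCP documentation for the exact syntax on your
        system.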

--------------------------------------------------------------------------------
,BBS

The only public domain VAX based BBS that I know of is available from:

	MAILSERV@ualr.edu
or
	MAILSERV%ualr.bitnet@cunyvm.cuny.edu

Start by sending a message to the mailserv with the body of the message being:

HELP
INDEX

And go from there.  A copy of this BBS has been posted to VMSNET.SOURCES, so it
should also be available from an archive site near you.

At least one person (Roger Smith, SMITH@biosci.arizona.edu) has reported that
MAILSERV@ualr.edu has bounced messages recently.

There is at least one commercially developed BBS available (I'm not an owner or
user of this software, I just know about it).  Contact OMTOOL in Salem, New
Hampshire, USA for details.

If anybody knows of other commercial or public domain BBSs for VMS, please
contact me so I can update this listing.

Dick Munroe

I got a pointer to another commercial BBS.  The following message is from one of
the developers.  The product is named Huddle and is available from Innovative
Software in Denver, Colorado.  As before, I'm not a user or a principal in the
company, just an interested bystander.

>From: Jay Whitney <jw@innovative.com>
>Subject: Your Huddle request
>
>Huddle is a commercial electronic conferencing and bulletin board system for
>VMS.  Its primary catch point is ease of use.  Huddle offers three different
>user interfaces; two are command-based, with an intuitive command set based on
>VMS MAIL (of those two, one is screen oriented, and one is not), the third is a
>panel-oriented, user-extendable menu a-la Lotus 1-2-3 and MS-word.
>
>Huddle also features hierarchical conferencing.  A conference can support any
>number of subconferences, where the aggregate structure can be managed as a
>single unit.  Maintenance is very simple.  Once a maintenance policy has been
>defined, implementation of the maintenance policy is 100% automated.  Access
>control is very similar to standard VMS mechanisms.
>
>Huddle also offers built-in bidirectional Bitnet/Internet mailing list
>integration, file upload and download, a file transfer area, and a system news
>facility.
>
>                                                Best Regards,
>                                                Jay Whitney

Yet another pointer to a commercial BBS:

>From: "Brendan Welch, System Analyst, UMass/Lowell"
>      <welchb@aspen.ulowell.edu>
>Subject: Info-VAX: How to find VAX/VMS software.
>
>CoSy (Conferencing System) is a product originally from the Univ. of Guelph.
>It is now supported by Softwords, 4252 Commerce Circle, Victoria, BC, Canada,
>V8Z  4M2.    (604)727-6522      Their David Sells does have an email
>address; sorry I have lost it.
>
>Incidentally, we do run it here (as well as VaxNotes and VTX).
>
>Brendan Welch, UMass/Lowell, W1LPG,  welchb@woods.ulowell.edu

--------------------------------------------------------------------------------
,CMU-OpenVMS-IP

>The End of an Era...
>
>Effective December 1st, 1992, Carnegie Mellon University will no
>longer be distributing, developing, or supporting CMU-TEK TCP/IP.
>The resources for CMU-TEK will be reassigned to other local projects
>which will benefit the CMU community.
>
>What happens now?
>
>CMU has made an arrangement with Tektronix to remove their name from
>the product and we are making a final release called CMU-OpenVMS/IP.
>This version is nearly identical to the CMU-TEK 6.6-5 kit, so
>"upgrading" is not required.  This version is being released for
>public distribution and will be available for a limited time (until
>January 15, 1993 locally and can probably be found elsewhere after
>that time) for ANONYMOUS FTP.
>
>The installation savesets for CMU-OpenVMS/IP will be stored on
>DRYCAS.CLUB.CC.CMU.EDU (128.2.232.11).  This location will also
>contain a CMU-OpenVMS/IP V6.3 kit for those users who are still
>using VMS V4.4 through V4.7.  Instructions on how best to retrieve
>and install the software are available there in README.CMUIP.  In
>addition, these kits will also be placed in the DECUS Library and
>available for distribution after January 1, 1993.  You can contact
>the DECUS Library at the following address/number:
>        DECUS
>        Library Order Processing
>        333 South Street, SHR1-4/D31
>        Shrewsbury, MA  01545-4195
>        Phone: 508-841-3500/3502/3511 (8:30AM - 5:00PM EST)
>        FAX: 508-841-3373
>
>"Support" for the product will continue on a volunteer-based system
>through the electronic mailing list.  Please note (from this
>message) that the mailing list has been moved.  The new addresses
>are:
>        cmu-openvms-ip@DRYCAS.CLUB.CC.CMU.EDU
>for postings to the mailing list, and:
>        cmu-openvms-ip-request@DRYCAS.CLUB.CC.CMU.EDU
>for requests to be put on or taken off the list.
>The VMSnet newsgroup vmsnet.networks.cmu-tek is also being renamed
>to be vmsnet.networks.cmu-openvms-ip.
>
>Thank you for your support over the years and we hope that this new
>arrangement will permit more sites to help develop CMU-OpenVMS/IP.

[Ed. note: I have found copies of the release available via FTP on
csus.edu in the following directories:

        /pub/cmuip/contrib      - User contributions to CMUIP.
        /pub/cmuip/vms-v4       - The version of CMUIP known to work
                                  against VMS V4.
        /pub/cmuip/vms-v5       - Ditto for V5 and higher.

The mailing list is gatewayed to

        vmsnet.networks.tcp-ip.cmu-tek

for those of you who prefer news group interaction.  There appears to be
much more traffic there now and as near as I can tell, better support. 
Anyway, more people are responding faster.

I received official confirmation of csus.edu as a mirror site in a later
message, below.]

>Since we are quite fond of the former CMU/Tek package, still rely on it
>for some of our machines, and need to keep a copy around anyway, we have
>placed a mirror copy of the release files on our Anonymous-FTP server.
>CMU initially announced that these files would only be available from their
>server (drycas.club.cc.cmu.edu) until 15-Jan-1993. The intent is that
>providing this mirror site will help maintain the availability of this
>software.
>
>PLEASE NOTE that we are doing this as a convenience to the network community -
>CSU Sacramento does *not* offer any form of support or assistance with the
>product so please do not ask! (Though we'll be happy to distribute patches
>and updates if/when they become available from Henry, John and other
>dedicated [and slightly eccentric :-)] persons).
>
>    John F. Sandhoff, University Network Support
>      California State University, Sacramento - USA
>        sandhoff@csus.edu

[Ed. note: Another FTP site for CMUIP is available (this may even be the
"official" one as Henry Miller runs it):

        sacwms.mp.usbr.gov      [.tekip]
        sacusr.mp.usbr.gov      [.tekip]

There is a European mirror for CMUIP and all its goodies, run by Andy Harper:

        oak.cc.kcl.ac.uk        [.cmu-tcpip]
        elm.cc.kcl.ac.uk        [.cmu-tcpip]
        ash.cc.kcl.ac.uk        [.cmu-tcpip]

get [.cmu-tcpip]AAA-TYPE-ME-FIRST.TXT to see the index of everything in the
directory.]
--------------------------------------------------------------------------------
,FAQFINDING

The news.answers news group is the official vehicle for publication of
frequently asked questions digests (including this one).  For those of you
without access to News, there is an alternative: DMConnection archives
news.answers and makes it available to the general public.  Send a message to
fileserv@dmc.com with the body of the message containing the line:

help

to get started.

Dick Munroe
--------------------------------------------------------------------------------
,FILESERV

The following mail servers deal primarily with VMS software.  Instructions on
their use and content appear regularly in the vmsnet.sources.d, vmsnet.misc,
and vmsnet.tpu newsgroups.

Address                                 Maintainer

MAILSERV@Cerritos.EDU.                  Bruce Tanner <tanner@cerritos.edu>
nrl_archive@nrlvax.nrl.navy.mil         koffley@nrlvax.nrl.navy.mil
VMSSERV@NYUACF.BITNET                   Stephen Tihor <tihor@acf4.nyu.edu>
FILESERV@SHSU.BITNET or			TPU Command Procedures Collection
FILESERV@SHSU.EDU
FILESERV@WKUVX1.BITNET                  Hunter Goatley <goathunter@WKUVX1.BITNET>
FILESERV@irav17.ira.uka.de              Bernd Onasch <ONASCH@irav17.ira.uka.de>
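
Most of these servers take commands in the body of an ordinary mail message.
As a sketch only (the commands below are the ones quoted for
FILESERV@WKUVX1.BITNET in the ,NEWS article further on; other servers may use
a different command set, so send HELP first), a request for a package listing
and a kit might look like:

        To:    FILESERV@WKUVX1.BITNET
        Body:  DIR ALL
               SEND FILESERV_TOOLS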

Dick Munroe
--------------------------------------------------------------------------------
,FTP

The following FTP sites have significant collections of VMS software.  More
complete lists of anonymous FTP sites and their contents appear regularly in the
news.answers newsgroup.  These sites and their contents are discussed fully in
vmsnet.sources.d.

Address                                 Maintainer

acfcluster.nyu.edu                      Stephen Tihor <tihor@acf4.nyu.edu>
Black.Cerritos.EDU                      Bruce Tanner <tanner@cerritos.edu>
dmc.com                                 Dick Munroe <munroe@dmc.com>
ftp.spc.edu
White.Cerritos.EDU                      Bruce Tanner <tanner@cerritos.edu>

Dick Munroe
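
If you have a direct Internet connection, retrieval from any of these sites is
an ordinary anonymous FTP transfer.  As a rough sketch (the [.MADISON]
directory on ftp.spc.edu comes from the NEWSRDR note in the ,NEWS article
further on, the AAAREADME.TXT file name is only a guess at a typical index
file, and the exact client syntax depends on which TCP/IP package you run):

        $ ftp ftp.spc.edu
        Name: anonymous
        Password: <your mail address>
        ftp> cd [.MADISON]
        ftp> dir
        ftp> get AAAREADME.TXT
        ftp> quit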

>From: ORAKEL@rzmain.rz.uni-ulm.de (Framstag)
>Subject:Re: Anonymous FTP sites for VMS, DOS, Mac files
>
>For a list of VMS-ftp-sites look at info.rz.uni-ulm.de (134.60.1.125) in
>pub/VMS/ftp-sites 
>
>Updated every sunday.
>

[Ed. Note:  I looked and here is the sort of stuff you can expect to find:

total 659
-r--r--r--  1 ftp-admi news        42990 Apr 28 16:42 VAX-VMS-SOFTWARE.LIS_Z
-rw-r--r--  1 ftp-admi news        35853 Sep  6 06:04 acfcluster_nyu_edu.lis_z
-rw-r--r--  1 ftp-admi news         1718 Sep  6 07:40 ada.cenaath.cena.dgac.fr.lis_z
-rw-r--r--  1 ftp-admi news         2594 Sep  6 06:06 addvax_llnl_gov.lis_z
-rw-r--r--  1 ftp-admi news        33135 Sep  6 06:37 arizona_edu.lis_z
-rw-r--r--  1 ftp-admi news        22874 Sep  6 06:40 black_cerritos_edu.lis_z
-rw-r--r--  1 ftp-admi news         2647 Sep  6 06:11 cca_camb_com.lis_z
-rw-r--r--  1 ftp-admi news         4934 Sep  6 06:07 cisco_nosc_mil.lis_z
-rw-r--r--  1 ftp-admi news        13767 Sep  6 06:41 ftp_spc_edu.lis_z
-rw-r--r--  1 ftp-admi news         4508 Aug  2 06:09 ftpvms_ira_uka_de.lis_z
-rw-r--r--  1 ftp-admi news         2662 Sep  6 06:05 iago_caltech_edu.lis_z
-rw-r--r--  1 ftp-admi news         1805 Sep  6 06:00 kuhub_cc_ukans_edu.lis_z
-rw-r--r--  1 ftp-admi news          809 Sep  6 06:11 mis1_mis_mcw_edu.lis_z
-rw-r--r--  1 ftp-admi news       218437 Sep  6 07:37 mvb_saic_com.lis_z
-rw-r--r--  1 ftp-admi news        84183 Sep  6 06:33 niord_shsu_edu.lis_z
-rw-r--r--  1 ftp-admi news         3797 Sep  6 06:05 phast_phys_washington_edu.lis_z
-rw-r--r--  1 ftp-admi news         6300 Sep  6 07:39 public_tgv_com.lis_z
-rw-r--r--  1 ftp-admi news          834 Sep  6 06:06 rml2_sri_com.lis_z
-rw-r--r--  1 ftp-admi news         1167 Sep  6 07:39 ubvms_cc_buffalo_edu.lis_z
-rw-r--r--  1 ftp-admi news          734 Sep  6 06:06 utnetw_utoledo_edu.lis_z
-rw-r--r--  1 ftp-admi news         6718 Sep  6 06:08 vesta_sunquest_com.lis_z
-rw-r--r--  1 ftp-admi news         3337 May 24 06:00 vms_ecs_rpi_edu.lis_z
-rw-r--r--  1 ftp-admi news       139662 Sep  6 06:52 vms_huji_ac_il.lis_z
-rw-r--r--  1 ftp-admi news         1670 Sep  6 06:00 vmsa_oac_uci_edu.lis_z

The above was obtained via ftpmail@decwrl.dec.com with the following message:

connect info.rz.uni-ulm.de
ls /pub/VMS/ftp-sites
quit

Presumably things will change as stuff gets added.]
--------------------------------------------------------------------------------
,FTPMAIL

Many sites are not directly connected to the Internet.  Yet much of the software
or information we want access to is not available from mail servers.  In the
United States (there may be other sites, but I've never had reason to access
them) I have found the FTPMAIL service of DEC Western Research Labs to be
invaluable.  To get information, send a message to ftpmail@decwrl.dec.com with a
message body consisting of the single line:

help
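
Once you have the help file, a full retrieval request follows the same
connect/commands/quit pattern as the directory-listing example at the end of
the ,FTP article above.  As a sketch only (the "binary" and "get" command
names are my assumption about the server's command set, so check them against
the help text), fetching one of the site lists mentioned in the ,FTP article
might look like:

        connect info.rz.uni-ulm.de
        binary
        get /pub/VMS/ftp-sites/VAX-VMS-SOFTWARE.LIS_Z
        quit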

ftpmail and other mechanisms are discussed periodically in the news.answers
newsgroup.

Good Luck,

Dick Munroe
--------------------------------------------------------------------------------
,GCC
,GNUSOFT

The official FTP site for all GNU software is prep.ai.mit.edu.  They even have
the basics there for GCC.  For those of you interested in G++ (the GNU C++
compiler) for VMS, you should try FTP from mango.rsmas.miami.edu, which keeps
current copies of G++ and G++lib in VMS installable form.  There is also a
mail based file server, MRCServ@mtroyal.ab.ca, which carries the same G++ and
G++lib kits.

Good Luck,

Dick Munroe
--------------------------------------------------------------------------------
,MX

>From: "Hunter Goatley, WKU" <goathunter@WKUVX1.BITNET>
>Subject:New e-mail fileserver: MXSERVER@WKUVX1.BITNET

>OK, I'm setting up an experiment....
>
>Some of you have been asking about getting MX031 via e-mail because
>you don't have access to anonymous ftp.  Well, I (finally) packaged up
>MX031 and created a new file server on my system.  The address of the new file
>server is MXSERVER@WKUVX1.BITNET.  MX will *not* be available from
>FILESERV because I want to try to limit when the files go out.
>
>MX031 is a VMS_SHARE file in 113 parts (60 blocks)---that's pretty
>big.  Unlike the way Matt used to set it up, I've (for the time being)
>created one big .MFTU file containing all the VMSINSTAL savesets.
>The only caveat at the moment is that MXSERVER files are only sent
>between 5PM CDT and 6AM CDT.  If very many people request MX031, my
>poor 9600-baud BITNET link will take a while to let everything go
>through.
>
>SO:
>
>  a) please be patient---it may take several days for everything to
>     make it through
>  b) MX_REVC_UPGRADE031 has also been moved from FILESERV to MXSERVER
>     (use SEND MX_REVC_UPGRADE031)
>  c) don't everybody ask for it at once
>  d) this is an experiment---depending on how things go, I may have to
>     limit the number of files a person can get in a day, etc.
>
>Of course, MX is available via ftp from ftp.spc.edu in [.MX].
>
>Hunter
--------------------------------------------------------------------------------
,NEWS

[Ed. Note -- This was cribbed from the news.software.anu-news FAQ.  There
is a beta version of ANU News, 6.1b*, that is available as well.  I don't know
if it is available from these sites as well, but I believe it is.  6.1a4 is
packaged as part of the UUCP 2.0 release, so if you're getting that, you don't
need to get a copy unless you want to get the documentation/sources, or the
latest versions.

Dick Munroe]

>From: billy@sol.acs.unt.edu (Billy Barron - VAX/UNIX Systems Manager)
>Subject:FAQ: news.software.anu-news

>...
>Q: How do I acquire a copy of ANU News?
>A: ANU News is available for anonymous FTP on sao.aarnet.edu.au (Australia)
>   and kuhub.cc.ukans.edu (Kansas, USA).  Other sites may also have it
>   available.  Please use the site closest to you.  There are two versions
>   available:  a normal backup saveset and a LZW compressed version of the
>   backup savesets.  VMS LZW programs are available on kuhub.cc.ukans.edu
>   also and through DECUS.  For network load reasons, it is preferable if you
>   acquire LZW and the compressed version of ANU NEWS.
>
>   The Glass patch was posted to the newsgroup after V6.0-2 was released.  The
>   archives for the newsgroup are accessible from listserv@vm1.nodak.edu.
>
>   here's an example of what you should send to listserv@vm1.nodak.edu:
>
>   //ListSrch JOB   Echo=No,Reply-via=mail
>   Database Search DD=Rules
>   //Rules DD *
>   Select glass v6.0-3 patch in anu-news since 1-jan-1991
>   print
>   /*
>
>   i believe you can send mail there with a subject (or is it body?)
>   of "help", and it'll send help.
>
>   there's also a news fileserver at fileserv@dayton.saic.com. commands are
>   like:
>   send news_v60-3_patch.19
>   send news_v60-3_patch.2%
>   send news_v60-3_patch.18
>   and i imagine the help thing works there, too..
>
>...

>From: gvrod@isuvax.iastate.edu (R Eldridge, VMS FOREVER)
>Subject:Re: Looking for Newsreader software

>VNEWS           - NNTP Client developed by a large group of people.  Current
>                contact is Joel Snyder (jms@arizona.edu).  Available by
>                Anonymous FTP, contact Joel for details.  UNIX style interface.
>
>                'bit.listserv.vnews-l' is a USENET equivalent of the 'VNEWS-L'
>                mailing list based at UBVM.BITNET.
>
>DXRN            - VMS port of the Berkeley XRN X windows news reader.  Contact
>                is Rick Murphy (murphy@burfle.dco.dec.com).  Available by
>                Anonymous FTP to decuac.dec.com, file /pub/DEC/dxrn.share.
>
>BULLETIN        - Includes a USENET news reading mode.  Send mail to
>                BULLETIN@ORYANA.PFC.MIT.EDU with the text INFO for details on
>                what BULLETIN is and how to get it.
>

>Subject: Matt Madison's utilities (was Re: Looking for Newsreader software)
>From: goathunter@wkuvx1.bitnet
>
>In article <1992Aug29.170329.29824@news.iastate.edu>, gvrod@isuvax.iastate.edu
>(R Eldridge, VMS FOREVER) writes:
>> NEWSRDR               - Available for Anonymous FTP on
>> ...
>>
>Actually, all of this has changed.  NEWSRDR is available via anonymous
>ftp from ftp.spc.edu ([192.107.46.27]) in [.MADISON.NEWSRDR].  It's
>available via e-mail by sending the following commands in the body
>of a mail message to FILESERV@WKUVX1.BITNET:
>
>SEND NEWSRDR
>SEND FILESERV_TOOLS
>SEND NEWSRDR_SRC                        !To get the sources
>
>All of Matt's programs are available from ftp.spc.edu in [.MADISON]
>and via e-mail from FILESERV@WKUVX1.BITNET.  Send the command DIR ALL
>in the body of a mail message to FILESERV@WKUVX1.BITNET for a brief
>listing of all the packages available from here.
>

>From: ORAKEL@rzmain.rz.uni-ulm.de (Framstag)
>Subject:Re: Looking for Newsreader software
>
>VMS NEWS v1.24 by Bernd Onasch
>
>VMS NEWS is a VAX/VMS full screen orientated NEWSreader supporting the
>following network (TCP/IP) implementations:
>  * CMU/tek
>  * EXOS              (never tested, no site with it available)
>  * MultiNet
>  * Process Software
>  * UCX               (1.x and 2.0 [DEC TCP/IP])
>  * Wollongong
>  * DECnet object     (tested the one of ANU NEWS 6.0.6)
>( * ANET - just got it from someone in japan - not yet tested )
>
>The client supports various display methods:
>  * Numbered  to just show the articles in order they came in
>  * Subject   to display the articles sorted by subject line
>  * Threaded  to display the articles sorted by threads (e.g. references)
>In all cases, VMS NEWS offers a window where you can scroll around to select
>the requested newsgroup or article.
>
>VMS NEWS is available from:
>  * MAILserver FILESERV@irav17.ira.uka.de
>    package NEWS_124 - VMSshare'd source
>  * FTPserver iraun1.ira.uka.de (129.13.10.90)
>    /pub/networks/news/news_1_24.com - VMSshare'd source
>  * FTPserver info.rz.uni-ulm.de (134.60.1.125)
>    /pub/VMS/communication/news124.zip - VMS zipped source
>  * FTPserver ftp.spc.edu (192.107.46.27)
>    [anonymous.ucx]news_124.share - VMSshare'd source
>
>For technical questions about VMS NEWS please mail directly to
>Bernd Onasch:
>                ONASCH@ira.uka.de 
>                ONASCH@informatik.uni-karlsruhe.de
>
--------------------------------------------------------------------------------
,SOFTWARE_LIST
>From: EWILTS <EWILTS@mr.gov.bc.ca>
>Subject: Info-VAX: How to find VAX/VMS software.
>
>Dick,
>
>Ric Steinberger publishes a monthly Vax Software list to comp.os.vms.  A
>pointer to this would be helpful.  From Ric's file:
>
>NOTE:   A current version of this file may be retrieved by sending a ONE LINE
>        mail message to:  NETSERVER@RML2.SRI.COM.  The one line should say:
>        ?PACKAGE*VAX_LIST
>        I will also try to keep a current version of VAX_LIST.TXT (this
>        file) available via anonymous FTP from: rml2.sri.com (128.18.22.20).
>
>
>Ed Wilts, BC Systems Corp., 4000 Seymour Place, Victoria, B.C., Canada, V8X 4S8
>EWilts@Galaxy.Gov.BC.CA   |   Ed.Wilts@BCSystems.Gov.BC.CA   |   (604) 389-3430
>
--------------------------------------------------------------------------------
,UUCP

>Subject: HOW to get decus uucp (last time... really!)
>From: brodie@fps.mcw.edu
>
>                       UUCP_INFO.TXT    
>                           -or-
>"The canned answer given to folks who want to get a copy of DECUS UUCP"
>
>Last Revision:  7/28/92		Kent C. Brodie (brodie@fps.mcw.edu)
>						-or-    fps!brodie)
>
>
>How to get DECUS UUCP (Version 2.0):
>
>DECUS UUCP, distributed by the DECUS "VMSNET" working group, is a complete
>implementation of UUCP for VMS.    It is becoming one of the most popular
>"packages" offered from DECUS, and for many VAX sites, it's their only
>way to get to the internet.        This file describes how you may
>obtain your very own copy of the software for little or no cost.
>
>
>1) via DECUS.   By far the most popular method of obtaining DECUS UUCP
>   is through the VAX SIG TAPES distributed by the DECUS librarians.  The
>   tapes (now available on CDROM, too!) do not cost much (or nothing at all
>   if you obtain them via your LUG librarian), and you'll get
>   TONS of stuff besides "just" UUCP.     Contact your local DECUS tape
>   librarian, and find out how you can get the latest set of DECUS tapes.
>   (You can also order them through the DECUS LIBRARY)
>
>   NOTE:
>    * decus UUCP (V2.0) is NOW available on the SPRING 1992 tapes
>    * decus UUCP (V1.3) last appeared on the FALL 1990 tapes
>
>2) using FTP via the INTERNET.   DECUS UUCP (V2.0) is available via anonymous
>   ftp from the following sites:
>
>	ftp.spc.edu		directory [.decus_uucp]
>	Address: 192.107.46.27
>		(maintained by terry@spcvxa.spc.edu)
>	ftp.uu.net		directory /systems/vms/uucp
>	Address: 137.39.1.9
>		(maintained by the folks at uunet)
>	mis1.mis.mcw.edu	directory [.decus_uucp]
>	Address: 141.106.64.11
>		(maintained by brodie@mis.mcw.edu)
>	mvb.saic.com		directory [.decus_uucp]
>	Address: 139.121.19.1
>		(maintained by Mark.berryman@mvb.saic.com)
>
>   "mis1.mis.mcw.edu" is the ftp site that *I* maintain, and any
>   updates I receive will be immediately passed down to the other sites
>   shown above.
>
>   The contents of the [.decus_uucp] directory should be the following:
>   (file sizes listed are VMS disk blocks [=512 bytes each])
>
>     Directory STA1:[UUCP_DIST]
>
>     DEVEL.BKP;1           16128	
>     MAPS.BKP;1            13167	
>     NEWSDEVEL.BKP;1        7623
>   * RUN.BKP;1              8001  		
>     SUPPDOC.BKP;1          3465
>     SUPPNEWS.BKP;1        12096
>   * UU20BOOT.BKP;1         1890
>
>     Total of 7 files, 62370 blocks.
>
>   Files with a "*" are the ABSOLUTE MINIMUM required.  Additionally,
>   NEW sites (not yet running UUCP) will also need MAPS.BKP.
>   The "NEWS" files refer to Geoff Huston's ANU-NEWS, which is
>   also distributed with DECUS UUCP.
>
>   Additionally, if you're really up to it, the ftp site
>   wuarchive.wustl.edu currently offers the latest VAX SIG decus
>   tape offerings via anonymous ftp.  (the WHOLE tape, no parts!).  Be
>   ready for a lot of transfer time and a lot of pieces, though.  
>
>   If you do use FTP on any of the above sites, PLEASE be kind. 
>   DECUS UUCP is kind of big, so transfer it during non-peak hours.

[Ed. note: for those of you without FTP AND without a direct UUNET
connection AND you are unwilling to use UUNET's 900 number service, 
try FTPMAIL.  For details, see the discussion of FTPMAIL, above.]

>   
>3) Via "anonymous uucp".    
>   [thanks to Jamie Hanrahan who arranged for this....]
>
>   UUNET also has the package available for pickup via anonymous uucp.
>   (any DECUS UUCP site running at least V1.2 can make use of this!)
>   
>   DOCUMENTATION ON USING ANONYMOUS UUCP IS *IN* YOUR EXISTING (V1.2+)
>   DECUS UUCP DOCUMENTATION (USRGDxx.MEM).
>
>   You can use uunet even if you are not a "subscriber" (they have
>   a 900 number!   At the time of writing, connect charges via 900 
>   access is about $0.40 a minute)    (be forwarned:  uunet
>   charges rates based on connect time used.... based on a V.32 modem
>   (etc), the BASE kit would cost about $10-$12.  The ENTIRE kit can
>   probably be retrieved for under $50.)   Although long distance is
>   the "next best thing to being there", it isn't cheap....
>
>   (If all you have is a 2400-baud modem, don't even consider this.....)
>
>   The directory to get the files from is ~/systems/vms/uucp   and
>   the very first file to retrieve is "aaareadme.txt".   That file
>   will then document for you everything else you have to do.  (what
>   files to get, how to uncompress them, etc.).
>	
>   This method, therefore, is probably preferable for those sites who
>   aren't a long ways away from uunet, or who are already using it
>   as their main feed.   
>
>   The following is a reference to the file names and sizes.  You
>   can be the judge as to what it would cost to transfer knowing
>   your system's capabilities, modem speed, and connection rate:
>
>
>   directory "/systems/vms/uucp":
>   total 14174
>   *  -rw-rw-r--   1 3        21          10198 Jul 20 19:52 aaareadme.txt
>      -rw-rw-r--   1 3        21        4787679 Jul 10 05:01 devel.bkp-z
>      -rw-rw-r--   1 3        21        2112011 Jul 10 05:17 newsdevel.bkp-z
>   *  -rw-rw-r--   1 3        21          85666 Jul 17 00:15 preboot.vms_share
>   *  -rw-rw-r--   1 3        21        2460500 Jul 10 05:24 run.bkp-z
>      -rw-rw-r--   1 3        21         900149 Jul 10 05:27 suppdoc.bkp-z
>      -rw-rw-r--   1 3        21        3591934 Jul 10 05:37 suppnews.bkp-z
>   *  -rw-rw-r--   1 3        21         498411 Jul 17 00:22 uu20boot.bkp-z
>
>   Files marked with a "*" are the ABSOLUTE MINIMUM required.  (note:
>   the files here are a bit different from the FTP or MAGTAPE distribution)

[Ed. note: the command I used to fetch devel.bkp-z was:

        uucp -v "uunet!~/systems/vms/uucp/devel.bkp-z" uucp_public:devel.bkp-z

and it worked just fine.]

>
>4) Lastly, the final method of getting a copy is via your local friendly
>   mailman.    I provide free distribution of the DECUS UUCP package,
>   but only if you pay the freight.   
>
>   Package the following:
>
>   1 tape    (TK50, TK70, or 9-track)     ** TK85 coming soon...**
>   1 self-addressed, STAMPED envelope to fit the tape  **required**
>
>
>   Mail it all to:
>
>    DECUS UUCP
>c/o Kent Brodie
>    Faculty Physicians & Surgeons
>    11200 West Plank Court, Suite 160
>    Wauwatosa, WI    53226
>
>
>    (If you send me a 9-track tape, I can handle 1600 or 6250bpi.)   
>    If you use 6250, it will all fit on a 600-foot reel.  If you need 1600,
>    then you must send a 2400-foot reel.
>
>    U.S. Postage for mailing a TKxx cartridge is something like $2.90.
>    (thus, for both ways, the total cost is $5.80)
>    
>    A few notes.   I provide this service free of charge, but at the
>    same time, The Medical College of Wisconsin didn't hire me to make
>    tape copies either.  As I get tapes to make, I make them when I can.  
>    Usual turnaround time is about a week.  ("your mileage may vary").
>    
>    Finally, when it comes to the self-addressed "stamped"
>    envelope, you can use normal US postal mail, Federal Express, or UPS.
>    Our office does not normally deal with any other couriers, so please
>    don't go using DHL or something.   
>
>    Questions?   email me or gimme a call.
>
>-----------------------------------------------------------------------------
>Kent C. Brodie - Sr. Systems Manager		InterNet:  brodie@fps.mcw.edu
>Faculty Physicians & Surgeons			uucpNet:   fps.mcw.edu!brodie
>Medical College of Wisconsin			MaBellNet: +1 414 266 5080
--------------------------------------------------------------------------------
,VI
>From: le9miiwa@cine88.cineca.it
>Subject:Summary: vi editor for VMS
>
>Hi everyone!
>
>Here is a summary of answers to my query for a vi clone for VMS
>
>From: whitfill@heechee.meediv.lanl.gov (Jim Whitfill - Los Alamos)
>
>>get [.vi]vi.sav or [.vi]vi.sav_z from anonymous on meediv.lanl.gov. vi.sav is
>>a VMS backup saveset and vi.sav_z is compressed. Get lzdcmp.exe from top level
>>on meediv.lanl.gov and decomp vi.sav_z.
>
>From: orakel@rzmain.rz.uni-ulm.de (Ulli 'Framstag' Horlacher)
>
>info.rz.uni-ulm.de  pub/VMS/editors/elvis.zip
>
>From:  BRENNAN@COCO.CCHS.SU.OZ.AU (Luke Brennan)
>
>>        you should use ELVIS - it works fine. If you can't find it
>>        closer to home, try here at coco.cchs.su.oz.au ( cd [.elvis])
>>
>>        The version here is one rev behind the current one apparently -
>>        but I haven't bothered to find the latest one, as nobody has
>>        complained about anything!
>
>Original_From: TESTA@eldp.epfl.ch (Testa Andrea SI-DP EPFL)
>Host ftp.uni-kl.de   (131.246.9.95)
>Last updated 00:47 27 Sep 1992
>
>        [Andrea suggests Elvis, and points to the DECUS distribution,
>         or the listed ARCHIE-generated locations.
>         I translate here because he wrote to me in Italian...]
>
>Host ftp.uni-kl.de   (131.246.9.95)
>Last updated 00:47 27 Sep 1992
>
>    Location: /pub2/packages/linux/sources/usr.bin
>      FILE   r--r--r--    341883  Apr  6 14:29   elvis-1.5.tar.Z
>    Location: /pub2/packages/linux/binaries/usr.bin
>      FILE      rw-r--r--    333727  Apr  2 18:09   elvis-1.5.tar.Z
>
>Host guardian.cs.psu.edu   (130.203.1.3)
>Last updated 00:25 27 Sep 1992
>
>    Location: /pub/src/gnu
>Last updated 00:25 27 Sep 1992
>     FILE      rw-rw-r--    333727  Apr  6 18:33   elvis-1.5.tar.Z
>
>Host ftp.uu.net   (137.39.1.9)
>     Location: /systems/unix/linux/binaries/usr.bin/Editors
>     FILE      rw-r--r--    166363  Apr 30 16:24   elvis-1.5.tar.Z
>
>From: SFA@epavax.rtpnc.epa.gov (STEVEN FISHBACK)
>
>>        Yes, I have ported it to the VAX here at my workplace and it
>>works great for me.   FTP anonymously to the following site:
>
>        Host:  gatekeeper.dec.com    (16.1.0.2)
>    Location:  pub/VMS/vitpu-v5
>
>               DIRECTORY r-xr-xr-x    1024 Oct 31 1990  vitpu-v5
>
>>It's written in TPU and it comes with documentation to install and use.
>>The creator is Gregg Wonderly, Mathematics Department of Oklahoma
>>State U.
>
>From: tarjeij@extern.uio.no
>
>>Try Elvis v1.5 or later, it is supposed to work under VMS. The TPU version of
>>vi should be avoided.
>
>
>Thanks also to:
>        Larry Henry <larry@eco.twg.com>
>
>[editor's note: like we say here in Lombardia,
> there are 32 different tastes!
> pick your choice! BTW, I am writing this message with vi by
>         whitfill@heechee.meediv.lanl.gov
> which happened to arrive first to me!
>]
>
>Many thanks to everybody. Hope this is useful....
>
>        Andrea Spinelli
--------------------------------------------------------------------------------
,ZMODEM

The following is the official word from Chuck Forsberg, the developer of ZMODEM:

>From: <caf%omen.UUCP%uunet.UUCP@DMC.COM>
>Subject: Re: Pointer to VMS implementation of ZMODEM needed
>
>There are two versions of VMS ZMODEM available.
>
>RZSZ.TLB is available on TeleGodzilla, GEnie, and Compuserve,
>and supports dial-in callers with ZMODEM-90(Tm) programs.
>
>A commercial version that also supports Crosstalk, Telix, Procomm
>et al is available for $495.00 from Omen Technology.
>
>Both of these programs support the popular VMS record formats.
>
>Chuck Forsberg WA7KGX          ...!tektronix!reed!omen!caf
>Author of YMODEM, ZMODEM, Professional-YAM, ZCOMM, and DSZ
>  Omen Technology Inc    "The High Reliability Software"
>17505-V NW Sauvie IS RD   Portland OR 97231   503-621-3406
>TeleGodzilla:621-3746 FAX:621-3735 CIS:70007,2304 Genie:CAF

I logged into TeleGodzilla just to see what was there.  For those of you who
want source code, here's the official pointer to the "official source code":

"ZMODEM protocol information and royalty free C source code for
developers is available in Omen's "ZMODEM Developer's
Collection" which may be ordered by voice at 503-621-3406."

The following is the help file from TeleGodzilla:

>Yam-Host(C):help
>***************   Yam-Host Command Summary	Rev 04-24-87
>
>File(s)     Ambiguous Path Name or names: [dir/]file.exe ...
>	    A directory name expands to all files in that directory,
>	    An empty File expands to all files in the current directory.
>
>file1	    Unambiguous single filename
>
>cd directory		change to directory
>cd              	change to login (home) directory
>pwd             	print working directory
>BYE			stops the hemorraging of your phone bill
>chat			opens a link to the console (chat with SYSOP)
>message			leave a public message (file=MESSAGES)
>private			leave a private message for sysop
>dir File(s)      	alphabetized directory listing
>dirr File(s)     	long form directory with transmission time printout
>dird File(s)     	  sorted by date
>dirt File(s)     	  sorted by date in reverse order
>dirl File(s)     	  sorted by file length
>dirs File(s)     	  sorted by file length in reverse order
>rb			receive files FROM YOU with YMODEM batch protocol
>rx file1          	receive one file FROM YOU using XMODEM protocol
>kermit rb       	receive files FROM YOU with Kermit protocol
>type File(s)     	type files (one or more ambiguous file names)
>sx -k file1          	send 1 TO YOU, XMODEM protocol (-k gives 1k blocks)
>rc file1         	receive one file FROM YOU with CRC-16 error checking
>sb -k File(s) 		send one or more files with YMODEM batch protocol
>sz File(s) 		send one or more files with ZMODEM batch protocol
>sz -r File(s) 		Recover/resume ZMODEM file transfer(s)
>kermit sb File(s) 	send files TO YOU with Kermit protocol
>EXAMPLES:    sx yamdemo.arc              (XMODEM)
>             kermit sb yamdemo.arc       (Kermit)
>
>Keyboard "type info.txt" for more information on this particular system.

RZSZ.TLB is, I believe, also available from most of the VMS software archives
(cerritos.edu carries it).  I checked to see if there were other copies of
ZMODEM around.

UUNET has [at least] the following, most of which SHOULD be in source form:

>systems/unix/linux/sources/usr.bin/Communications:
>total 1310
>-rw-r--r--  1 archive       713 Aug 11 07:20 rzsz.README.Z
>-rw-r--r--  1 archive      3784 Aug  6 17:13 rzsz9202.dff.Z
>-rw-r--r--  1 archive     81407 Aug  6 17:14 rzsz9202.tar-z.Z

>networking/terms:
>total 1165
>-rw-r--r--  1 revell      59565 Jul  2 23:24 zmodem.tar.Z

>systems/mac/info-mac/unix:
>total 1810
>-rw-r--r--  1 archive     52736 Feb 12  1992 zmodem-part1.shar
>-rw-r--r--  1 archive     42589 Feb 12  1992 zmodem-part2.shar
>-rw-r--r--  1 archive     54140 Feb 12  1992 zmodem-part3.shar
>-rw-r--r--  1 archive     47368 Feb 12  1992 zmodem-part4.shar

>usenet/comp.sources.unix/volume12/zmodem:
>total 62
>-r--r--r--  1 archive     22356 Oct 18  1987 part01.Z
>-r--r--r--  1 archive     12759 Oct 18  1987 part02.Z
>-r--r--r--  1 archive     27191 Oct 18  1987 part03.Z

The protocol documentation is in:

>doc/literary/obi/Standards/FileTransfer:
>total 109
>-rw-r--r--  1 archive     45213 Oct 28  1991 ZMODEM8.DOC.1.Z

In addition, the VAX Software list (see SOFTWARE-LIST, above) mentions the
following:

SZ Shell        SZ Shell gives the Z-Modem program SZ a host of new features
                including wildcards and various others.  Both SZ and RZ are
                provided in the archive.
                Availability:   F47

F47     AB20.LARC.NASA.GOV

and

ZMODEM          File transfer between Vax and Unix/PC/Amiga computers.
                Availability:   S14, F15

S14     MRCserv@Janus.MtRoyal.AB.CA
F15     WSMR-SIMTEL20.ARMY.MIL

F = FTP
S = Mail Server

That's all I could find with a quick look.

Dick Munroe
--------------------------------------------------------------------------------
,ZOO

>Subject: Where to get ZOO v2.10 for MS-DOS, Unix and VMS
>From: w8sdz@tacom-emh1.army.mil (Keith Petersen)
>
>It seems that no matter how often this information is posted, someone
>will ask for it again in 2 or 3 days.  PLEASE save this article!
>
>SIMTEL20:
>=========
>ZOO version 2.10 (needed for extracting files posted in Usenet
>newsgroup comp.binaries.ibm.pc) is available via anonymous FTP from
>WSMR-SIMTEL20.ARMY.MIL (192.88.110.20) or mirror sites OAK.Oakland.Edu
>(141.210.10.117), wuarchive.wustl.edu (128.252.135.4), ftp.uu.net
>(137.39.1.9), nic.funet.fi (128.214.6.100), src.doc.ic.ac.uk
>(146.169.3.7) or archie.au (139.130.4.6), by e-mail through the
>BITNET/EARN file servers, or by uucp from UUNET's 1-900-GOT-SRCS.
>See UUNET file uunet!~/info/archive-help for details.
>
>Garbo:
>======
>If you do not know how to go about getting this material, users
>are welcome to email ts@uwasa.fi (Timo Salmi) for the prerecorded
>garbo.uwasa.fi instructions (long, circa 29Kb).  North American users
>are referred to the garbo mirror on wuarchive.wustl.edu.  Australian
>users are referred to the archie.au mirror.  The mirrors may lag
>occasionally, or might not have all the files.  If you do not receive
>Timo's reply within five days, please ask your own site's system manager
>to construct a returnable mail path for you.
>
>Directory PD1:<MSDOS.ZOO>
> Filename   Type Length   Date    Description
>==============================================
>ZOO210.EXE    B   55721  910712  Dhesi's make/extract/view ZOO archives, 910712
>
> 73461 Jul 12  1991 garbo.uwasa.fi:/pc/arcers/zoo210.exe
>
>Directory PD8:<MISC.UNIX>
> Filename   Type Length   Date    Description
>==============================================
>ZOO210.TAR-Z  B  246115  910714  Dhesi's make/extract/view ZOO archives, C src
>
>237093 Aug  8  1991 garbo.uwasa.fi:/unix/arcers/zoo210.tar.Z
>
>Directory PD8:<MISC.VAXVMS>
> Filename   Type Length   Date    Description
>==============================================
>ZOO210.ARC    B  289193  910801  Dhesi's make/extract/view ZOO archives, C src
>
>289193 Jul  5  1991 garbo.uwasa.fi:/vms/arcers/zoo210.arc
>647168 Jun 24 13:42 garbo.uwasa.fi:/vms/arcers/zoo210.tar
>
>Keith
>--
>Keith Petersen
>Maintainer of the MSDOS, MISC and CP/M archives at SIMTEL20 [192.88.110.20]
>Internet: w8sdz@TACOM-EMH1.Army.Mil     or       w8sdz@vela.acs.oakland.edu
>Uucp: uunet!umich!vela!w8sdz                          BITNET: [email protected]
--
Dick Munroe				Internet: munroe@dmc.com
Doyle Munroe Consultants, Inc.		UUCP: ...uunet!thehulk!munroe
267 Cox St.				Office: (508) 568-1618
Hudson, Ma. USA				FAX: (508) 562-1133

GET CONNECTED!!! Send mail to info@dmc.com to find out about DMConnection.

Improving the Security of your UNIX System, by David A. Curry (April, 1990)

IMPROVING THE SECURITY OF YOUR UNIX SYSTEM

David A. Curry, Systems Programmer
Information and Telecommunications Sciences and Technology Division

ITSTD-721-FR-90-21

Approved:

Paul K. Hyder, Manager
Computer Facility

Boyd C. Fair, General Manager
Division Operations Section

Michael S. Frankel, Vice President
Information and Telecommunications Sciences and Technology Division

 Final Report - April 1990

SRI International  333 Ravenswood Avenue - Menlo Park, CA 94025-3493 - (415) 326-6200 - FAX: (415) 326-5512 - Telex: 334486

 SECTION 1
 INTRODUCTION

1.1   UNIX SECURITY

 The UNIX operating system, although now in widespread use in environments con-
 cerned about security, was not really designed with security in mind [Ritc75]. This does
 not mean that UNIX does not provide any security mechanisms; indeed, several very good
 ones are available. However, most ``out of the box'' installation procedures from com-
 panies such as Sun Microsystems still install the operating system in much the same way
 as it was installed 15 years ago: with little or no security enabled.

 The reasons for this state of affairs are largely historical. UNIX was originally
 designed by programmers for use by other programmers. The environment in which it
 was used was one of open cooperation, not one of privacy. Programmers typically colla-
 borated with each other on projects, and hence preferred to be able to share their files with
 each other without having to climb over security hurdles. Because the first sites outside of
 Bell Laboratories to install UNIX were university research laboratories, where a similar
 environment existed, no real need for greater security was seen until some time later.

 In the early 1980s, many universities began to move their UNIX systems out of the
 research laboratories and into the computer centers, allowing (or forcing) the user popula-
 tion as a whole to use this new and wonderful system. Many businesses and government
 sites began to install UNIX systems as well, particularly as desktop workstations became
 more powerful and affordable. Thus, the UNIX operating system is no longer being used
 only in environments where open collaboration is the goal. Universities require their stu-
 dents to use the system for class assignments, yet they do not want the students to be able
 to copy from each other. Businesses use their UNIX systems for confidential tasks such as
 bookkeeping and payroll. And the government uses UNIX systems for various unclassified
 yet sensitive purposes.

 To complicate matters, new features have been added to UNIX over the years, mak-
 ing security even more difficult to control. Perhaps the most problematic features are
 those relating to networking: remote login, remote command execution, network file sys-
 tems, diskless workstations, and electronic mail. All of these features have increased the
 utility and usability of UNIX by untold amounts. However, these same features, along
 with the widespread connection of UNIX systems to the Internet and other networks, have
 opened up many new areas of vulnerability to unauthorized abuse of the system.

---------------
UNIX is a registered trademark of AT&T. VAX is a trademark of Digital Equipment Corporation. Sun-3 and
NFS are trademarks of Sun Microsystems. Annex is a trademark of Xylogics, Inc.


1.2   THE INTERNET WORM

 On the evening of November 2, 1988, a self-replicating program, called a worm, was
 released on the Internet [Seel88, Spaf88, Eich89]. Overnight, this program had copied
 itself from machine to machine, causing the machines it infected to labor under huge
 loads, and denying service to the users of those machines. Although the program only
 infected two types of computers,* it spread quickly, as did the concern, confusion, and
 sometimes panic of system administrators whose machines were affected. While many
 system administrators were aware that something like this could theoretically happen - the
 security holes exploited by the worm were well known - the scope of the worm's break-
 ins came as a great surprise to most people.

 The worm itself did not destroy any files, steal any information (other than account
 passwords), intercept private mail, or plant other destructive software [Seel88]. However,
 it did manage to severely disrupt the operation of the network. Several sites, including
 parts of MIT, NASA's Ames Research Center and Goddard Space Flight Center, the Jet
 Propulsion Laboratory, and the U. S. Army Ballistic Research Laboratory, disconnected
 themselves from the Internet to avoid recontamination. In addition, the Defense Commun-
 ications Agency ordered the connections between the MILNET and ARPANET shut down,
 and kept them down for nearly 24 hours [Eich89, Elme88]. Ironically, this was perhaps
 the worst thing to do, since the first fixes to combat the worm were distributed via the net-
 work [Eich89].

 This incident was perhaps the most widely described computer security problem ever.
 The worm was covered in many newspapers and magazines around the country including
 the New York Times, Wall Street Journal, Time and most computer-oriented technical
 publications, as well as on all three major television networks, the Cable News Network,
 and National Public Radio. In January 1990, a United States District Court jury found
 Robert Tappan Morris, the author of the worm, guilty of charges brought against him
 under a 1986 federal computer fraud and abuse law. Morris faces up to five years in
 prison and a $250,000 fine [Schu90]. Sentencing is scheduled for May 4, 1990.

1.3   SPIES AND ESPIONAGE

 In August 1986, the Lawrence Berkeley Laboratory, an unclassified research labora-
 tory at the University of California at Berkeley, was attacked by an unauthorized computer
 intruder [Stol88, Stol89]. Instead of immediately closing the holes the intruder was using,
 the system administrator, Clifford Stoll, elected to watch the intruder and document the
 weaknesses he exploited. Over the next 10 months, Stoll watched the intruder attack over
 400 computers around the world, and successfully enter about 30. The computers broken
 into were located at universities, military bases, and defense contractors [Stol88].
---------------

 * Sun-3 systems from Sun Microsystems and VAX systems from Digital Equipment Corp., both running vari-
ants of 4.x BSD UNIX from the University of California at Berkeley.


 Unlike many intruders seen on the Internet, who typically enter systems and browse
 around to see what they can, this intruder was looking for something specific. Files and
 data dealing with the Strategic Defense Initiative, the space shuttle, and other military
 topics all seemed to be of special interest. Although it is unlikely that the intruder would
 have found any truly classified information (the Internet is an unclassified network), it was
 highly probable that he could find a wealth of sensitive material [Stol88].

 After a year of tracking the intruder (eventually involving the FBI, CIA, National
 Security Agency, Air Force Intelligence, and authorities in West Germany), five men in
 Hannover, West Germany were arrested. In March 1989, the five were charged with
 espionage: they had been selling the material they found during their exploits to the KGB.
 One of the men, Karl Koch (``Hagbard''), was later found burned to death in an isolated
 forest outside Hannover. No suicide note was found [Stol89]. In February 1990, three of
 the intruders (Markus Hess, Dirk Bresinsky, and Peter Carl) were convicted of espionage
 in a German court and sentenced to prison terms, fines, and the loss of their rights to par-
 ticipate in elections [Risk90]. The last of the intruders, Hans Hübner (``Pengo''), still
 faces trial in Berlin.

1.4   OTHER BREAK-INS

 Numerous other computer security problems have occurred in recent years, with vary-
 ing levels of publicity. Some of the more widely known incidents include break-ins on
 NASA's SPAN network [McLe87], the IBM ``Christmas Virus'' [Risk87], a virus at Mitre
 Corp. that caused the MILNET to be temporarily isolated from other networks [Risk88], a
 worm that penetrated DECNET networks [Risk89a], break-ins on U. S. banking networks
 [Risk89b], and a multitude of viruses, worms, and trojan horses affecting personal com-
 puter users.

1.5   SECURITY IS IMPORTANT

 As the previous stories demonstrate, computer security is an important topic. This
 document describes the security features provided by the UNIX operating system, and how
 they should be used. The discussion centers around version 4.x of SunOS, the version of
 UNIX sold by Sun Microsystems. Most of the information presented applies equally well
 to other UNIX systems. Although there is no way to make a computer completely secure
 against unauthorized use (other than to lock it in a room and turn it off), by following the
 instructions in this document you can make your system impregnable to the ``casual'' sys-
 tem cracker,* and make it more difficult for the sophisticated cracker to penetrate.
---------------

 * The term ``hacker,'' as applied to computer users, originally had an honorable connotation: ``a person who
enjoys learning the details of programming systems and how to stretch their capabilities - as opposed to most
users of computers, who prefer to learn only the minimum amount necessary'' [Stee88]. Unfortunately, the
media has distorted this definition and given it a dishonorable meaning. In deference to the true hackers, we
will use the term ``cracker'' throughout this document.


 SECTION 2
 IMPROVING SECURITY

 UNIX system security can be divided into three main areas of concern. Two of these
 areas, account security and network security, are primarily concerned with keeping unau-
 thorized users from gaining access to the system. The third area, file system security, is
 concerned with preventing unauthorized access, either by legitimate users or crackers, to
 the data stored in the system. This section describes the UNIX security tools provided to
 make each of these areas as secure as possible.

2.1   ACCOUNT SECURITY

 One of the easiest ways for a cracker to get into a system is by breaking into
 someone's account. This is usually easy to do, since many systems have old accounts
 whose users have left the organization, accounts with easy-to-guess passwords, and so on.
 This section describes methods that can be used to avoid these problems.

2.1.1   Passwords

 The password is the most vital part of UNIX account security. If a cracker can dis-
 cover a user's password, he can then log in to the system and operate with all the capabil-
 ities of that user. If the password obtained is that of the super-user, the problem is more
 serious: the cracker will have read and write access to every file on the system. For this
 reason, choosing secure passwords is extremely important.

 The UNIX passwd program [Sun88a, 379] places very few restrictions on what may
 be used as a password. Generally, it requires that passwords contain five or more lower-
 case letters, or four characters if a nonalphabetic or uppercase letter is included. However,
 if the user ``insists'' that a shorter password be used (by entering it three times), the pro-
 gram will allow it. No checks for obviously insecure passwords (see below) are per-
 formed. Thus, it is incumbent upon the system administrator to ensure that the passwords
 in use on the system are secure.

 In [Morr78], the authors describe experiments conducted to determine typical users'
 habits in the choice of passwords. In a collection of 3,289 passwords, 16% of them con-
 tained three characters or less, and an astonishing 86% were what could generally be
 described as insecure. Additional experiments in [Gram84] show that by trying three sim-
 ple guesses on each account - the login name, the login name in reverse, and the two con-
 catenated together - a cracker can expect to obtain access to between 8 and 30 percent of
 the accounts on a typical system. A second experiment showed that by trying the 20 most
 common female first names, followed by a single digit (a total of 200 passwords), at least
 one password was valid on each of several dozen machines surveyed. Further experimen-
 tation by the author has found that by trying variations on the login name, user's first and


 last names, and a list of nearly 1800 common first names, up to 50 percent of the pass-
 words on any given system can be cracked in a matter of two or three days.

2.1.1.1   Selecting Passwords

 The object when choosing a password is to make it as difficult as possible for a
 cracker to make educated guesses about what you've chosen. This leaves him no alterna-
 tive but a brute-force search, trying every possible combination of letters, numbers, and
 punctuation. A search of this sort, even conducted on a machine that could try one mil-
 lion passwords per second (most machines can try less than one hundred per second),
 would require, on the average, over one hundred years to complete. With this as our goal,
 and by using the information in the preceding text, a set of guidelines for password selec-
 tion can be constructed:

 - Don't use your login name in any form (as-is, reversed, capitalized, doubled,
 etc.).

 - Don't use your first or last name in any form.

 - Don't use your spouse's or child's name.

 - Don't use other information easily obtained about you. This includes license
 plate numbers, telephone numbers, social security numbers, the brand of your
 automobile, the name of the street you live on, etc.

 - Don't use a password of all digits, or all the same letter. This significantly
 decreases the search time for a cracker.

 - Don't use a word contained in (English or foreign language) dictionaries, spel-
 ling lists, or other lists of words.

 - Don't use a password shorter than six characters.

 - Do use a password with mixed-case alphabetics.

 - Do use a password with nonalphabetic characters, e.g., digits or punctuation.

 - Do use a password that is easy to remember, so you don't have to write it
 down.

 - Do use a password that you can type quickly, without having to look at the key-
 board. This makes it harder for someone to steal your password by watching
 over your shoulder.

 Although this list may seem to restrict passwords to an extreme, there are several
 methods for choosing secure, easy-to-remember passwords that obey the above rules.
 Some of these include the following:

 - Choose a line or two from a song or poem, and use the first letter of each word.
 For example, ``In Xanadu did Kubla Khan a stately pleasure dome decree''
 becomes ``IXdKKaspdd.''

 - Alternate between one consonant and one or two vowels, up to eight characters.
 This provides nonsense words that are usually pronounceable, and thus easily


 remembered. Examples include ``routboo,'' ``quadpop,'' and so on.

 - Choose two short words and concatenate them together with a punctuation char-
 acter between them. For example: ``dog;rain,'' ``book+mug,'' ``kid?goat.''

 The importance of obeying these password selection rules cannot be overemphasized.
 The Internet worm, as part of its strategy for breaking into new machines, attempted to
 crack user passwords. First, the worm tried simple choices such as the login name, user's
 first and last names, and so on. Next, the worm tried each word present in an internal dic-
 tionary of 432 words (presumably Morris considered these words to be ``good'' words to
 try). If all else failed, the worm tried going through the system dictionary,
 /usr/dict/words, trying each word [Spaf88]. The password selection rules above success-
 fully guard against all three of these strategies.
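
 The ``over one hundred years'' figure quoted earlier is easy to check. The
 following one-liner (a sketch, not part of the original report) assumes
 8-character passwords drawn from the 95 printable ASCII characters, one
 million guesses per second, and that on average half of the space must be
 searched before the password is found:

 % echo '95^8 / 2 / 1000000 / 3600 / 24 / 365' | bc
 105

 The result, roughly 105 years, agrees with the estimate given above.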

2.1.1.2   Password Policies

 Although asking users to select secure passwords will help improve security, by itself
 it is not enough. It is also important to form a set of password policies that all users must
 obey, in order to keep the passwords secure.

 First and foremost, it is important to impress on users the need to keep their pass-
 words in their minds only. Passwords should never be written down on desk blotters,
 calendars, and the like. Further, storing passwords in files on the computer must be prohi-
 bited. In either case, by writing the password down on a piece of paper or storing it in a
 file, the security of the user's account is totally dependent on the security of the paper or
 file, which is usually less than the security offered by the password encryption software.

 A second important policy is that users must never give out their passwords to others.
 Many times, a user feels that it is easier to give someone else his password in order to
 copy a file, rather than to set up the permissions on the file so that it can be copied.
 Unfortunately, by giving out the password to another person, the user is placing his trust
 in this other person not to distribute the password further, write it down, and so on.

 Finally, it is important to establish a policy that users must change their passwords
 from time to time, say twice a year. This is difficult to enforce on UNIX, since in most
 implementations, a password-expiration scheme is not available. However, there are ways
 to implement this policy, either by using third-party software or by sending a memo to the
 users requesting that they change their passwords.

 This set of policies should be printed and distributed to all current users of the sys-
 tem. It should also be given to all new users when they receive their accounts. The pol-
 icy usually carries more weight if you can get it signed by the most ``impressive'' person
 in your organization (e.g., the president of the company).


2.1.1.3   Checking Password Security

 The procedures and policies described in the previous sections, when properly imple-
 mented, will greatly reduce the chances of a cracker breaking into your system via a
 stolen account. However, as with all security measures, you as the system administrator
 must periodically check to be sure that the policies and procedures are being adhered to.
 One of the unfortunate truisms of password security is that, ``left to their own ways, some
 people will still use cute doggie names as passwords'' [Gram84].

 The best way to check the security of the passwords on your system is to use a
 password-cracking program much like a real cracker would use. If you succeed in crack-
 ing any passwords, those passwords should be changed immediately. There are a few
 freely available password cracking programs distributed via various source archive sites;
 these are described in more detail in Section 4. A fairly extensive cracking program is
 also available from the author. Alternatively, you can write your own cracking program,
 and tailor it to your own site. For a list of things to check for, see the list of guidelines
 above.

2.1.2   Expiration Dates

 Many sites, particularly those with a large number of users, typically have several old
 accounts lying around whose owners have since left the organization. These accounts are
 a major security hole: not only can they be broken into if the password is insecure, but
 because nobody is using the account anymore, it is unlikely that a break-in will be
 noticed.

 The simplest way to prevent unused accounts from accumulating is to place an
 expiration date on every account. These expiration dates should be near enough in the
 future that old accounts will be deleted in a timely manner, yet far enough apart that the
 users will not become annoyed. A good figure is usually one year from the date the
 account was installed. This tends to spread the expirations out over the year, rather than
 clustering them all at the beginning or end. The expiration date can easily be stored in the
 password file (in the full name field). A simple shell script can be used to periodically
 check that all accounts have expiration dates, and that none of the dates has passed.
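
 A minimal sketch of such a script is shown below. It is not part of the
 original report, and it assumes the site convention of ending the full name
 field of each password file entry with ``expires YYYYMMDD''; adjust the
 pattern to whatever convention your site actually adopts.

 #!/bin/sh
 # check-expire -- warn about accounts whose expiration date is missing
 # or has already passed.  The date is assumed to be the last word of
 # the full name (GECOS) field of /etc/passwd, in the form
 # "expires YYYYMMDD".
 today=`date +%Y%m%d`
 awk -F: '
     $5 ~ /expires [0-9]/ {
         n = split($5, w, " ")
         xdate = w[n]                 # last word of the full-name field
         if (xdate < today)
             printf("account %s expired on %s\n", $1, xdate)
         next
     }
     { printf("account %s has no expiration date\n", $1) }
 ' today="$today" /etc/passwd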

 On the first day of each month, any user whose account has expired should be con-
 tacted to be sure he is still employed by the organization, and that he is actively using the
 account. Any user who cannot be contacted, or who has not used his account recently,
 should be deleted from the system. If a user is unavailable for some reason (e.g., on vaca-
 tion) and cannot be contacted, his account should be disabled by replacing the encrypted
 password in the password file entry with an asterisk (*). This makes it impossible to log
 in to the account, yet leaves the account available to be re-enabled on the user's return.


2.1.3   Guest Accounts

 Guest accounts present still another security hole. By their nature, these accounts are
 rarely used, and are always used by people who should only have access to the machine
 for the short period of time they are guests. The most secure way to handle guest
 accounts is to install them on an as-needed basis, and delete them as soon as the people
 using them leave. Guest accounts should never be given simple passwords such as
 ``guest'' or ``visitor,'' and should never be allowed to remain in the password file when
 they are not being used.

2.1.4   Accounts Without Passwords

 Some sites have installed accounts with names such as ``who,'' ``date,'' ``lpq,'' and
 so on that execute simple commands. These accounts are intended to allow users to exe-
 cute these commands without having to log in to the machine. Typically these accounts
 have no password associated with them, and can thus be used by anyone. Many of the
 accounts are given a user id of zero, so that they execute with super-user permissions.

 The problem with these accounts is that they open potential security holes. By not
 having passwords on them, and by having super-user permissions, these accounts practi-
 cally invite crackers to try to penetrate them. Usually, if the cracker can gain access to
 the system, penetrating these accounts is simple, because each account executes a different
 command. If the cracker can replace any one of these commands with one of his own, he
 can then use the unprotected account to execute his program with super-user permissions.

 Simply put, accounts without passwords should not be allowed on any UNIX system.
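
 A quick periodic check for such accounts might look like the following
 (a sketch, not part of the original report). It simply prints every
 password file entry whose password field is empty; on Yellow Pages client
 machines the ``+'' line will also be listed, and can be ignored there.

 % awk -F: '$2 == "" { print $1 " has no password" }' /etc/passwd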

2.1.5   Group Accounts and Groups

 Group accounts have become popular at many sites, but are actually a break-in wait-
 ing to happen. A group account is a single account shared by several people, e.g., by all
 the collaborators on a project. As mentioned in the section on password security, users
 should not share passwords - the group account concept directly violates this policy. The
 proper way to allow users to share information, rather than giving them a group account to
 use, is to place these users into a group. This is done by editing the group file, /etc/group
 [Sun88a, 1390; Sun88b, 66], and creating a new group with the users who wish to colla-
 borate listed as members.

 A line in the group file looks like

 groupname:password:groupid:user1,user2,user3,...

 The groupname is the name assigned to the group, much like a login name. It may be the
 same as someone's login name, or different. The maximum length of a group name is
 eight characters. The password field is unused in BSD-derived versions of UNIX, and
 should contain an asterisk (*). The groupid is a number from 0 to 65535 inclusive.


 Generally, numbers below 10 are reserved for special purposes, but you may choose any
 unused number. The last field is a comma-separated (no spaces) list of the login names of
 the users in the group. If no login names are listed, then the group has no members. To
 create a group called ``hackers'' with Huey, Duey, and Louie as members, you would add
 a line such as this to the group file:

 hackers:*:100:huey,duey,louie

 After the group has been created, the files and directories the members wish to share
 can then be changed so that they are owned by this group, and the group permission bits
 on the files and directories can be set to allow sharing. Each user retains his own account,
 with his own password, thus protecting the security of the system.

 For example, to change Huey's ``programs'' directory to be owned by the new group
 and properly set up the permissions so that all members of the group may access it, the
 chgrp and chmod commands would be used as follows [Sun88a, 63-66]:

 # chgrp hackers ~huey/programs
 # chmod -R g+rw ~huey/programs

2.1.6   Yellow Pages

 The Sun Yellow Pages system [Sun88b, 349-374] allows many hosts to share pass-
 word files, group files, and other files via the network, while the files are stored on only a
 single host. Unfortunately, Yellow Pages also contains a few potential security holes.

 The principal way Yellow Pages works is to have a special line in the password or
 group file that begins with a ``+''. In the password file, this line looks like

 +::0:0:::

 and in the group file, it looks like

 +:

 These lines should only be present in the files stored on Yellow Pages client machines.
 They should not be present in the files on the Yellow Pages master machine(s). When a
 program reads the password or group file and encounters one of these lines, it goes
 through the network and requests the information it wants from the Yellow Pages server
 instead of trying to find it in the local file. In this way, the data does not have to be
 maintained on every host. Since the master machine already has all the information, there
 is no need for this special line to be present there.

 Generally speaking, the Yellow Pages service itself is reasonably secure. There are a
 few openings that a sophisticated (and dedicated) cracker could exploit, but Sun is rapidly
 closing these. The biggest problem with Yellow Pages is the ``+'' line in the password
 file. If the ``+'' is deleted from the front of the line, then this line loses its special Yellow
 Pages meaning. It instead becomes a regular password file line for an account with a null
 login name, no password, and user id zero (super-user). Thus, if a careless system


 administrator accidentally deletes the ``+'', the whole system is wide open to any attack.*

 Yellow Pages is too useful a service to suggest turning it off, although turning it off
 would make your system more secure. Instead, it is recommended that you read carefully
 the information in the Sun manuals in order to be fully aware of Yellow Pages' abilities
 and its limitations.
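
 One specific check worth automating (a sketch, not part of the original
 report) looks for the dangerous degenerate form of this line, that is, a
 password file entry whose login name is null but whose user id is zero:

 % awk -F: '$1 == "" && $3 == 0 { print "null-login uid-0 entry: " $0 }' /etc/passwd

 Any output at all from this command should be treated as an emergency.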

2.2   NETWORK SECURITY

 As trends toward internetworking continue, most sites will, if they haven't already,
 connect themselves to one of the numerous regional networks springing up around the
 country. Most of these regional networks are also interconnected, forming the Internet
 [Hind83, Quar86]. This means that the users of your machine can access other hosts and
 communicate with other users around the world. Unfortunately, it also means that other
 hosts and users from around the world can access your machine, and attempt to break into
 it.

 Before internetworking became commonplace, protecting a system from unauthorized
 access simply meant locking the machine in a room by itself. Now that machines are con-
 nected by networks, however, security is much more complex. This section describes the
 tools and methods available to make your UNIX networks as secure as possible.

2.2.1   Trusted Hosts

 One of the most convenient features of the Berkeley (and Sun) UNIX networking
 software is the concept of ``trusted'' hosts. The software allows the specification of other
 hosts (and possibly users) who are to be considered trusted - remote logins and remote
 command executions from these hosts will be permitted without requiring the user to enter
 a password. This is very convenient, because users do not have to type their password
 every time they use the network. Unfortunately, for the same reason, the concept of a
 trusted host is also extremely insecure.

 The Internet worm made extensive use of the trusted host concept to spread itself
 throughout the network [Seel88]. Many sites that had already disallowed trusted hosts did
 fairly well against the worm compared with those sites that did allow trusted hosts. Even
 though it is a security hole, there are some valid uses for the trusted host concept. This
 section describes how to properly implement the trusted hosts facility while preserving as
 much security as possible.

---------------

 * Actually, a line like this without a ``+'' is dangerous in any password file, regardless of whether Yellow
Pages is in use.


2.2.1.1   The hosts.equiv File

 The file /etc/hosts.equiv [Sun88a, 1397] can be used by the system administrator to
 indicate trusted hosts. Each trusted host is listed in the file, one host per line. If a user
 attempts to log in (using rlogin) or execute a command (using rsh) remotely from one of
 the systems listed in hosts.equiv, and that user has an account on the local system with the
 same login name, access is permitted without requiring a password.

 Provided adequate care is taken to allow only local hosts in the hosts.equiv file, a
 reasonable compromise between security and convenience can be achieved. Nonlocal
 hosts (including hosts at remote sites of the same organization) should never be trusted.
 Also, if there are any machines at your organization that are installed in ``public'' areas
 (e.g., terminal rooms) as opposed to private offices, you should not trust these hosts.

 On Sun systems, hosts.equiv is controlled with the Yellow Pages software. As
 distributed by Sun, the default hosts.equiv file contains a single line:

 +

 This indicates that every known host (i.e., the complete contents of the host file) should be
 considered a trusted host. This is totally incorrect and a major security hole, since hosts
 outside the local organization should never be trusted. A correctly configured hosts.equiv
 should never list any ``wildcard'' hosts (such as the ``+''); only specific host names
 should be used. When installing a new system from Sun distribution tapes, you should be
 sure to either replace the Sun default hosts.equiv with a correctly configured one, or delete
 the file altogether.
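
 A quick way to catch the wildcard entry (a sketch, not part of the original
 report) is to search hosts.equiv for a line consisting of a lone ``+'';
 note that ``+@netgroup'' entries are a different, legitimate construct.

 % grep -n '^+$' /etc/hosts.equiv

 Any output at all indicates an entry that should be removed or replaced
 with explicit host names.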

2.2.1.2   The .rhosts File

 The .rhosts file [Sun88a, 1397] is similar in concept and format to the hosts.equiv
 file, but allows trusted access only to specific host-user combinations, rather than to hosts
 in general.* Each user may create a .rhosts file in his home directory, and allow access to
 his account without a password. Most people use this mechanism to allow trusted access
 between accounts they have on systems owned by different organizations who do not trust
 each other's hosts in hosts.equiv. Unfortunately, this file presents a major security prob-
 lem: While hosts.equiv is under the system administrator's control and can be managed
 effectively, any user may create a .rhosts file granting access to whomever he chooses,
 without the system administrator's knowledge.

 The only secure way to manage .rhosts files is to completely disallow them on the
 system. The system administrator should check the system often for violations of this pol-
 icy (see Section 3.3.1.4). One possible exception to this rule is the ``root'' account; a
 .rhosts file may be necessary to allow network backups and the like to be completed.

---------------

 * Actually, hosts.equiv may be used to specify host-user combinations as well, but this is rarely done.
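
 A simple way to look for violations of the no-.rhosts policy (a sketch, not
 part of the original report; the path /home is only an example of where
 home directories might live on your system) is:

 # find /home -name .rhosts -print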


2.2.2   Secure Terminals

 Under newer versions of UNIX, the concept of a ``secure'' terminal has been intro-
 duced. Simply put, the super-user (``root'') may not log in on a nonsecure terminal, even
 with a password. (Authorized users may still use the su command to become super-user,
 however.) The file /etc/ttytab [Sun88a, 1478] is used to control which terminals are con-
 sidered secure.† A short excerpt from this file is shown below.

 console "/usr/etc/getty std.9600" sun off secure
 ttya "/usr/etc/getty std.9600" unknown off secure
 ttyb "/usr/etc/getty std.9600" unknown off secure
 ttyp0 none network off secure
 ttyp1 none network off secure
 ttyp2 none network off secure

 The keyword ``secure'' at the end of each line indicates that the terminal is considered
 secure. To remove this designation, simply edit the file and delete the ``secure'' keyword.
 After saving the file, type the command (as super-user):

 # kill -HUP 1

 This tells the init process to reread the ttytab file.

 The Sun default configuration for ttytab is to consider all terminals secure, including
 ``pseudo'' terminals used by the remote login software. This means that ``root'' may log
 in remotely from any host on the network. A more secure configuration would consider
 as secure only directly connected terminals, or perhaps only the console device. This is
 how file servers and other machines with disks should be set up.

 The most secure configuration is to remove the ``secure'' designation from all termi-
 nals, including the console device. This requires that those users with super-user authority
 first log in as themselves, and then become the super-user via the su command. It also
 requires the ``root'' password to be entered when rebooting in single-user mode, in order
 to prevent users from rebooting their desktop workstations and obtaining super-user
 access. This is how all diskless client machines should be set up.
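
 After editing the file and signalling init, the terminals still marked
 secure can be reviewed at a glance (a sketch, not part of the original
 report):

 % grep secure /etc/ttytab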

2.2.3   The Network File System

 The Network File System (NFS) [Sun88d] is designed to allow several hosts to share
 files over the network. One of the most common uses of NFS is to allow diskless works-
 tations to be installed in offices, while keeping all disk storage in a central location. As
 distributed by Sun, NFS has no security features enabled. This means that any host on the
 Internet may access your files via NFS, regardless of whether you trust them or not.

---------------

 † Under non-Sun versions of Berkeley UNIX, this file is called /etc/ttys.


 Fortunately, there are several easy ways to make NFS more secure. The more com-
 monly used methods are described in this section, and these can be used to make your
 files quite secure from unauthorized access via NFS. Secure NFS, introduced in SunOS
 Release 4.0, takes security one step further, using public-key encryption techniques to
 ensure authorized access. Discussion of secure NFS is deferred until Section 4.

2.2.3.1   The exports File

 The file /etc/exports [Sun88a, 1377] is perhaps one of the most important parts of
 NFS configuration. This file lists which file systems are exported (made available for
 mounting) to other systems. A typical exports file as installed by the Sun installation pro-
 cedure looks something like this:

 /usr
 /home
 /var/spool/mail
 #
 /export/root/client1 -access=client1,root=client1
 /export/swap/client1 -access=client1,root=client1
 #
 /export/root/client2 -access=client2,root=client2
 /export/swap/client2 -access=client2,root=client2

 The root= keyword specifies the list of hosts that are allowed to have super-user access to
 the files in the named file system. This keyword is discussed in detail in Section 2.2.3.3.
 The access= keyword specifies the list of hosts (separated by colons) that are allowed to
 mount the named file system. If no access= keyword is specified for a file system, any
 host anywhere on the network may mount that file system via NFS.

 Obviously, this presents a major security problem, since anyone who can mount your
 file systems via NFS can then peruse them at her leisure. Thus, it is important that all file
 systems listed in exports have an access= keyword associated with them. If you have
 only a few hosts which must mount a file system, you can list them individually in the
 file:

 /usr -access=host1:host2:host3:host4:host5

 However, the maximum number of hosts that can be listed this way is ten. To
 name more than ten hosts, the access= keyword also allows netgroups to be
 specified. Netgroups are described in
 the next section.

 After making any changes to the exports file, you should run the command

 # exportfs -a

 in order to make the changes take effect.
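
 It is also worth confirming what the server is actually exporting, and to
 whom. The showmount command will report this; in the sketch below (not
 part of the original report) ``server'' stands for the name of your NFS
 server:

 % showmount -e server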


2.2.3.2   The netgroup File

 The file /etc/netgroup [Sun88a, 1407] is used to define netgroups. This file is con-
 trolled by Yellow Pages, and must be rebuilt in the Yellow Pages maps whenever it is
 modified. Consider the following sample netgroup file:

 A_Group (servera,,) (clienta1,,) (clienta2,,)

 B_Group (serverb,,) (clientb1,,) (clientb2,,)

 AdminStaff (clienta1,mary,) (clientb3,joan,)

 AllSuns A_Group B_Group

 This file defines four netgroups, called A_Group, B_Group, AdminStaff, and AllSuns.
 The AllSuns netgroup is actually a ``super group'' containing all the members of the
 A_Group and B_Group netgroups.

 Each member of a netgroup is defined as a triple: (host, user, domain). Typically,
 the domain field is never used, and is simply left blank. If either the host or user field is
 left blank, then any host or user is considered to match. Thus the triple (host,,) matches
 any user on the named host, while the triple (,user,) matches the named user on any host.

 Netgroups are useful when restricting access to NFS file systems via the exports file.
 For example, consider this modified version of the file from the previous section:

 /usr -access=A_Group
 /home -access=A_Group:B_Group
 /var/spool/mail -access=AllSuns
 #
 /export/root/client1 -access=client1,root=client1
 /export/swap/client1 -access=client1,root=client1
 #
 /export/root/client2 -access=client2,root=client2
 /export/swap/client2 -access=client2,root=client2

 The /usr file system may now only be mounted by the hosts in the A_Group netgroup,
 that is, servera, clienta1, and clienta2. Any other host that tries to mount this file system
 will receive an ``access denied'' error. The /home file system may be mounted by any of
 the hosts in either the A_Group or B_Group netgroups. The /var/spool/mail file system
 is also restricted to these hosts, but in this example we used the ``super group'' called
 AllSuns.

 Generally, the best way to configure the netgroup file is to make a single netgroup
 for each file server and its clients, and then to make other super groups, such as AllSuns.
 This allows you the flexibility to specify the smallest possible group of hosts for each file
 system in /etc/exports.

 Netgroups can also be used in the password file to allow access to a given host to be
 restricted to the members of that group, and they can be used in the hosts.equiv file to


 centralize maintenance of the list of trusted hosts. The procedures for doing this are
 defined in more detail in the Sun manual.
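
 As noted at the start of this section, changes to the netgroup file take
 effect only after the Yellow Pages maps are rebuilt. On the Yellow Pages
 master this is typically done with something like the following (a sketch,
 not part of the original report):

 # cd /var/yp
 # make netgroup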

2.2.3.3   Restricting Super-User Access

 Normally, NFS translates the super-user id to a special id called ``nobody'' in order
 to prevent a user with ``root'' on a remote workstation from accessing other people's files.
 This is good for security, but sometimes a nuisance for system administration, since you
 cannot make changes to files as ``root'' through NFS.

 The exports file also allows you to grant super-user access to certain file systems for
 certain hosts by using the root= keyword. Following this keyword a colon-separated list
 of up to ten hosts may be specified; these hosts will be allowed to access the file system
 as ``root'' without having the user id converted to ``nobody.'' Netgroups may not be
 specified to the root= keyword.

 Granting ``root'' access to a host should not be done lightly. If a host has ``root''
 access to a file system, then the super-user on that host will have complete access to the
 file system, just as if you had given him the ``root'' password on the server. Untrusted
 hosts should never be given ``root'' access to NFS file systems.

2.2.4   FTP

 The File Transfer Protocol, implemented by the ftp and ftpd programs [Sun88a,
 195-201, 1632-1634], allows users to connect to remote systems and transfer files back
 and forth. Unfortunately, older versions of these programs also had several bugs in them
 that allowed crackers to break into a system. These bugs have been fixed by Berkeley,
 and new versions are available. If your ftpd* was obtained before December 1988, you
 should get a newer version (see Section 4).

 One of the more useful features of FTP is the ``anonymous'' login. This special
 login allows users who do not have an account on your machine to have restricted access
 in order to transfer files from a specific directory. This is useful if you wish to distribute
 software to the public at large without giving each person who wants the software an
 account on your machine. In order to securely set up anonymous FTP you should follow
 the specific instructions below:

 1. Create an account called ``ftp.'' Disable the account by placing an asterisk (*)
 in the password field. Give the account a special home directory, such as
 /usr/ftp or /usr/spool/ftp.

 2. Make the home directory owned by ``ftp'' and unwritable by anyone:

---------------

 * On Sun systems, ftpd is stored in the file /usr/etc/in.ftpd. On most other systems, it is called /etc/ftpd.


 # chown ftp ~ftp
 # chmod 555 ~ftp

 3. Make the directory ~ftp/bin, owned by the super-user and unwritable by anyone.
 Place a copy of the ls program in this directory:

 # mkdir ~ftp/bin
 # chown root ~ftp/bin
 # chmod 555 ~ftp/bin
 # cp -p /bin/ls ~ftp/bin
 # chmod 111 ~ftp/bin/ls

 4. Make the directory ~ftp/etc, owned by the super-user and unwritable by anyone.
 Place copies of the password and group files in this directory, with all the pass-
 word fields changed to asterisks (*). You may wish to delete all but a few of
 the accounts and groups from these files; the only account that must be present
 is ``ftp.''

 # mkdir ~ftp/etc
 # chown root ~ftp/etc
 # chmod 555 ~ftp/etc
 # cp -p /etc/passwd /etc/group ~ftp/etc
 # chmod 444 ~ftp/etc/passwd ~ftp/etc/group

 5. Make the directory ~ftp/pub, owned by ``ftp'' and world-writable. Users may
 then place files that are to be accessible via anonymous FTP in this directory:

 # mkdir ~ftp/pub
 # chown ftp ~ftp/pub
 # chmod 777 ~ftp/pub

 Because the anonymous FTP feature allows anyone to access your system (albeit in a
 very limited way), it should not be made available on every host on the network. Instead,
 you should choose one machine (preferably a server or standalone host) on which to allow
 this service. This makes monitoring for security violations much easier. If you allow
 people to transfer files to your machine (using the world-writable pub directory, described
 above), you should check often the contents of the directories into which they are allowed
 to write. Any suspicious files you find should be deleted.
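
 One way to make that periodic check easier (a sketch, not part of the
 original report; the path assumes the ftp home directory suggested in step
 1) is to list everything dropped into the public directory within the last
 week:

 # find /usr/spool/ftp/pub -mtime -7 -print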

2.2.4.1   Trivial FTP

 The Trivial File Transfer Protocol, TFTP, is used on Sun workstations (and others) to
 allow diskless hosts to boot from the network. Basically, TFTP is a stripped-down version
 of FTP - there is no user authentication, and the connection is based on the User
 Datagram Protocol instead of the Transmission Control Protocol. Because they are so
 stripped-down, many implementations of TFTP have security holes. You should check


 your hosts by executing the command sequence shown below.

 % tftp
 tftp> connect yourhost
 tftp> get /etc/motd tmp
 Error code 1: File not found
 tftp> quit
 %

 If your version does not respond with ``File not found,'' and instead transfers the file, you
 should replace your version of tftpd* with a newer one. In particular, versions of SunOS
 prior to release 4.0 are known to have this problem.

2.2.5   Mail

 Electronic mail is one of the main reasons for connecting to outside networks. On
 most versions of Berkeley-derived UNIX systems, including those from Sun, the sendmail
 program [Sun88a, 1758-1760; Sun88b, 441-488] is used to enable the receipt and delivery
 of mail. As with the FTP software, older versions of sendmail have several bugs that
 allow security violations. One of these bugs was used with great success by the Internet
 worm [Seel88, Spaf88]. The current version of sendmail from Berkeley is version 5.61,
 of January 1989. Sun is, as of this writing, still shipping version 5.59, which has a known
 security problem. They have, however, made a fixed version available. Section 4 details
 how to obtain these newer versions.

 Generally, with the exception of the security holes mentioned above, sendmail is rea-
 sonably secure when installed by most vendors' installation procedures. There are, how-
 ever, a few precautions that should be taken to ensure secure operation:

 1. Remove the ``decode'' alias from the aliases file (/etc/aliases or /usr/lib/aliases).

 2. If you create aliases that allow messages to be sent to programs, be absolutely
 sure that there is no way to obtain a shell or send commands to a shell from
 these programs.

 3. Make sure the ``wizard'' password is disabled in the configuration file,
 sendmail.cf. (Unless you modify the distributed configuration files, this
 shouldn't be a problem.)

 4. Make sure your sendmail does not support the ``debug'' command. This can be
 done with the following commands:


 * On Sun systems, tftpd is stored in the file /usr/etc/in.tftpd. On most other systems, it is called /etc/tftpd.


 % telnet localhost 25
 220 yourhost Sendmail 5.61 ready at 9 Mar 90 10:57:36 PST
 debug
 500 Command unrecognized
 quit
 %

 If your sendmail responds to the ``debug'' command with ``200 Debug set,''
 then you are vulnerable to attack and should replace your sendmail with a
 newer version.
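
 Precaution 1 above can be checked in a similar way. The following is only a sketch; the
 alias shown in the output is illustrative, and the alias file may be /etc/aliases or
 /usr/lib/aliases depending on your system:

 % grep decode /etc/aliases
 decode: "|/usr/bin/uudecode"

 If such a line is present, remove it or comment it out by placing a ``#'' at the start of
 the line, and then rebuild the alias database by running the newaliases command.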

 By following the procedures above, you can be sure that your mail system is secure.

2.2.6   Finger

 The ``finger'' service, provided by the finger program [Sun88a, 186-187], allows you
 to obtain information about a user such as her full name, home directory, last login time,
 and in some cases when she last received mail and/or read her mail. The fingerd program
 [Sun88a, 1625] allows users on remote hosts to obtain this information.

 A bug in fingerd was also exercised with success by the Internet worm [Seel88,
 Spaf88]. If your version of fingerd* is older than November 5, 1988, it should be
 replaced with a newer version. New versions are available from several of the sources
 described in Section 4.

2.2.7   Modems and Terminal Servers

 Modems and terminal servers (terminal switches, Annex boxes, etc.) present still
 another potential security problem. The main problem with these devices is one of
 configuration - misconfigured hardware can allow security breaches. Explaining how to
 configure every brand of modem and terminal server would require volumes. However,
 the following items should be checked for on any modems or terminal servers installed at
 your site:

 1. If a user dialed up to a modem hangs up the phone, the system should log him
 out. If it doesn't, check the hardware connections and the kernel configuration
 of the serial ports.

 2. If a user logs off, the system should force the modem to hang up. Again, check
 the hardware connections if this doesn't work.

 3. If the connection from a terminal server to the system is broken, the system
 should log the user off.

 * On Sun systems, fingerd is stored in /usr/etc/in.fingerd. On most other systems, it is called /etc/fingerd.


 4. If the terminal server is connected to modems, and the user hangs up, the termi-
 nal server should inform the system that the user has hung up.

 Most modem and terminal server manuals cover in detail how to properly connect
 these devices to your system. In particular you should pay close attention to the ``Carrier
 Detect,'' ``Clear to Send,'' and ``Request to Send'' connections.

2.2.8   Firewalls

 One of the newer ideas in network security is that of a firewall. Basically, a firewall
 is a special host that sits between your outside-world network connection(s) and your
 internal network(s). This host does not send out routing information about your internal
 network, and thus the internal network is ``invisible'' from the outside. In order to
 configure a firewall machine, the following considerations must be taken into account:

 1. The firewall does not advertise routes. This means that users on the internal
 network must log in to the firewall in order to access hosts on remote networks.
 Likewise, in order to log in to a host on the internal network from the outside, a
 user must first log in to the firewall machine. This is inconvenient, but more
 secure.

 2. All electronic mail sent by your users must be forwarded to the firewall machine
 if it is to be delivered outside your internal network. The firewall must receive
 all incoming electronic mail, and then redistribute it. This can be done either
 with aliases for each user or by using name server MX records.

 3. The firewall machine should not mount any file systems via NFS, or make any
 of its file systems available to be mounted.

 4. Password security on the firewall must be rigidly enforced.

 5. The firewall host should not trust any other hosts regardless of where they are.
 Furthermore, the firewall should not be trusted by any other host.

 6. Anonymous FTP and other similar services should only be provided by the
 firewall host, if they are provided at all.

 The purpose of the firewall is to prevent crackers from accessing other hosts on your
 network. This means, in general, that you must maintain strict and rigidly enforced secu-
 rity on the firewall, but the other hosts are less vulnerable, and hence security may be
 somewhat lax. But it is important to remember that the firewall is not a complete cure
 against crackers - if a cracker can break into the firewall machine, he can then try to
 break into any other host on your network.

2.3   FILE SYSTEM SECURITY

 The last line of defense against system crackers is the set of permissions offered by the file
 system. Each file or directory has three sets of permission bits associated with it: one set
 for the user who owns the file, one set for the users in the group with which the file is
 associated, and one set for all other users (the ``world'' permissions). Each set contains
 the same three permission bits, which control the following:

 read If set, the file or directory may be read. In the case of a directory, read
 access allows a user to see the contents of a directory (the names of the
 files contained therein), but not to access them.

 write If set, the file or directory may be written (modified). In the case of a
 directory, write permission implies the ability to create, delete, and rename
 files. Note that the ability to remove a file is not controlled by the per-
 missions on the file, but rather the permissions on the directory containing
 the file.

 execute If set, the file or directory may be executed (searched). In the case of a
 directory, execute permission implies the ability to access files contained
 in that directory.

 In addition, a fourth permission bit is available in each set of permissions. This bit
 has a different meaning in each set of permission bits:

 setuid If set in the owner permissions, this bit controls the ``set user id'' (setuid)
 status of a file. Setuid status means that when a program is executed, it
 executes with the permissions of the user owning the program, in addition
 to the permissions of the user executing the program. For example, send-
 mail is setuid ``root,'' allowing it to write files in the mail queue area,
 which normal users are not allowed to do. This bit is meaningless on
 nonexecutable files.

 setgid If set in the group permissions, this bit controls the ``set group id'' (setgid)
 status of a file. This behaves in exactly the same way as the setuid bit,
 except that the group id is affected instead. This bit is meaningless on
 non-executable files (but see below).

 sticky If set in the world permissions, the ``sticky'' bit tells the operating system
 to do special things with the text image of an executable file. It is mostly a
 holdover from older versions of UNIX, and has little if any use today. This
 bit is also meaningless on nonexecutable files (but see below).

2.3.1   Setuid Shell Scripts

 Shell scripts that have the setuid or setgid bits set on them are not secure, regardless of
how many safeguards are taken when writing them. There are numerous software packages
available that claim to make shell scripts secure, but none released so far has managed to
solve all of the problems.

 Setuid and setgid shell scripts should never be allowed on any UNIX system.


2.3.2   The Sticky Bit on Directories

 Newer versions of UNIX have attached a new meaning to the sticky bit. When this bit is
set on a directory, it means that users may not delete or rename other users' files in this direc-
tory. This is typically useful for the /tmp directory. Normally, /tmp is world-writable, ena-
bling any user to delete another user's files. By setting the sticky bit on /tmp, users may only
delete their own files from this directory.

 To set the sticky bit on a directory, use the command

 # chmod o+t directory
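
 For example, to set the sticky bit on /tmp and confirm that it took effect, you might use
 the commands below; a ``t'' in the final position of the permissions printed by ls
 indicates that the bit is set:

 # chmod o+t /tmp
 # ls -ld /tmp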

2.3.3   The Setgid Bit on Directories

 In SunOS 4.0, the setgid bit was also given a new meaning. Two rules can be used for
assigning group ownership to a file in SunOS:

 1. The System V mechanism, which says that a user's primary group id (the one listed
 in the password file) is assigned to any file he creates.

 2. The Berkeley mechanism, which says that the group id of a file is set to the group
 id of the directory in which it is created.

 If the setgid bit is set on a directory, the Berkeley mechanism is enabled. Otherwise, the
System V mechanism is enabled. Normally, the Berkeley mechanism is used; this mechanism
must be used if creating directories for use by more than one member of a group (see Section
2.1.5).

 To set the setgid bit on a directory, use the command

 # chmod g+s directory
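
 As an illustration, the following sketch creates a directory to be shared by the members
 of a hypothetical group named ``proj''; because the setgid bit is set, files created in the
 directory are assigned to the ``proj'' group regardless of the creator's primary group.
 The mode 2775 sets the setgid bit and makes the directory writable by its owner and the
 group, but not by the world:

 # mkdir /usr/proj
 # chgrp proj /usr/proj
 # chmod 2775 /usr/proj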

2.3.4   The umask Value

 When a file is created by a program, say a text editor or a compiler, it is typically
created with all permissions enabled. Since this is rarely desirable (you don't want other
users to be able to write your files), the umask value is used to modify the set of permissions
a file is created with. Simply put, while the chmod command [Sun88a, 65-66] specifies what
bits should be turned on, the umask value specifies what bits should be turned off.

 For example, the default umask on most systems is 022. This means that write permis-
sion for the group and world should be turned off whenever a file is created. If instead you
wanted to turn off all group and world permission bits, such that any file you created would
not be readable, writable, or executable by anyone except yourself, you would set your umask
to 077.
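
 As a quick illustration, the commands below show the effect of a umask of 077 on a
 newly created file. The touch command asks for mode 666 (read and write for everyone);
 the umask then turns off the group and world bits, leaving mode 600, so ls should show
 the file's permissions as -rw-------:

 % umask 077
 % touch example
 % ls -l example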

 The umask value is specified in the .cshrc or .profile files read by the shell using the
umask command [Sun88a, 108, 459]. The ``root'' account should have the line


 umask 022

in its /.cshrc file, in order to prevent the accidental creation of world-writable files owned by
the super-user.

2.3.5   Encrypting Files

 The standard UNIX crypt command [Sun88a, 95] is not at all secure. Although it is rea-
sonable to expect that crypt will keep the casual ``browser'' from reading a file, it will
present nothing more than a minor inconvenience to a determined cracker. Crypt implements
a one-rotor machine along the lines of the German Enigma (broken in World War II). The
methods of attack on such a machine are well known, and a sufficiently large file can usually
be decrypted in a few hours even without knowledge of what the file contains [Reed84]. In
fact, publicly available packages of programs designed to ``break'' files encrypted with crypt
have been around for several years.

 There are software implementations of another algorithm, the Data Encryption Standard
(DES), available on some systems. Although this algorithm is much more secure than crypt,
it has never been proven that it is totally secure, and many doubts about its security have been
raised in recent years.

 Perhaps the best thing to say about encrypting files on a computer system is this: if you
think you have a file whose contents are important enough to encrypt, then that file should not
be stored on the computer in the first place. This is especially true of systems with limited
security, such as UNIX systems and personal computers.

 It is important to note that UNIX passwords are not encrypted with the crypt program.
Instead, they are encrypted with a modified version of the DES that generates one-way encryp-
tions (that is, the password cannot be decrypted). When you log in, the system does not
decrypt your password. Instead, it encrypts your attempted password, and if this comes out to
the same result as encrypting your real password, you are allowed to log in.

2.3.6   Devices

 The security of devices is an important issue in UNIX. Device files (usually residing in
/dev) are used by various programs to access the data on the disk drives or in memory. If
these device files are not properly protected, your system is wide open to a cracker. The
entire list of devices is too long to go into here, since it varies widely from system to system.
However, the following guidelines apply to all systems:

 1. The files /dev/kmem, /dev/mem, and /dev/drum should never be readable by the
 world. If your system supports the notion of the ``kmem'' group (most newer sys-
 tems do) and utilities such as ps are setgid ``kmem,'' then these files should be
 owned by user ``root'' and group ``kmem,'' and should be mode 640. If your sys-
 tem does not support the notion of the ``kmem'' group, and utilities such as ps are
 setuid ``root,'' then these files should be owned by user ``root'' and mode 600.


 2. The disk devices, such as /dev/sd0a, /dev/rxy1b, etc., should be owned by user
 ``root'' and group ``operator,'' and should be mode 640. Note that each disk has
 eight partitions and two device files for each partition. Thus, the disk ``sd0'' would
 have the following device files associated with it in /dev:

 sd0a sd0e rsd0a rsd0e
 sd0b sd0f rsd0b rsd0f
 sd0c sd0g rsd0c rsd0g
 sd0d sd0h rsd0d rsd0h

 3. With very few exceptions, all other devices should be owned by user ``root.'' One
 exception is terminals, which are changed to be owned by the user currently logged
 in on them. When the user logs out, the ownership of the terminal is automatically
 changed back to ``root.''
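
 The ownership and modes described in item 1 can be set with commands along the
 following lines. This is only a sketch for a system that supports the ``kmem'' group; if
 yours does not, omit the chgrp command and use mode 600 instead, and note that whether
 /dev/drum exists also varies from system to system:

 # chown root /dev/kmem /dev/mem /dev/drum
 # chgrp kmem /dev/kmem /dev/mem /dev/drum
 # chmod 640 /dev/kmem /dev/mem /dev/drum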

2.4   SECURITY IS YOUR RESPONSIBILITY

 This section has detailed numerous tools for improving security provided by the UNIX
operating system. The most important thing to note about these tools is that although they are
available, they are typically not put to use in most installations. Therefore, it is incumbent on
you, the system administrator, to take the time and make the effort to enable these tools, and
thus to protect your system from unauthorized access.


 SECTION 3
 MONITORING SECURITY

 One of the most important tasks in keeping any computer system secure is monitoring
the security of the system. This involves examining system log files for unauthorized
accesses of the system, as well as monitoring the system itself for security holes. This section
describes the procedures for doing this. An additional part of monitoring security involves
keeping abreast of security problems found by others; this is described in Section 5.

3.1   ACCOUNT SECURITY

 Account security should be monitored periodically in order to check for two things: users
logged in when they ``shouldn't'' be (e.g., late at night, when they're on vacation, etc.), and
users executing commands they wouldn't normally be expected to use. The commands
described in this section can be used to obtain this information from the system.

3.1.1   The lastlog File

 The file /usr/adm/lastlog [Sun88a, 1485] records the most recent login time for each user
of the system. The message printed each time you log in, e.g.,

 Last login: Sat Mar 10 10:50:48 from spam.itstd.sri.c

uses the time stored in the lastlog file. Additionally, the last login time reported by the
finger command uses this time. Users should be told to carefully examine this time whenever
they log in, and to report unusual login times to the system administrator. This is an easy
way to detect account break-ins, since each user should remember the last time she logged
into the system.

3.1.2   The utmp and wtmp Files

 The file /etc/utmp [Sun88a, 1485] is used to record who is currently logged into the sys-
tem. This file can be displayed using the who command [Sun88a, 597]:

 % who
 hendra tty0c Mar 13 12:31
 heidari tty14 Mar 13 13:54
 welgem tty36 Mar 13 12:15
 reagin ttyp0 Mar 13 08:54 (aaifs.itstd.sri.)
 ghg ttyp1 Mar 9 07:03 (hydra.riacs.edu)
 compion ttyp2 Mar 1 03:01 (ei.ecn.purdue.ed)

For each user, the login name, terminal being used, login time, and remote host (if the user is
logged in via the network) are displayed.

 The file /usr/adm/wtmp [Sun88a, 1485] records each login and logout time for every
user. This file can also be displayed using the who command:

 % who /usr/adm/wtmp
 davy ttyp4 Jan 7 12:42 (annex01.riacs.ed)
  ttyp4 Jan 7 15:33
 davy ttyp4 Jan 7 15:33 (annex01.riacs.ed)
  ttyp4 Jan 7 15:35
 hyder ttyp3 Jan 8 09:07 (triceratops.itst)
  ttyp3 Jan 8 11:43

A line that contains a login name indicates the time the user logged in; a line with no login
name indicates the time that the terminal was logged off. Unfortunately, the output from this
command is rarely as simple as in the example above; if several users log in at once, the
login and logout times are all mixed together and must be matched up by hand using the ter-
minal name.

 The wtmp file may also be examined using the last command [Sun88a, 248]. This com-
mand sorts out the entries in the file, matching up login and logout times. With no argu-
ments, last displays all information in the file. By giving the name of a user or terminal, the
output can be restricted to the information about the user or terminal in question. Sample out-
put from the last command is shown below.

 % last
 davy ttyp3 intrepid.itstd.s Tue Mar 13 10:55 - 10:56 (00:00)
 hyder ttyp3 clyde.itstd.sri. Mon Mar 12 15:31 - 15:36 (00:04)
 reboot ~ Mon Mar 12 15:16
 shutdown ~ Mon Mar 12 15:16
 arms ttyp3 clyde0.itstd.sri Mon Mar 12 15:08 - 15:12 (00:04)
 hyder ttyp3 spam.itstd.sri.c Sun Mar 11 21:08 - 21:13 (00:04)
 reboot ~ Sat Mar 10 20:05
 davy ftp hydra.riacs.edu Sat Mar 10 13:23 - 13:30 (00:07)

For each login session, the user name, terminal used, remote host (if the user logged in via
the network), login and logout times, and session duration are shown. Additionally, the times
of all system shutdowns and reboots (generated by the shutdown and reboot commands
[Sun88a, 1727, 1765]) are recorded. Unfortunately, system crashes are not recorded. In
newer versions of the operating system, pseudo logins such as those via the ftp command are
also recorded; an example of this is shown in the last line of the sample output, above.

3.1.3   The acct File

 The file /usr/adm/acct [Sun88a, 1344-1345] records each execution of a command on the
system, who executed it, when, and how long it took. This information is logged each time a
command completes, but only if your kernel was compiled with the SYSACCT option enabled
(the option is enabled in some GENERIC kernels, but is usually disabled by default).


 The acct file can be displayed using the lastcomm command [Sun88a, 249]. With no
arguments, all the information in the file is displayed. However, by giving a command name,
user name, or terminal name as an argument, the output can be restricted to information about
the given command, user, or terminal. Sample output from lastcomm is shown below.

 % lastcomm
 sh S root __ 0.67 secs Tue Mar 13 12:45
 atrun root __ 0.23 secs Tue Mar 13 12:45
 lpd F root __ 1.06 secs Tue Mar 13 12:44
 lpr S burwell tty09 1.23 secs Tue Mar 13 12:44
 troff burwell tty09 12.83 secs Tue Mar 13 12:44
 eqn burwell tty09 1.44 secs Tue Mar 13 12:44
 df kindred ttyq7 0.78 secs Tue Mar 13 12:44
 ls kindred ttyq7 0.28 secs Tue Mar 13 12:44
 cat kindred ttyq7 0.05 secs Tue Mar 13 12:44
 stty kindred ttyq7 0.05 secs Tue Mar 13 12:44
 tbl burwell tty09 1.08 secs Tue Mar 13 12:44
 rlogin S jones ttyp3 5.66 secs Tue Mar 13 12:38
 rlogin F jones ttyp3 2.53 secs Tue Mar 13 12:41
 stty kindred ttyq7 0.05 secs Tue Mar 13 12:44

The first column indicates the name of the command. The next column displays certain flags
on the command: an ``F'' means the process spawned a child process, ``S'' means the process
ran with the set-user-id bit set, ``D'' means the process exited with a core dump, and ``X''
means the process was killed abnormally. The remaining columns show the name of the user
who ran the program, the terminal he ran it from (if applicable), the amount of CPU time used
by the command (in seconds), and the date and time the process started.

3.2   NETWORK SECURITY

 Monitoring network security is more difficult, because there are so many ways for a
cracker to attempt to break in. However, there are some programs available to aid you in this
task. These are described in this section.

3.2.1   The syslog Facility

 The syslog facility [Sun88a, 1773] is a mechanism that enables any command to log
error messages and informational messages to the system console, as well as to a log file.
Typically, error messages are logged in the file /usr/adm/messages along with the date, time,
name of the program sending the message, and (usually) the process id of the program. A
sample segment of the messages file is shown below.


 Mar 12 14:53:37 sparkyfs login: ROOT LOGIN ttyp3 FROM setekfs.itstd.sr
 Mar 12 15:18:08 sparkyfs login: ROOT LOGIN ttyp3 FROM setekfs.itstd.sr
 Mar 12 16:50:25 sparkyfs login: ROOT LOGIN ttyp4 FROM pongfs.itstd.sri
 Mar 12 16:52:20 sparkyfs vmunix: sd2c: read failed, no retries
 Mar 13 06:01:18 sparkyfs vmunix: /: file system full
 Mar 13 08:02:03 sparkyfs login: ROOT LOGIN ttyp4 FROM triceratops.itst
 Mar 13 08:28:52 sparkyfs su: davy on /dev/ttyp3
 Mar 13 08:38:03 sparkyfs login: ROOT LOGIN ttyp4 FROM triceratops.itst
 Mar 13 10:56:54 sparkyfs automount[154]: host aaifs not responding
 Mar 13 11:30:42 sparkyfs login: REPEATED LOGIN FAILURES ON ttyp3 FROM
  intrepid.itstd.s, daemon

Of particular interest in this sample are the messages from the login and su programs.
Whenever someone logs in as ``root,'' login logs this information. Generally, logging in as
``root'' directly, rather than using the su command, should be discouraged, as it is hard to
track which person is actually using the account. Once this ability has been disabled, as
described in Section 2.2.2, detecting a security violation becomes a simple matter of searching
the messages file for lines of this type.

 Login also logs any case of someone repeatedly trying to log in to an account and fail-
ing. After three attempts, login will refuse to let the person try anymore. Searching for these
messages in the messages file can alert you to a cracker attempting to guess someone's pass-
word.

 Finally, when someone uses the su command, either to become ``root'' or someone
else, su logs the success or failure of this operation. These messages can be used to check
for users sharing their passwords, as well as for a cracker who has penetrated one account and
is trying to penetrate others.
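
 A few simple grep commands are usually all that is needed to pull these messages out of
the log. The patterns below are based on the sample messages shown above, and the exact
wording may differ somewhat between systems:

 # grep "ROOT LOGIN" /usr/adm/messages
 # grep "REPEATED LOGIN FAILURES" /usr/adm/messages
 # grep "su:" /usr/adm/messages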

3.2.2   The showmount Command

 The showmount command [Sun88a, 1764] can be used on an NFS file server to display
the names of all hosts that currently have something mounted from the server. With no
options, the program simply displays a list of all the hosts. With the -a and -d options, the
output is somewhat more useful. The first option, -a, causes showmount to list all the host
and directory combinations. For example,

 bronto.itstd.sri.com:/usr/share
 bronto.itstd.sri.com:/usr/local.new
 bronto.itstd.sri.com:/usr/share/lib
 bronto.itstd.sri.com:/var/spool/mail
 cascades.itstd.sri.com:/sparky/a
 clyde.itstd.sri.com:/laser_dumps
 cm1.itstd.sri.com:/sparky/a
 coco0.itstd.sri.com:/sparky/a

There will be one line of output for each directory mounted by a host. With the -d option,
showmount displays a list of all directories that are presently mounted by some host.


 The output from showmount should be checked for two things. First, only machines
local to your organization should appear there. If you have set up proper netgroups as
described in Section 2.2.3, this should not be a problem. Second, only ``normal'' directories
should be mounted. If you find unusual directories being mounted, you should find out who
is mounting them and why - although it is probably innocent, it may indicate someone trying
to get around your security mechanisms.

3.3   FILE SYSTEM SECURITY

 Checking for security holes in the file system is another important part of making your
system secure. Primarily, you need to check for files that can be modified by unauthorized
users, files that can inadvertently grant users too many permissions, and files that can inadver-
tently grant access to crackers. It is also important to be able to detect unauthorized
modifications to the file system, and to recover from these modifications when they are made.

3.3.1   The find Command

 The find command [Sun88a, 183-185] is a general-purpose command for searching the
file system. Using various arguments, complex matching patterns based on a file's name,
type, mode, owner, modification time, and other characteristics can be constructed. The
names of files that are found using these patterns can then be printed out, or given as argu-
ments to other UNIX commands. The general format of a find command is

 % find directories options

where directories is a list of directory names to search (e.g., /usr), and options contains the
options to control what is being searched for. In general, for the examples in this section, you
will always want to search from the root of the file system (/), in order to find all files match-
ing the patterns presented.

 This section describes how to use find to search for four possible security problems that
were described in Section 2.

3.3.1.1   Finding Setuid and Setgid Files

 It is important to check the system often for unauthorized setuid and setgid programs.
Because these programs grant special privileges to the user who is executing them, it is neces-
sary to ensure that insecure programs are not installed. Setuid ``root'' programs should be
closely guarded - a favorite trick of many crackers is to break into ``root'' once, and leave a
setuid program hidden somewhere that will enable them to regain super-user powers even if
the original hole is plugged.

 The command to search for setuid and setgid files is


 # find / -type f -a \( -perm -4000 -o -perm -2000 \) -print

The options to this command have the following meanings:

 / The name of the directory to be searched. In this case, we want to search the entire
 file system, so we specify /. You might instead restrict the search to /usr or
 /home.

 -type f
 Only examine files whose type is ``f,'' regular file. Other options include ``d'' for
 directory, ``l'' for symbolic link, ``c'' for character-special devices, and ``b'' for
 block-special devices.

 -a This specifies ``and.'' Thus, we want to know about files whose type is ``regular
 file,'' and whose permission bits match the other part of this expression.

 \( -perm -4000 -o -perm -2000 \)
 The parentheses in this part of the command are used for grouping. Thus, every-
 thing in this part of the command matches a single pattern, and is treated as the
 other half of the ``and'' clause described above.

 -perm -4000
 This specifies a match if the ``4000'' bit (specified as an octal number) is set
 in the file's permission modes. This is the set-user-id bit.

 -o This specifies ``or.'' Thus, we want to match if the file has the set-user-id bit
 or the set-group-id bit set.

 -perm -2000
 This specifies a match if the ``2000'' bit (specified as an octal number) is set
 in the file's permission modes. This is the set-group-id bit.

 -print This indicates that for any file that matches the specified expression (i.e., is a
 regular file and has the setuid or setgid bits set in its permissions), its name should be
 printed on the screen.

 After executing this command (depending on how much disk space you have, it can take
anywhere from 15 minutes to a couple of hours to complete), you will have a list of files that
have setuid or setgid bits set on them. You should then examine each of these programs, and
determine whether they should actually have these permissions. You should be especially
suspicious of programs that are not in one of the directories (or a subdirectory) shown below.

 /bin
 /etc
 /usr/bin
 /usr/ucb
 /usr/etc

 One file distributed with SunOS, /usr/etc/restore, comes with the setuid bit set on
it; it should not be, because of a security hole. You should be sure to remove the setuid bit
from this program by executing the command


 # chmod u-s /usr/etc/restore

3.3.1.2   Finding World-Writable Files

 World-writable files, particularly system files, can be a security hole if a cracker gains
access to your system and modifies them. Additionally, world-writable directories are
dangerous, since they allow a cracker to add or delete files as he wishes. The find command
to find all world-writable files is

 # find / -perm -2 -print

In this case, we do not use the -type option to restrict the search, since we are interested in
directories and devices as well as files. The -2 specifies the world write bit (in octal).

 This list of files will be fairly long, and will include some files that should be world
writable. You should not be concerned if terminal devices in /dev are world writable. You
should also not be concerned about line printer error log files being world writable. Finally,
symbolic links may be world writable - the permissions on a symbolic link, although they
exist, have no meaning.
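
 If the symbolic links clutter the listing, they can be excluded by adding a test to the
command above; this is a minor variation rather than anything new:

 # find / -perm -2 ! -type l -print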

3.3.1.3   Finding Unowned Files

 Finding files that are owned by nonexistent users can often be a clue that a cracker has
gained access to your system. Even if this is not the case, searching for these files gives you
an opportunity to clean up files that should have been deleted when the user's account was
removed. The command to find unowned files is

 # find / -nouser -print

The -nouser option matches files that are owned by a user id not contained in the
/etc/passwd database. A similar option, -nogroup, matches files owned by nonexistent
groups. To find all files owned by nonexistent users or groups, you would use the -o option
as follows:

 # find / \( -nouser -o -nogroup \) -print

3.3.1.4   Finding .rhosts Files

 As mentioned in Section 2.2.1.2, users should be prohibited from having .rhosts files in
their accounts. To search for this, it is only necessary to search the parts of the file system
that contain home directories (i.e., you can skip / and /usr):

 # find /home -name .rhosts -print

The -name option indicates that the complete name of any file whose name matches .rhosts
should be printed on the screen.

3.3.2   Checklists

 Checklists can be a useful tool for discovering unauthorized changes made to system
directories. They aren't practical on file systems that contain users' home directories since
these change all the time. A checklist is a listing of all the files contained in a group of
directories: their sizes, owners, modification dates, and so on. Periodically, this information is
collected and compared with the information in the master checklist. Files that do not match
in all attributes can be suspected of having been changed.

 There are several utilities that implement checklists available from public software sites
(see Section 4). However, a simple utility can be constructed using only the standard UNIX
ls and diff commands.

 First, use the ls command [Sun88a, 285] to generate a master list. This is best done
immediately after installing the operating system, but can be done at any time provided you're
confident about the correctness of the files on the disk. A sample command is shown below.

 # ls -aslgR /bin /etc /usr > MasterChecklist

The file MasterChecklist now contains a complete list of all the files in these directories.
You will probably want to edit it and delete the lines for files you know will be changing
often (e.g., /etc/utmp, /usr/adm/acct). The MasterChecklist file should be stored somewhere
safe where a cracker is unlikely to find it (since he could otherwise just change the data in it):
either on a different computer system, or on magnetic tape.

 To search for changes in the file system, run the above ls command again, saving the
output in some other file, say CurrentList. Now use the diff command [Sun88a, 150] to com-
pare the two files:

 # diff MasterChecklist CurrentList

Lines that are only in the master checklist will be printed preceded by a ``<,'' and lines that
are only in the current list will be preceded by a ``>.'' If there is one line for a file, preceded
by a ``<,'' this means that the file has been deleted since the master checklist was created. If
there is one line for a file, preceded by a ``>,'' this means that the file has been created since
the master checklist was created. If there are two lines for a single file, one preceded by ``<''
and the other by ``>,'' this indicates that some attribute of the file has changed since the mas-
ter checklist was created.

 By carefully constructing the master checklist, and by remembering to update it periodi-
cally (you can replace it with a copy of CurrentList, once you're sure the differences between
the lists are harmless), you can easily monitor your system for unauthorized changes. The
software packages available from the public software distribution sites implement basically the
same scheme as the one here, but offer many more options for controlling what is examined
and reported.
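
 The comparison is easy to automate. The following is a minimal sketch of a script that
could be run periodically from cron; the file names and the mail recipient are only examples,
and remember that a copy of the master checklist kept on the local disk can itself be tampered
with, so the authoritative copy should still be stored elsewhere as described above:

 #!/bin/sh
 # Rebuild the current checklist and compare it with the master copy.
 ls -aslgR /bin /etc /usr > /usr/adm/CurrentList
 diff /usr/adm/MasterChecklist /usr/adm/CurrentList > /tmp/checklist.diff
 # Mail the differences to the administrator only if there are any.
 if test -s /tmp/checklist.diff
 then
     mail root < /tmp/checklist.diff
 fi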


3.3.3   Backups

 It is impossible to overemphasize the need for a good backup strategy. File system
backups not only protect you in the event of hardware failure or accidental deletions, but they
also protect you against unauthorized file system changes made by a cracker.

 A good backup strategy will dump the entire system at level zero (a ``full'' dump) at
least once a month. Partial (or ``incremental'') dumps should be done at least twice a week,
and ideally they should be done daily. The dump command [Sun88a, 1612-1614] is recom-
mended over other programs such as tar and cpio. This is because only dump is capable of
creating a backup that can be used to restore a disk to the exact state it was in when it was
dumped. The other programs do not take into account files deleted or renamed between
dumps, and do not handle some specialized database files properly.
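
 As an illustration, a level zero dump of the first disk partition to a local tape drive might
look like the command below. The device names are only examples and vary from system to
system; the ``u'' key updates the file /etc/dumpdates, which later incremental dumps use to
determine what has changed:

 # dump 0uf /dev/rst0 /dev/rsd0a

 Incremental dumps are made the same way, using a dump level greater than zero (for
example, ``5'' instead of ``0'').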

3.4   KNOW YOUR SYSTEM

 Aside from running large monitoring programs such as those described in the previous
sections, simple everyday UNIX commands can also be useful for spotting security violations.
By running these commands often, whenever you have a free minute (for example, while
waiting for someone to answer the phone), you will become used to seeing a specific pattern
of output. By being familiar with the processes normally running on your system, the times
different users typically log in, and so on, you can easily detect when something is out of the
ordinary.

3.4.1   The ps Command

 The ps command [Sun88a, 399-402] displays a list of the processes running on your sys-
tem. Ps has numerous options, too many to list here. Generally, however, for the purpose of
monitoring, the option string -alxww is the most useful.* On a Sun system running SunOS
4.0, you should expect to see at least the following:

 swapper, pagedaemon
 System programs that help the virtual memory system.

 /sbin/init
 The init process, which is responsible for numerous tasks, including bringing up
 login processes on terminals.

 portmap, ypbind, ypserv
 Parts of the Yellow Pages system.

 biod, nfsd, rpc.mountd, rpc.quotad, rpc.lockd
 Parts of the Network File System (NFS). If the system you are looking at is not a
 file server, the nfsd processes probably won't exist.

 * This is true for Berkeley-based systems. On System V systems, the option string -elf
 should be used instead.

 rarpd, rpc.bootparamd
 Part of the system that allows diskless clients to boot.

 Other commands you should expect to see are update (file system updater); getty (one
per terminal and one for the console); lpd (line printer daemon); inetd (Internet daemon, for
starting other network servers); sh and csh (the Bourne shell and C shell, one or more per
logged in user). In addition, if there are users logged in, you'll probably see invocations of
various compilers, text editors, and word processing programs.

3.4.2   The who and w Commands

 The who command, as mentioned previously, displays the list of users currently logged
in on the system. By running this periodically, you can learn at what times during the day
various users log in. Then, when you see someone logged in at a different time, you can
investigate and make sure that it's legitimate.

 The w command [Sun88a, 588] is somewhat of a cross between who and ps. Not only
does it show a list of who is presently logged in, but it also displays how long they have been
idle (gone without typing anything), and what command they are currently running.

3.4.3   The ls Command

 Simple as its function is, ls is actually very useful for detecting file system problems.
Periodically, you should use ls on the various system directories, checking for files that
shouldn't be there. Most of the time, these files will have just ``landed'' somewhere by
accident. However, by keeping a close watch on things, you will be able to detect a cracker
long before you might have otherwise.

 When using ls to check for oddities, be sure to use the -a option, which lists files
whose names begin with a period (.). Be particularly alert for files or directories named ``...'',
or ``..(space)'', which many crackers like to use. (Of course, remember that ``.'' and ``..'' are
supposed to be there.)
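
 The find command can also be used to hunt for these names throughout the file system.
The following is a simple sketch; note the trailing space in the second pattern:

 # find / \( -name '...' -o -name '.. ' \) -print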

3.5   KEEP YOUR EYES OPEN

 Monitoring for security breaches is every bit as important as preventing them in the first
place. Because it's virtually impossible to make a system totally secure, there is always the
chance, no matter how small, that a cracker will be able to gain access. Only by monitoring
can this be detected and remedied.


 SECTION 4
 SOFTWARE FOR IMPROVING SECURITY

 Because security is of great concern to many sites, a wealth of software has been
developed for improving the security of UNIX systems. Much of this software has been
developed at universities and other public institutions, and is available free for the asking.
This section describes how this software can be obtained, and mentions some of the more
important programs available.

4.1   OBTAINING FIXES AND NEW VERSIONS

 Several sites on the Internet maintain large repositories of public-domain and freely dis-
tributable software, and make this material available for anonymous FTP. This section
describes some of the larger repositories.

4.1.1   Sun Fixes on UUNET

 Sun Microsystems has contracted with UUNET Communications Services, Inc. to make
fixes for bugs in Sun software available via anonymous FTP. You can access these fixes by
using the ftp command [Sun88a, 195-201] to connect to the host ftp.uu.net. Then change into
the directory sun-fixes, and obtain a directory listing, as shown in the example below.


% ftp ftp.uu.net
Connected to uunet.UU.NET.
220 uunet FTP server (Version 5.93 Tue Mar 20 11:01:52 EST 1990) ready.
Name (ftp.uu.net:davy): anonymous
331 Guest login ok, send ident as password.
Password: (enter your electronic mail address here)
230 Guest login ok, access restrictions apply.
ftp> cd sun-fixes
250 CWD command successful.
ftp> dir
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 2258
-rw-r--r-- 1 38 22 4558 Aug 31 1989 README
-rw-r--r-- 1 38 22 484687 Dec 14 1988 ddn.tar.Z
-rw-r--r-- 1 38 22 140124 Jan 13 1989 gated.sun3.Z
-rwxr-xr-x 1 38 22 22646 Dec 14 1988 in.ftpd.sun3.Z
.....
.....
-rw-r--r-- 1 38 22 72119 Aug 31 1989 sendmail.sun3.Z
-rwxr-xr-x 1 38 22 99147 Aug 31 1989 sendmail.sun4.Z
-rw-r--r-- 1 38 22 3673 Jul 11 1989 wall.sun3.Z
-rw-r--r-- 1 38 22 4099 Jul 11 1989 wall.sun4.Z
-rwxr-xr-x 1 38 22 7955 Jan 18 1989 ypbind.sun3.Z
-rwxr-xr-x 1 38 22 9237 Jan 18 1989 ypbind.sun4.Z
226 Transfer complete.
1694 bytes received in 0.39 seconds (4.2 Kbytes/s)
ftp> quit
221 Goodbye.
%

The file README contains a brief description of what each file in this directory contains, and
what is required to install the fix.

4.1.2   Berkeley Fixes

 The University of California at Berkeley also makes fixes available via anonymous FTP;
these fixes pertain primarily to the current release of BSD UNIX (currently release 4.3). How-
ever, even if you are not running their software, these fixes are still important, since many
vendors (Sun, DEC, Sequent, etc.) base their software on the Berkeley releases.

 The Berkeley fixes are available for anonymous FTP from the host ucbarpa.berkeley.edu
in the directory 4.3/ucb-fixes. The file INDEX in this directory describes what each file con-
tains.

 Berkeley also distributes new versions of sendmail and named [Sun88a, 1758-1760,
1691-1692] from this machine. New versions of these commands are stored in the 4.3 direc-
tory, usually in the files sendmail.tar.Z and bind.tar.Z, respectively.


4.1.3   Simtel-20 and UUNET

 The two largest general-purpose software repositories on the Internet are the hosts
wsmr-simtel20.army.mil and ftp.uu.net.

 wsmr-simtel20.army.mil is a TOPS-20 machine operated by the U. S. Army at White
Sands Missile Range, New Mexico. The directory pd2:<unix-c> contains a large amount of
UNIX software, primarily taken from the comp.sources newsgroups. The file 000-master-
index.txt contains a master list and description of each piece of software available in the repo-
sitory. The file 000-intro-unix-sw.txt contains information on the mailing list used to
announce new software, and describes the procedures used for transferring files from the
archive with FTP.

 ftp.uu.net is operated by UUNET Communications Services, Inc., in Falls Church, Vir-
ginia. This company sells Internet and USENET access to sites all over the country (and inter-
nationally). The software posted to the following USENET source newsgroups is stored here,
in directories of the same name:

 comp.sources.games
 comp.sources.misc
 comp.sources.sun
 comp.sources.unix
 comp.sources.x

Numerous other distributions, such as all the freely distributable Berkeley UNIX source code,
Internet Request for Comments (RFCs), and so on are also stored on this machine.

4.1.4   Vendors

 Many vendors make fixes for bugs in their software available electronically, either via
mailing lists or via anonymous FTP. You should contact your vendor to find out if they offer
this service, and if so, how to access it. Some vendors that offer these services include Sun
Microsystems (see above), Digital Equipment Corp., the University of California at Berkeley
(see above), and Apple Computer.

4.2   THE NPASSWD COMMAND

 The npasswd command, developed by Clyde Hoover at the University of Texas at Aus-
tin, is intended to be a replacement for the standard UNIX passwd command [Sun88a, 379],
as well as the Sun yppasswd command [Sun88a, 611]. npasswd makes passwords more
secure by refusing to allow users to select insecure passwords. The following capabilities are
provided by npasswd:

 - Configurable minimum password length

 - Configurable to force users to use mixed case or digits and punctuation

 - Checking for ``simple'' passwords such as a repeated letter

 - Checking against the host name and other host-specific information

 - Checking against the login name, first and last names, and so on

 - Checking for words in various dictionaries, including the system dictionary.

 The npasswd distribution is available for anonymous FTP from emx.utexas.edu in the
directory pub/npasswd.

4.3   THE COPS PACKAGE

 COPS is a security tool for system administrators that checks for numerous common
security problems on UNIX systems, including many of the things described in this document.
COPS is a collection of shell scripts and C programs that can easily be run on almost any
UNIX variant. Among other things, it checks the following items and sends the results to the
system administrator:

 - Checks /dev/kmem and other devices for world read/writability.

 - Checks special/important files and directories for ``bad'' modes (world writable,
 etc.).

 - Checks for easily guessed passwords.

 - Checks for duplicate user ids, invalid fields in the password file, etc.

 - Checks for duplicate group ids, invalid fields in the group file, etc.

 - Checks all users' home directories and their .cshrc, .login, .profile, and .rhosts
 files for security problems.

 - Checks all commands in the /etc/rc files [Sun88a, 1724-1725] and cron files
 [Sun88a, 1606-1607] for world writability.

 - Checks for bad ``root'' paths, NFS file systems exported to the world, etc.

 - Includes an expert system that checks to see if a given user (usually ``root'') can be
 compromised, given that certain rules are true.

 - Checks for changes in the setuid status of programs on the system.

 The COPS package is available from the comp.sources.unix archive on ftp.uu.net, and
also from the repository on wsmr-simtel20.army.mil.

4.4   SUN C2 SECURITY FEATURES

 With the release of SunOS 4.0, Sun has included security features that allow the system
to operate at a higher level of security, patterned after the C2* classification. These features
can be installed as one of the options when installing the system from the distribution tapes.
The security features added by this option include

 - Audit trails that record all login and logout times, the execution of administrative
 commands, and the execution of privileged (setuid) operations.

 - A more secure password file mechanism (``shadow password file'') that prevents
 crackers from obtaining a list of the encrypted passwords.

 - DES encryption capability.

 - A (more) secure NFS implementation that uses public-key encryption to authenticate
 the users of the system and the hosts on the network, to be sure they really are who
 they claim to be.

These security features are described in detail in [Sun88c].

 * C2 is one of several security classifications defined by the National Computer Security
Center, and is described in [NCSC85], the ``orange book.''

4.5   KERBEROS

 Kerberos [Stei88] is an authentication system developed by the Athena Project at the
Massachusetts Institute of Technology. Kerberos is a third-party authentication service, which
is trusted by other network services. When a user logs in, Kerberos authenticates that user
(using a password), and provides the user with a way to prove her identity to other servers
and hosts scattered around the network.

 This authentication is then used by programs such as rlogin [Sun88a, 418-419] to allow
the user to log in to other hosts without a password (in place of the .rhosts file). The authen-
tication is also used by the mail system in order to guarantee that mail is delivered to the
correct person, as well as to guarantee that the sender is who he claims to be. NFS has also
been modified by M.I.T. to work with Kerberos, thereby making the system much more
secure.

 The overall effect of installing Kerberos and the numerous other programs that go with it
is to virtually eliminate the ability of users to ``spoof'' the system into believing they are
someone else. Unfortunately, installing Kerberos is very intrusive, requiring the modification
or replacement of numerous standard programs. For this reason, a source license is usually
necessary. There are plans to make Kerberos a part of 4.4BSD, to be released by the Univer-
sity of California at Berkeley sometime in 1990.


 SECTION 5
 KEEPING ABREAST OF THE BUGS

 One of the hardest things about keeping a system secure is finding out about the security
holes before a cracker does. To combat this, there are several sources of information you can
and should make use of on a regular basis.

5.1   THE COMPUTER EMERGENCY RESPONSE TEAM

 The Computer Emergency Response Team (CERT) was established in December 1988 by
the Defense Advanced Research Projects Agency to address computer security concerns of
research users of the Internet. It is operated by the Software Engineering Institute at
Carnegie-Mellon University. The CERT serves as a focal point for the reporting of security
violations, and the dissemination of security advisories to the Internet community. In addi-
tion, the team works with vendors of various systems in order to coordinate the fixes for secu-
rity problems.

 The CERT sends out security advisories to the cert-advisory mailing list whenever
appropriate. They also operate a 24-hour hotline that can be called to report security prob-
lems (e.g., someone breaking into your system), as well as to obtain current (and accurate)
information about rumored security problems.

 To join the cert-advisory mailing list, send a message to cert@cert.sei.cmu.edu and ask
to be added to the mailing list. Past advisories are available for anonymous FTP from the
host cert.sei.cmu.edu. The 24-hour hotline number is (412) 268-7090.

5.2   DDN MANAGEMENT BULLETINS

 The DDN Management Bulletin is distributed electronically by the Defense Data Net-
work (DDN) Network Information Center under contract to the Defense Communications
Agency. It is a means of communicating official policy, procedures, and other information of
concern to management personnel at DDN facilities.

 The DDN Security Bulletin is distributed electronically by the DDN SCC (Security Coor-
dination Center), also under contract to DCA, as a means of communicating information on
network and host security exposures, fixes, and concerns to security and management person-
nel at DDN facilities.

 Anyone may join the mailing lists for these two bulletins by sending a message to
nic@nic.ddn.mil and asking to be placed on the mailing lists.


5.3   SECURITY-RELATED MAILING LISTS

 There are several other mailing lists operated on the Internet that pertain directly or
indirectly to various security issues. Some of the more useful ones are described below.

5.3.1   Security

 The UNIX Security mailing list exists to notify system administrators of security prob-
lems before they become common knowledge, and to provide security enhancement informa-
tion. It is a restricted-access list, open only to people who can be verified as being principal
systems people at a site. Requests to join the list must be sent by either the site contact listed
in the Network Information Center's WHOIS database, or from the ``root'' account on one of
the major site machines. You must include the destination address you want on the list, an
indication of whether you want to be on the mail reflector list or receive weekly digests, the
electronic mail address and voice telephone number of the site contact if it isn't you, and the
name, address, and telephone number of your organization. This information should be sent
to security-request@cpd.com.

5.3.2   RISKS

 The RISKS digest is a component of the ACM Committee on Computers and Public Pol-
icy, moderated by Peter G. Neumann. It is a discussion forum on risks to the public in com-
puters and related systems, and along with discussing computer security and privacy issues,
has discussed such subjects as the Stark incident, the shooting down of the Iranian airliner in
the Persian Gulf (as it relates to the computerized weapons systems), problems in air and rail-
road traffic control systems, software engineering, and so on. To join the mailing list, send a
message to risks-request@csl.sri.com. This list is also available in the USENET newsgroup
comp.risks.

5.3.3   TCP-IP

 The TCP-IP list is intended to act as a discussion forum for developers and maintainers
of implementations of the TCP/IP protocol suite. It also discusses network-related security
problems when they involve programs providing network services, such as sendmail. To join
the TCP-IP list, send a message to tcp-ip-request@nic.ddn.mil. This list is also available in
the USENET newsgroup comp.protocols.tcp-ip.

5.3.4   SUN-SPOTS, SUN-NETS, SUN-MANAGERS

 The SUN-SPOTS, SUN-NETS, and SUN-MANAGERS lists are all discussion groups for
users and administrators of systems supplied by Sun Microsystems. SUN-SPOTS is a fairly
general list, discussing everything from hardware configurations to simple UNIX questions.
To subscribe, send a message to sun-spots-request@rice.edu. This list is also available in the
USENET newsgroup comp.sys.sun.

 SUN-NETS is a discussion list for items pertaining to networking on Sun systems. Much
of the discussion is related to NFS, Yellow Pages, and name servers. To subscribe, send a
message to sun-nets-request@umiacs.umd.edu.

 SUN-MANAGERS is a discussion list for Sun system administrators and covers all
aspects of Sun system administration. To subscribe, send a message to sun-managers-
request@eecs.nwu.edu.

5.3.5   VIRUS-L

 The VIRUS-L list is a forum for the discussion of computer virus experiences, protection
software, and related topics. The list is open to the public, and is implemented as a mail
reflector, not a digest. Most of the information is related to personal computers, although
some of it may be applicable to larger systems. To subscribe, send the line

 SUB VIRUS-L your full name

to the address listserv%lehiibm1.bitnet@mitvma.mit.edu.


 SECTION 6
 SUGGESTED READING

 This section suggests some alternate sources of information pertaining to the security and
administration of the UNIX operating system.

UNIX System Administration Handbook
Evi Nemeth, Garth Snyder, Scott Seebass
Prentice Hall, 1989, $26.95

 This is perhaps the best general-purpose book on UNIX system administration currently
 on the market. It covers Berkeley UNIX, SunOS, and System V. The 26 chapters and
 17 appendices cover numerous topics, including booting and shutting down the system,
 the file system, configuring the kernel, adding a disk, the line printer spooling system,
 Berkeley networking, sendmail, and uucp. Of particular interest are the chapters on
 running as the super-user, backups, and security.

UNIX Operating System Security
F. T. Grammp and R. H. Morris
AT&T Bell Laboratories Technical Journal
October 1984

 This is an excellent discussion of some of the more common security problems in
 UNIX and how to avoid them, written by two of Bell Labs' most prominent security
 experts.

Password Security: A Case History
Robert Morris and Ken Thompson
Communications of the ACM
November 1979

 An excellent discussion on the problem of password security, and some interesting
 information on how easy it is to crack passwords and why. This document is usually
 reprinted in most vendors' UNIX documentation.

On the Security of UNIX
Dennis M. Ritchie
May 1975

 A discussion on UNIX security from one of the original creators of the system. This
 document is usually reprinted in most vendors' UNIX documentation.

The Cuckoo's Egg
Clifford Stoll
Doubleday, 1989, $19.95


 An excellent story of Stoll's experiences tracking down the German crackers who were
 breaking into his systems and selling the data they found to the KGB. Written at a
 level that nontechnical users can easily understand.

System and Network Administration
Sun Microsystems
May, 1988

 Part of the SunOS documentation, this manual covers most aspects of Sun system
 administration, including security issues. A must for anyone operating a Sun system,
 and a pretty good reference for other UNIX systems as well.

Security Problems in the TCP/IP Protocol Suite
S. M. Bellovin
ACM Computer Communications Review
April, 1989

 An interesting discussion of some of the security problems with the protocols in use on
 the Internet and elsewhere. Most of these problems are far beyond the capabilities of
 the average cracker, but it is still important to be aware of them. This article is techni-
 cal in nature, and assumes familiarity with the protocols.

A Weakness in the 4.2BSD UNIX TCP/IP Software
Robert T. Morris
AT&T Bell Labs Computer Science Technical Report 117
February, 1985

 An interesting article from the author of the Internet worm, which describes a method
 that allows remote hosts to ``spoof'' a host into believing they are trusted. Again, this
 article is technical in nature, and assumes familiarity with the protocols.

Computer Viruses and Related Threats: A Management Guide
John P. Wack and Lisa J. Carnahan
National Institute of Standards and Technology
Special Publication 500-166

 This document provides a good introduction to viruses, worms, trojan horses, and so
 on, and explains how they work and how they are used to attack computer systems.
 Written for the nontechnical user, this is a good starting point for learning about these
 security problems. This document can be ordered for $2.50 from the U. S. Govern-
 ment Printing Office, document number 003-003-02955-6.

 46

 SECTION 7
 CONCLUSIONS

 Computer security is playing an increasingly important role in our lives as more and
more operations become computerized, and as computer networks become more widespread.
In order to protect your systems from snooping and vandalism by unauthorized crackers, it is
necessary to enable the numerous security features provided by the UNIX system.

 In this document, we have covered the major areas that can be made more secure:

 + Account security

 + Network security

 + File system security.

Additionally, we have discussed how to monitor for security violations, where to obtain
security-related software and bug fixes, and numerous mailing lists for finding out about secu-
rity problems that have been discovered.

 Many crackers are not interested in breaking into specific systems, but rather will break
into any system that is vulnerable to the attacks they know. Eliminating these well-known
holes and monitoring the system for other security problems will usually serve as adequate
defense against all but the most determined crackers. By using the procedures and sources
described in this document, you can make your system more secure.

 47

 48

 REFERENCES

[Eich89] Eichin, Mark W., and Jon A. Rochlis. With Microscope and Tweezers: An
 Analysis of the Internet Virus of November 1988. Massachusetts Institute of
 Technology. February 1989.

[Elme88] Elmer-DeWitt, Philip. `` `The Kid Put Us Out of Action.' '' Time, 132 (20):
 76, November 14, 1988.

[Gram84] Grampp, F. T., and R. H. Morris. ``UNIX Operating System Security.'' AT&T
 Bell Laboratories Technical Journal, 63 (8): 1649-1672, October 1984.

[Hind83] Hinden, R., J. Haverty, and A. Sheltzer. ``The DARPA Internet: Interconnect-
 ing Heterogeneous Computer Networks with Gateways.'' IEEE Computer
 Magazine, 16 (9): 33-48, September 1983.

[McLe87] McLellan, Vin. ``NASA Hackers: There's More to the Story.'' Digital Review,
 November 23, 1987, p. 80.

[Morr78] Morris, Robert, and Ken Thompson. ``Password Security: A Case History.''
 Communications of the ACM, 22 (11): 594-597, November 1979. Reprinted in
 UNIX System Manager's Manual, 4.3 Berkeley Software Distribution. Univer-
 sity of California, Berkeley. April 1986.

[NCSC85] National Computer Security Center. Department of Defense Trusted Computer
 System Evaluation Criteria, Department of Defense Standard DOD 5200.28-
 STD, December, 1985.

[Quar86] Quarterman, J. S., and J. C. Hoskins. ``Notable Computer Networks.'' Com-
 munications of the ACM, 29 (10): 932-971, October 1986.

[Reed84] Reeds, J. A., and P. J. Weinberger. ``File Security and the UNIX System Crypt
 Command.'' AT&T Bell Laboratories Technical Journal, 63 (8): 1673-1683,
 October 1984.

[Risk87] Forum on Risks to the Public in Computers and Related Systems. ACM Com-
 mittee on Computers and Public Policy, Peter G. Neumann, Moderator. Inter-
 net mailing list. Issue 5.73, December 13, 1987.

[Risk88] Forum on Risks to the Public in Computers and Related Systems. ACM Com-
 mittee on Computers and Public Policy, Peter G. Neumann, Moderator. Inter-
 net mailing list. Issue 7.85, December 1, 1988.

[Risk89a] Forum on Risks to the Public in Computers and Related Systems. ACM Com-
 mittee on Computers and Public Policy, Peter G. Neumann, Moderator. Inter-
 net mailing list. Issue 8.2, January 4, 1989.

[Risk89b] Forum on Risks to the Public in Computers and Related Systems. ACM Com-
 mittee on Computers and Public Policy, Peter G. Neumann, Moderator. Inter-
 net mailing list. Issue 8.9, January 17, 1989.

[Risk90] Forum on Risks to the Public in Computers and Related Systems. ACM Com-
 mittee on Computers and Public Policy, Peter G. Neumann, Moderator. Inter-
 net mailing list. Issue 9.69, February 20, 1990.

 49

[Ritc75] Ritchie, Dennis M. ``On the Security of UNIX.'' May 1975. Reprinted in
 UNIX System Manager's Manual, 4.3 Berkeley Software Distribution. Univer-
 sity of California, Berkeley. April 1986.

[Schu90] Schuman, Evan. ``Bid to Unhook Worm.'' UNIX Today!, February 5, 1990, p.
 1.

[Seel88] Seeley, Donn. A Tour of the Worm. Department of Computer Science,
 University of Utah. December 1988.

[Spaf88] Spafford, Eugene H. The Internet Worm Program: An Analysis. Technical
 Report CSD-TR-823. Department of Computer Science, Purdue University.
 November 1988.

[Stee88] Steele, Guy L. Jr., Donald R. Woods, Raphael A. Finkel, Mark R. Crispin,
 Richard M. Stallman, and Geoffrey S. Goodfellow. The Hacker's Dictionary.
 New York: Harper and Row, 1988.

[Stei88] Steiner, Jennifer G., Clifford Neuman, and Jeffrey L. Schiller. ``Kerberos: An
 Authentication Service for Open Network Systems.'' USENIX Conference
 Proceedings, Dallas, Texas, Winter 1988, pp. 203-211.

[Stol88] Stoll, Clifford. ``Stalking the Wily Hacker.'' Communications of the ACM, 31
 (5): 484-497, May 1988.

[Stol89] Stoll, Clifford. The Cuckoo's Egg. New York: Doubleday, 1989.

[Sun88a] Sun Microsystems. SunOS Reference Manual, Part Number 800-1751-10, May
 1988.

[Sun88b] Sun Microsystems. System and Network Administration, Part Number 800-
 1733-10, May 1988.

[Sun88c] Sun Microsystems. Security Features Guide, Part Number 800-1735-10, May
 1988.

[Sun88d] Sun Microsystems. ``Network File System: Version 2 Protocol Specification.''
 Network Programming, Part Number 800-1779-10, May 1988, pp. 165-185.

 50

 APPENDIX A - SECURITY CHECKLIST

 This checklist summarizes the information presented in this paper, and can be used to
verify that you have implemented everything described.  A few example commands for
spot-checking some of these items appear after the checklist.

Account Security
 [ ] Password policy developed and distributed to all users
 [ ] All passwords checked against obvious choices
 [ ] Expiration dates on all accounts
 [ ] No ``idle'' guest accounts
 [ ] All accounts have passwords or ``*'' in the password field
 [ ] No group accounts
 [ ] ``+'' lines in passwd and group checked if running Yellow Pages

Network Security
 [ ] hosts.equiv contains only local hosts, and no ``+''
 [ ] No .rhosts files in users' home directories
 [ ] Only local hosts in ``root'' .rhosts file, if any
 [ ] Only ``console'' labeled as ``secure'' in ttytab (servers only)
 [ ] No terminals labeled as ``secure'' in ttytab (clients only)
 [ ] No NFS file systems exported to the world
 [ ] ftpd version later than December, 1988
 [ ] No ``decode'' alias in the aliases file
 [ ] No ``wizard'' password in sendmail.cf
 [ ] No ``debug'' command in sendmail
 [ ] fingerd version later than November 5, 1988
 [ ] Modems and terminal servers handle hangups correctly

File System Security
 [ ] No setuid or setgid shell scripts
 [ ] Check all ``nonstandard'' setuid and setgid programs for security
 [ ] Setuid bit removed from /usr/etc/restore
 [ ] Sticky bits set on world-writable directories
 [ ] Proper umask value on ``root'' account
 [ ] Proper modes on devices in /dev

Backups
 [ ] Level 0 dumps at least monthly
 [ ] Incremental dumps at least bi-weekly
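
 A few of the items above can be spot-checked from the command line.  The commands
below are only a sketch using standard find(1) options; the /home path is an example
and should be replaced with wherever user home directories live on your system.

  # Find setuid and setgid files (check any ``nonstandard'' ones).
  find / \( -perm -4000 -o -perm -2000 \) -type f -print

  # Find world-writable directories that do not have the sticky bit set.
  find / -type d -perm -2 ! -perm -1000 -print

  # Find .rhosts files in users' home directories.
  find /home -name .rhosts -print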

 51

 This page intentionally left blank.
 Just throw it out.

 lii

 CONTENTS

1 INTRODUCTION ......................................................................................................... 1
1.1 UNIX Security ............................................................................................................... 1
1.2 The Internet Worm ....................................................................................................... 2
1.3 Spies and Espionage ..................................................................................................... 2
1.4 Other Break-Ins ............................................................................................................. 3
1.5 Security is Important..................................................................................................... 3

2 IMPROVING SECURITY ........................................................................................... 5
2.1 Account Security ........................................................................................................... 5
2.1.1 Passwords ...................................................................................................................... 5
2.1.1.1 Selecting Passwords ...................................................................................................... 6
2.1.1.2 Password Policies .......................................................................................................... 7
2.1.1.3 Checking Password Security ........................................................................................ 7
2.1.2 Expiration Dates ............................................................................................................ 8
2.1.3 Guest Accounts ............................................................................................................. 8
2.1.4 Accounts Without Passwords ....................................................................................... 9
2.1.5 Group Accounts and Groups ........................................................................................ 9
2.1.6 Yellow Pages................................................................................................................. 10
2.2 Network Security .......................................................................................................... 11
2.2.1 Trusted Hosts ................................................................................................................ 11
2.2.1.1 The hosts.equiv File ...................................................................................................... 11
2.2.1.2 The .rhosts File ............................................................................................................. 12
2.2.2 Secure Terminals........................................................................................................... 12
2.2.3 The Network File System ............................................................................................. 13
2.2.3.1 The exports File ............................................................................................................ 13
2.2.3.2 The netgroup File .......................................................................................................... 14
2.2.3.3 Restricting Super-User Access ..................................................................................... 16
2.2.4 FTP ................................................................................................................................. 16
2.2.4.1 Trivial FTP .................................................................................................................... 17
2.2.5 Mail ............................................................................................................................... 18
2.2.6 Finger............................................................................................................................. 19
2.2.7 Modems and Terminal Servers..................................................................................... 19
2.2.8 Firewalls ........................................................................................................................ 20
2.3 File System Security ..................................................................................................... 20
2.3.1 Setuid Shell Scripts ....................................................................................................... 21
2.3.2 The Sticky Bit on Directories....................................................................................... 22
2.3.3 The Setgid Bit on Directories........................................................ 22
2.3.4 The umask Value .......................................................................................................... 22
2.3.5 Encrypting Files ............................................................................................................ 23
2.3.6 Devices .......................................................................................................................... 23
2.4 Security Is Your Responsibility ................................................................................... 24

 iii

 CONTENTS (continued)

3 MONITORING SECURITY ..................................................................................25
3.1 Account Security .....................................................................................................25
3.1.1 The lastlog File .......................................................................................................25
3.1.2 The utmp and wtmp Files .......................................................................................25
3.1.3 The acct File ...........................................................................................................26
3.2 Network Security ....................................................................................................27
3.2.1 The syslog Facility ..................................................................................................27
3.2.2 The showmount Command .....................................................................................28
3.3 File System Security ...............................................................................................29
3.3.1 The find Command .................................................................................................29
3.3.1.1 Finding Setuid and Setgid Files .............................................................................29
3.3.1.2 Finding World-Writable Files .................................................................................31
3.3.1.3 Finding Unowned Files ...........................................................................................31
3.3.1.4 Finding .rhosts Files ................................................................................................31
3.3.2 Checklists ................................................................................................................32
3.3.3 Backups ...................................................................................................................33
3.4 Know Your System .................................................................................................33
3.4.1 The ps Command ....................................................................................................33
3.4.2 The who and w Commands ....................................................................................34
3.4.3 The ls Command .....................................................................................................34
3.5 Keep Your Eyes Open ............................................................................................34

4 SOFTWARE FOR IMPROVING SECURITY .....................................................35
4.1 Obtaining Fixes and New Versions .......................................................................35
4.1.1 Sun Fixes on UUNET ..............................................................................................35
4.1.2 Berkeley Fixes .........................................................................................................36
4.1.3 Simtel-20 and UUNET ............................................................................................37
4.1.4 Vendors ...................................................................................................................37
4.2 The npasswd Command ..........................................................................................37
4.3 The COPS Package ..................................................................................................38
4.4 Sun C2 Security Features .......................................................................................38
4.5 Kerberos ..................................................................................................................39

5 KEEPING ABREAST OF THE BUGS .................................................................41
5.1 The Computer Emergency Response Team ...........................................................41
5.2 DDN Management Bulletins ...................................................................................41
5.3 Security-Related Mailing Lists ...............................................................................42
5.3.1 Security ....................................................................................................................42
5.3.2 RISKS .......................................................................................................................42
5.3.3 TCP-IP ......................................................................................................................42

 iv

 CONTENTS (concluded)

5.3.4 SUN-SPOTS, SUN-NETS, SUN-MANAGERS ....................................................42
5.3.5 VIRUS-L ..................................................................................................................43

6 SUGGESTED READING ......................................................................................45

7 CONCLUSIONS .....................................................................................................47

REFERENCES ......................................................................................................................49

APPENDIX A - SECURITY CHECKLIST .........................................................................51

 v

 vi

The UNIX Socket Services

-=-=-=-=-=-=-=-
Socket Services
-=-=-=-=-=-=-=-

Disclaimer:

The author takes no responsibility for
the actions of people who have read
this text. Please distribute this text
file on your BBS, homepage, or FTP site,
and please do not change it or add to
it in any way whatsoever.

Port Number   Service Name   Protocol

7	      echo	     tcp
7	      echo	     udp
9	      discard	     tcp
9	      discard	     udp
11	      systat	     tcp
13	      daytime	     tcp
13	      daytime	     udp
15	      netstat	     tcp
17	      qotd	     tcp
17	      qotd	     udp
19	      chargen	     tcp
19	      chargen	     udp
20	      ftp-data	     tcp
21	      ftp	     tcp
23	      telnet	     tcp
25	      smtp	     tcp
37	      time	     tcp
37	      time	     udp
39	      rlp	     udp
42	      name	     tcp
42	      name	     udp
43	      whois	     tcp
53	      domain	     tcp
53	      domain	     udp
57	      mtp	     tcp
67	      bootp	     udp
69	      tftp	     udp
77	      rje	     tcp
79	      finger	     tcp
87	      link	     tcp
95	      hostnames	     tcp
102	      iso-tsap	     tcp
103	      dictionary     tcp
104	      x400-snd	     tcp
105	      csnet-ns	     tcp
109	      pop	     tcp
110	      pop3	     tcp
111	      portmap	     tcp
111	      portmap	     udp
113	      auth	     tcp
115	      sftp	     tcp
117	      path	     tcp
119	      nntp	     tcp
123	      ntp	     udp
137	      nbname	     udp
138	      nbdatagram     udp
139	      nbsession	     tcp
144	      NeWS	     tcp
153	      sgmp	     udp
158	      tcprepo	     tcp
161	      snmp	     udp
162	      snmp-trap	     udp
170	      print-srv	     tcp
175	      vmnet	     tcp
315	      load	     udp
400	      vmnet0	     tcp
500	      sytek	     udp
512	      exec	     tcp
512	      biff	     udp
513	      login	     tcp
513	      who	     udp
514	      shell	     tcp
514	      syslog	     udp
515	      printer	     tcp
517	      talk	     udp
518	      ntalk	     udp
520	      efs	     tcp
520	      route	     udp
525	      timed	     udp
526	      tempo	     tcp
530	      courier	     tcp
531	      conference     tcp
531	      rvd-control    udp
532	      netnews	     tcp
533	      netwall	     udp		
540	      uucp	     tcp
543	      klogin	     tcp
544	      kshell	     tcp
550	      new-rwho	     udp
556	      remotefs	     tcp
560	      rmonitor	     udp
561	      monitor	     udp
600	      garcon	     tcp
601	      maitrd	     tcp
602	      busboy	     tcp
700	      acctmaster     udp
701	      acctslave	     udp
702	      acct	     udp
703	      acctlogin	     udp
704	      acctprinter    udp
705	      acctinfo	     udp
706	      acctslave2     udp
707	      acctdisk	     udp
750	      kerberos	     tcp
750	      kerberos	     udp
751	      kerberos_master tcp
751	      kerberos_master udp
752	      passwd_server  udp
753	      userreg_server udp
754	      krb_prop	     tcp
888	      erlogin	     tcp
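
As a rough cross-check, most of the names in this table should also appear in
/etc/services on a UNIX host, and a BSD-style netstat will show which services
are actually active locally. For example:

  # Look up a service by name in /etc/services.
  grep telnet /etc/services

  # List all sockets; TCP listeners show up in the LISTEN state.
  netstat -a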

-=-=-=-=-=-=-=-=-=-=-
Relevation
[SeNSaTioN] Founder
sensation.ml.org
-=-=-=-=-=-=-=-=-=-=-

Improving the Security of Your UNIX System, by David A. Curry (April 1990)

 

                                                    Final Report + April 1990

                    IMPROVING THE SECURITY OF YOUR
		    UNIX SYSTEM

                    David A. Curry, Systems Programmer
                    Information and Telecommunications Sciences and
		    Technology Division

                    ITSTD-721-FR-90-21

                    Approved:

                    Paul K. Hyder, Manager
                    Computer Facility

                    Boyd C. Fair, General Manager
                    Division Operations Section

                    Michael S. Frankel, Vice President
                    Information and Telecommunications Sciences and
		    Technology Division

          SRI International  333 Ravenswood Avenue + Menlo Park, CA 94025 +
               (415) 326-6200 + FAX: (415) 326-5512 + Telex: 334486

                                      CONTENTS

          1       INTRODUCTION...........................................  1
          1.1     UNIX Security..........................................  1
          1.2     The Internet Worm......................................  2
          1.3     Spies and Espionage....................................  3
          1.4     Other Break-Ins........................................  4
          1.5     Security is Important..................................  4

          2       IMPROVING SECURITY.....................................  5
          2.1     Account Security.......................................  5
          2.1.1   Passwords..............................................  5
          2.1.1.1 Selecting Passwords....................................  6
          2.1.1.2 Password Policies......................................  8
          2.1.1.3 Checking Password Security.............................  8
          2.1.2   Expiration Dates.......................................  9
          2.1.3   Guest Accounts......................................... 10
          2.1.4   Accounts Without Passwords............................. 10
          2.1.5   Group Accounts and Groups.............................. 10
          2.1.6   Yellow Pages........................................... 11
          2.2     Network Security....................................... 12
          2.2.1   Trusted Hosts.......................................... 13
          2.2.1.1 The hosts.equiv File................................... 13
          2.2.1.2 The .rhosts File....................................... 14
          2.2.2   Secure Terminals....................................... 15
          2.2.3   The Network File System................................ 16
          2.2.3.1 The exports File....................................... 16
          2.2.3.2 The netgroup File...................................... 17
          2.2.3.3 Restricting Super-User Access.......................... 18
          2.2.4   FTP.................................................... 19
          2.2.4.1 Trivial FTP............................................ 20
          2.2.5   Mail................................................... 21
          2.2.6   Finger................................................. 22
          2.2.7   Modems and Terminal Servers............................ 23
          2.2.8   Firewalls.............................................. 23
          2.3     File System Security................................... 24
          2.3.1   Setuid Shell Scripts................................... 25
          2.3.2   The Sticky Bit on Directories.......................... 26
          2.3.3   The Setgid Bit on Directories.......................... 26
          2.3.4   The umask Value........................................ 27
          2.3.5   Encrypting Files....................................... 27
          2.3.6   Devices................................................ 28
          2.4     Security Is Your Responsibility........................ 29

          3       MONITORING SECURITY.................................... 31
          3.1     Account Security....................................... 31
          3.1.1   The lastlog File....................................... 31
          3.1.2   The utmp and wtmp Files................................ 31
          3.1.3   The acct File.......................................... 33
          3.2     Network Security....................................... 34

                                         iii

                                CONTENTS (concluded)

          3.2.1   The syslog Facility.................................... 34
          3.2.2   The showmount Command.................................. 35
          3.3     File System Security................................... 35
          3.3.1   The find Command....................................... 36
          3.3.1.1 Finding Setuid and Setgid Files........................ 36
          3.3.1.2 Finding World-Writable Files........................... 38
          3.3.1.3 Finding Unowned Files.................................. 38
          3.3.1.4 Finding .rhosts Files.................................. 39
          3.3.2   Checklists............................................. 39
          3.3.3   Backups................................................ 40
          3.4     Know Your System....................................... 41
          3.4.1   The ps Command......................................... 41
          3.4.2   The who and w Commands................................. 42
          3.4.3   The ls Command......................................... 42
          3.5     Keep Your Eyes Open.................................... 42

          4       SOFTWARE FOR IMPROVING SECURITY........................ 45
          4.1     Obtaining Fixes and New Versions....................... 45
          4.1.1   Sun Fixes on UUNET..................................... 45
          4.1.2   Berkeley Fixes......................................... 46
          4.1.3   Simtel-20 and UUNET.................................... 47
          4.1.4   Vendors................................................ 47
          4.2     The npasswd Command.................................... 48
          4.3     The COPS Package....................................... 48
          4.4     Sun C2 Security Features............................... 49
          4.5     Kerberos............................................... 50

          5       KEEPING ABREAST OF THE BUGS............................ 51
          5.1     The Computer Emergency Response Team................... 51
          5.2     DDN Management Bulletins............................... 51
          5.3     Security-Related Mailing Lists......................... 52
          5.3.1   Security............................................... 52
          5.3.2   RISKS.................................................. 52
          5.3.3   TCP-IP................................................. 53
          5.3.4   SUN-SPOTS, SUN-NETS, SUN-MANAGERS...................... 53
          5.3.5   VIRUS-L................................................ 53

          6       SUGGESTED READING...................................... 55

          7       CONCLUSIONS............................................ 57

          REFERENCES..................................................... 59

          APPENDIX A - SECURITY CHECKLIST................................ 63

                                         iv

                                       SECTION 1

                                     INTRODUCTION

          1.1   UNIX SECURITY

                 The UNIX operating system, although now in widespread  use
            in  environments  concerned  about  security,  was  not  really
            designed with security in mind [Ritc75].  This  does  not  mean
            that  UNIX  does  not  provide any security mechanisms; indeed,
            several very good ones are available.  However, most  ``out  of
            the  box''  installation  procedures from companies such as Sun
            Microsystems still install the operating  system  in  much  the
            same  way  as it was installed 15 years ago:  with little or no
            security enabled.

                 The reasons for this state of affairs are largely histori-
            cal.   UNIX  was  originally designed by programmers for use by
            other programmers.  The environment in which it  was  used  was
            one of open cooperation, not one of privacy.  Programmers typi-
            cally collaborated with each other on projects, and hence  pre-
            ferred  to be able to share their files with each other without
            having to climb over security hurdles.  Because the first sites
            outside  of  Bell  Laboratories to install UNIX were university
            research laboratories, where a similar environment existed,  no
            real need for greater security was seen until some time later.

                 In the early 1980s, many universities began to move  their
            UNIX systems out of the research laboratories and into the com-
            puter centers, allowing (or forcing) the user population  as  a
            whole  to  use  this new and wonderful system.  Many businesses
            and government sites began to install  UNIX  systems  as  well,
            particularly  as  desktop workstations became more powerful and
            affordable.  Thus, the UNIX operating system is no longer being
            used only in environments where open collaboration is the goal.
            Universities require their students to use the system for class
            assignments,  yet  they  do not want the students to be able to
            copy from each other.  Businesses use their  UNIX  systems  for
            confidential  tasks  such  as bookkeeping and payroll.  And the
            government uses UNIX systems for various unclassified yet  sen-
            sitive purposes.

                 To complicate matters, new features  have  been  added  to
            UNIX  over  the  years,  making security even more difficult to
            control.  Perhaps  the  most  problematic  features  are  those
          _________________________
          UNIX is a registered trademark of AT&T.  VAX is  a  trademark  of
          Digital  Equipment  Corporation.  Sun-3 and NFS are trademarks of
          Sun Microsystems.  Annex is a trademark of Xylogics, Inc.

                                          1

            relating to networking:  remote login,  remote  command  execu-
            tion,  network  file  systems, diskless workstations, and elec-
            tronic mail.  All of these features have increased the  utility
            and  usability  of UNIX by untold amounts.  However, these same
            features, along with the widespread connection of UNIX  systems
            to  the  Internet  and  other networks, have opened up many new
            areas of vulnerability to unauthorized abuse of the system.

          1.2   THE INTERNET WORM

                 On the evening of November  2,  1988,  a  self-replicating
            program,  called  a worm, was released on the Internet [Seel88,
            Spaf88, Eich89].  Overnight, this  program  had  copied  itself
            from  machine  to  machine, causing the machines it infected to
            labor under huge loads, and denying service  to  the  users  of
            those  machines.   Although the program only infected two types
            of computers,* it spread quickly, as did  the  concern,  confu-
            sion,  and  sometimes  panic  of  system  administrators  whose
            machines were affected.  While many system administrators  were
            aware that something like this could theoretically happen - the
            security holes exploited by the worm  were  well  known  -  the
            scope  of the worm's break-ins came as a great surprise to most
            people.

                 The worm itself did  not  destroy  any  files,  steal  any
            information  (other  than account passwords), intercept private
            mail, or plant other destructive software  [Seel88].   However,
            it did manage to severely disrupt the operation of the network.
            Several sites, including parts of  MIT,  NASA's  Ames  Research
            Center  and  Goddard  Space  Flight  Center, the Jet Propulsion
            Laboratory, and the U. S. Army Ballistic  Research  Laboratory,
            disconnected themselves from the Internet to avoid recontamina-
            tion.  In addition, the Defense Communications  Agency  ordered
            the  connections  between the MILNET and ARPANET shut down, and
            kept them down for nearly 24 hours  [Eich89,  Elme88].   Ironi-
            cally,  this was perhaps the worst thing to do, since the first
            fixes to combat the  worm  were  distributed  via  the  network
            [Eich89].

                 This incident was perhaps the most widely  described  com-
            puter  security  problem  ever.   The  worm was covered in many
            newspapers and magazines around the country including  the  New
            York  Times,  Wall  Street  Journal,  Time  and  most computer-
            oriented technical publications, as well as on all three  major
          _________________________
            * Sun-3 systems from Sun  Microsystems  and  VAX  systems  from
          Digital  Equipment  Corp.,  both running variants of 4.x BSD UNIX
          from the University of California at Berkeley.

                                          2

            television networks, the Cable News Network, and National  Pub-
            lic  Radio.   In  January  1990, a United States District Court
            jury found Robert Tappan Morris, the author of the worm, guilty
            of  charges  brought  against him under a 1986 federal computer
            fraud and abuse law.  Morris faces up to five years  in  prison
            and  a $250,000 fine [Schu90].  Sentencing is scheduled for May
            4, 1990.

          1.3   SPIES AND ESPIONAGE

                 In August  1986,  the  Lawrence  Berkeley  Laboratory,  an
            unclassified  research laboratory at the University of Califor-
            nia at Berkeley,  was  attacked  by  an  unauthorized  computer
            intruder  [Stol88, Stol89].  Instead of immediately closing the
            holes the intruder was using, the system  administrator,  Clif-
            ford  Stoll,  elected  to  watch  the intruder and document the
            weaknesses he  exploited.   Over  the  next  10  months,  Stoll
            watched  the  intruder  attack  over  400  computers around the
            world, and successfully enter about 30.  The  computers  broken
            into  were located at universities, military bases, and defense
            contractors [Stol88].

                 Unlike many intruders seen on the Internet, who  typically
            enter  systems  and  browse  around  to see what they can, this
            intruder was looking for something specific.   Files  and  data
            dealing  with the Strategic Defense Initiative, the space shut-
            tle, and other military topics all  seemed  to  be  of  special
            interest.  Although it is unlikely that the intruder would have
            found any truly classified  information  (the  Internet  is  an
            unclassified  network),  it  was  highly probable that he could
            find a wealth of sensitive material [Stol88].

                 After a year of tracking the intruder (eventually  involv-
            ing  the FBI, CIA, National Security Agency, Air Force Intelli-
            gence, and authorities in West Germany), five men in  Hannover,
            West  Germany  were  arrested.   In  March  1989, the five were
            charged with espionage:  they had  been  selling  the  material
            they  found  during their exploits to the KGB.  One of the men,
            Karl Koch (``Hagbard''), was later found burned to death in  an
            isolated  forest  outside  Hannover.  No suicide note was found
            [Stol89].  In February 1990, three  of  the  intruders  (Markus
            Hess,  Dirk  Bresinsky,  and  Peter  Carl)  were  convicted  of
            espionage in a German court  and  sentenced  to  prison  terms,
            fines, and the loss of their rights to participate in elections
            [Risk90].  The last of the intruders, Hans Hubner  (``Pengo''),
            still faces trial in Berlin.

                                          3

          1.4   OTHER BREAK-INS

                 Numerous other computer security problems have occurred in
            recent  years,  with  varying levels of publicity.  Some of the
            more widely known incidents include break-ins  on  NASA's  SPAN
            network [McLe87], the IBM ``Christmas Virus'' [Risk87], a virus
            at Mitre Corp. that caused the MILNET to  be  temporarily  iso-
            lated from other networks [Risk88], a worm that penetrated DEC-
            NET networks [Risk89a], break-ins on  U.  S.  banking  networks
            [Risk89b], and a multitude of viruses, worms, and trojan horses
            affecting personal computer users.

          1.5   SECURITY IS IMPORTANT

                 As the previous stories demonstrate, computer security  is
            an  important  topic.   This  document  describes  the security
            features provided by the UNIX operating system,  and  how  they
            should  be  used.  The discussion centers around version 4.x of
            SunOS, the version of UNIX sold by Sun Microsystems.   Most  of
            the  information  presented  applies equally well to other UNIX
            systems.  Although there is no way  to  make  a  computer  com-
            pletely  secure against unauthorized use (other than to lock it
            in a room and turn it off), by following  the  instructions  in
            this  document  you  can  make  your  system impregnable to the
            ``casual'' system cracker,* and make it more difficult for  the
            sophisticated cracker to penetrate.

          _________________________
            * The term ``hacker,'' as applied to computer users, originally
          had an honorable connotation:  ``a person who enjoys learning the
          details  of  programming  systems  and  how  to   stretch   their
          capabilities  - as opposed to most users of computers, who prefer
          to  learn  only  the   minimum   amount   necessary''   [Stee88].
          Unfortunately,  the media has distorted this definition and given
          it a dishonorable meaning.  In deference to the true hackers,  we
          will use the term ``cracker'' throughout this document.

                                          4

                                       SECTION 2

                                  IMPROVING SECURITY

                 UNIX system security can be divided into three main  areas
            of  concern.   Two of these areas, account security and network
            security, are primarily  concerned  with  keeping  unauthorized
            users  from gaining access to the system.  The third area, file
            system security,  is  concerned  with  preventing  unauthorized
            access,  either  by  legitimate  users or crackers, to the data
            stored in the system.  This section describes the UNIX security
            tools  provided to make each of these areas as secure as possi-
            ble.

          2.1   ACCOUNT SECURITY

                 One of the easiest ways for a cracker to get into a system
            is by breaking into someone's account.  This is usually easy to
            do, since many systems have old accounts whose users have  left
            the organization, accounts with easy-to-guess passwords, and so
            on.  This section describes methods that can be used  to  avoid
            these problems.

          2.1.1   Passwords

                 The password is the most vital part of UNIX account  secu-
            rity.  If a cracker can discover a user's password, he can then
            log in to the system and operate with all the  capabilities  of
            that user.  If the password obtained is that of the super-user,
            the problem is more serious:  the cracker will  have  read  and
            write  access  to  every  file on the system.  For this reason,
            choosing secure passwords is extremely important.

                 The UNIX passwd program [Sun88a, 379] places very few res-
            trictions  on  what  may  be used as a password.  Generally, it
            requires that passwords contain five or more lowercase letters,
            or  four  characters  if a nonalphabetic or uppercase letter is
            included.  However, if the  user  ``insists''  that  a  shorter
            password be used (by entering it three times), the program will
            allow it.  No checks  for  obviously  insecure  passwords  (see
            below)  are  performed.   Thus, it is incumbent upon the system
            administrator to ensure that the passwords in use on the system
            are secure.

                                          5

                 In [Morr78], the authors describe experiments conducted to
            determine typical users' habits in the choice of passwords.  In
            a collection of 3,289 passwords, 16% of  them  contained  three
            characters or less, and an astonishing 86% were what could gen-
            erally be described as  insecure.   Additional  experiments  in
            [Gram84]  show  that  by  trying  three  simple guesses on each
            account - the login name, the login name in  reverse,  and  the
            two  concatenated  together  -  a  cracker can expect to obtain
            access to between 8 and 30 percent of the accounts on a typical
            system.   A second experiment showed that by trying the 20 most
            common female first names, followed by a single digit (a  total
            of  200  passwords), at least one password was valid on each of
            several dozen machines surveyed.   Further  experimentation  by
            the  author  has  found  that by trying variations on the login
            name, user's first and last names, and a list  of  nearly  1800
            common  first  names, up to 50  percent of the passwords on any
            given system can be cracked in a matter of two or three days.

          2.1.1.1   Selecting Passwords

                 The object when choosing a password is to make it as  dif-
            ficult as possible for a cracker to make educated guesses about
            what you've chosen.  This  leaves  him  no  alternative  but  a
            brute-force   search,  trying  every  possible  combination  of
            letters, numbers, and punctuation.  A search of this sort, even
            conducted on a machine that could try one million passwords per
            second (most  machines  can  try  less  than  one  hundred  per
            second),  would require, on the average, over one hundred years
            to complete.  With this as our goal, and by using the  informa-
            tion  in  the  preceding text, a set of guidelines for password
            selection can be constructed:

                 +    Don't  use  your  login  name  in  any  form  (as-is,
                      reversed, capitalized, doubled, etc.).

                 +    Don't use your first or last name in any form.

                 +    Don't use your spouse's or child's name.

                 +    Don't use other  information  easily  obtained  about
                      you.   This includes license plate numbers, telephone
                      numbers, social security numbers, the brand  of  your
                      automobile, the name of the street you live on, etc.

                 +    Don't use a password of all digits, or all  the  same
                      letter.  This significantly decreases the search time
                      for a cracker.

                 +    Don't use a word contained  in  (English  or  foreign

                                          6

                      language)  dictionaries,  spelling  lists,  or  other
                      lists of words.

                 +    Don't use a password shorter than six characters.

                 +    Do use a password with mixed-case alphabetics.

                 +    Do use  a  password  with  nonalphabetic  characters,
                      e.g., digits or punctuation.

                 +    Do use a password that is easy to  remember,  so  you
                      don't have to write it down.

                 +    Do use a password that you can type quickly,  without
                      having to look at the keyboard.  This makes it harder
                      for someone to steal your password by  watching  over
                      your shoulder.

                 Although this list may seem to restrict  passwords  to  an
            extreme,  there  are several methods for choosing secure, easy-
            to-remember passwords that obey the above rules.  Some of these
            include the following:

                 +    Choose a line or two from a song or poem, and use the
                      first  letter of each word.  For example, ``In Xanadu
                       did Kubla  Khan  a  stately  pleasure  dome  decree''
                      becomes ``IXdKKaspdd.''

                 +    Alternate  between  one  consonant  and  one  or  two
                       vowels, up to eight characters.  This provides non-
                       sense words that are usually pronounceable, and thus
                       easily remembered.  Examples include ``routboo,''
                       ``quadpop,'' and so on (a small generator sketch
                       follows this list).

                 +    Choose two short words and concatenate them together
                       with a punctuation character between them.  For exam-
                       ple: ``dog;rain,'' ``book+mug,'' ``kid?goat.''
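
                  The consonant/vowel method is simple enough to automate.
             The following is only a sketch; it assumes the new awk (nawk),
             with its rand() function, is available, and the letter sets and
             eight-character length are arbitrary choices:

                     #!/bin/sh
                     # Sketch: print one pronounceable password by
                     # alternating consonants and vowels.
                     nawk 'BEGIN {
                         srand()
                         c = "bcdfghjklmnprstvwz"; v = "aeiou"
                         pw = ""
                         while (length(pw) < 8) {
                             pw = pw substr(c, int(rand() * length(c)) + 1, 1)
                             pw = pw substr(v, int(rand() * length(v)) + 1, 1)
                         }
                         print substr(pw, 1, 8)
                     }'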

                 The importance of obeying these password  selection  rules
            cannot  be  overemphasized.   The Internet worm, as part of its
            strategy for breaking into new  machines,  attempted  to  crack
            user  passwords.   First, the worm tried simple choices such as
            the login name, user's first and last names, and so on.   Next,
            the  worm  tried each word present in an internal dictionary of
            432 words (presumably  Morris  considered  these  words  to  be
            ``good''  words  to  try).   If all else failed, the worm tried
            going through the system  dictionary,  /usr/dict/words,  trying
            each  word  [Spaf88].   The password selection rules above suc-
            cessfully guard against all three of these strategies.

                                          7

          2.1.1.2   Password Policies

                 Although asking users to select secure passwords will help
            improve  security,  by  itself  it  is  not enough.  It is also
            important to form a set of password  policies  that  all  users
            must obey, in order to keep the passwords secure.

                 First and foremost, it is important to  impress  on  users
            the  need  to  keep their passwords in their minds only.  Pass-
            words should never be written down on desk blotters, calendars,
            and  the like.  Further, storing passwords in files on the com-
            puter must be prohibited.  In either case, by writing the pass-
            word  down  on  a  piece  of paper or storing it in a file, the
            security of the user's account  is  totally  dependent  on  the
            security  of  the paper or file, which is usually less than the
            security offered by the password encryption software.

                 A second important policy is that users  must  never  give
            out  their  passwords to others.  Many times, a user feels that
            it is easier to give someone else his password in order to copy
            a  file,  rather  than to set up the permissions on the file so
            that it can be copied.  Unfortunately, by giving out the  pass-
            word  to  another person, the user is placing his trust in this
            other person not to distribute the password further,  write  it
            down, and so on.

                 Finally, it is important to establish a policy that  users
            must  change  their  passwords  from  time to time, say twice a
            year.  This is difficult to enforce  on  UNIX,  since  in  most
            implementations, a password-expiration scheme is not available.
            However, there are ways to implement  this  policy,  either  by
            using  third-party  software  or by sending a memo to the users
            requesting that they change their passwords.

                 This set of policies should be printed and distributed  to
            all  current  users  of the system.  It should also be given to
            all new users when they receive  their  accounts.   The  policy
            usually  carries  more  weight  if you can get it signed by the
            most ``impressive'' person  in  your  organization  (e.g.,  the
            president of the company).

          2.1.1.3   Checking Password Security

                 The procedures and policies described in the previous sec-
            tions,  when  properly  implemented,  will  greatly  reduce the
            chances of a cracker breaking into your  system  via  a  stolen
            account.   However,  as  with all security measures, you as the

                                          8

            system administrator must periodically check to  be  sure  that
            the  policies  and procedures are being adhered to.  One of the
            unfortunate truisms of password security  is  that,  ``left  to
            their own ways, some people will still use cute doggie names as
            passwords'' [Gram84].

                 The best way to check the security  of  the  passwords  on
            your  system  is to use a password-cracking program much like a
            real cracker would use.  If you succeed in cracking  any  pass-
            words,  those  passwords  should be changed immediately.  There
            are a few freely available password cracking  programs  distri-
            buted  via various source archive sites; these are described in
            more detail in Section 4.  A fairly extensive cracking  program
            is  also  available  from  the  author.  Alternatively, you can
            write your own cracking program, and  tailor  it  to  your  own
            site.   For  a  list  of  things  to check for, see the list of
            guidelines above.

          2.1.2   Expiration Dates

                 Many sites, particularly those  with  a  large  number  of
            users,  typically  have several old accounts lying around whose
            owners have since left the organization.  These accounts are  a
            major  security  hole:  not only can they be broken into if the
            password is insecure, but because nobody is using  the  account
            anymore, it is unlikely that a break-in will be noticed.

                 The simplest way to prevent unused accounts  from  accumu-
            lating  is to place an expiration date on every account.  These
            expiration dates should be near enough in the future  that  old
            accounts  will  be  deleted  in a timely manner, yet far enough
            apart that the users will not become annoyed.  A good figure is
            usually one year from the date the account was installed.  This
            tends to spread the expirations out over the year, rather  than
            clustering  them  all  at the beginning or end.  The expiration
            date can easily be stored in the password  file  (in  the  full
            name field).  A simple shell script can be used to periodically
            check that all accounts have expiration dates, and that none of
            the dates has passed.
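
                  As an illustration, the following is a minimal sketch of
             such a script.  It assumes a convention (not prescribed here)
             of appending the expiration date to the full name field in the
             form ``exp=YYMMDD,'' and it assumes the new awk (nawk):

                     #!/bin/sh
                     # Sketch: list accounts that have no expiration date,
                     # and accounts whose expiration date has passed.
                     today=`date +%y%m%d`
                     nawk -F: '
                         $5 !~ /exp=/ {
                             printf "%s: no expiration date\n", $1
                         }
                         match($5, /exp=[0-9][0-9][0-9][0-9][0-9][0-9]/) {
                             d = substr($5, RSTART + 4, 6)
                             if (d < today)
                                 printf "%s: expired %s\n", $1, d
                         }' today="$today" /etc/passwd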

                 On the first day of each month, any user whose account has
            expired  should be contacted to be sure he is still employed by
            the organization, and that he is actively  using  the  account.
            Any  user  who  cannot  be  contacted,  or who has not used his
            account recently, should be deleted from the system.  If a user
            is  unavailable  for some reason (e.g., on vacation) and cannot
            be contacted, his account should be disabled by  replacing  the
            encrypted  password in the password file entry with an asterisk
            (*).  This makes it impossible to log in to  the  account,  yet

                                          9

            leaves  the  account  available  to be re-enabled on the user's
            return.
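
                  For example (the login name, numbers, and encrypted
             password below are made up for illustration), the password
             file entry

                     sally:Xj3mW9zRq5tYo:214:50:Sally Jones:/home/sally:/bin/csh

             would be changed to

                     sally:*:214:50:Sally Jones:/home/sally:/bin/csh

             which disables logins to the account without deleting it.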

          2.1.3   Guest Accounts

                 Guest accounts present still another  security  hole.   By
            their  nature,  these  accounts are rarely used, and are always
            used by people who should only have access to the  machine  for
            the  short period of time they are guests.  The most secure way
            to handle guest accounts is to install  them  on  an  as-needed
            basis,  and delete them as soon as the people using them leave.
            Guest accounts should never be given simple passwords  such  as
            ``guest'' or ``visitor,'' and should never be allowed to remain
            in the password file when they are not being used.

          2.1.4   Accounts Without Passwords

                 Some sites have installed  accounts  with  names  such  as
            ``who,''  ``date,'' ``lpq,'' and so on that execute simple com-
            mands.  These accounts are intended to allow users  to  execute
            these  commands without having to log in to the machine.  Typi-
            cally these accounts have no password associated with them, and
            can  thus  be used by anyone.  Many of the accounts are given a
            user id of zero, so that they execute with  super-user  permis-
            sions.

                 The problem with these accounts is that they  open  poten-
            tial  security  holes.  By not having passwords on them, and by
            having  super-user  permissions,  these  accounts   practically
            invite  crackers  to  try  to  penetrate them.  Usually, if the
            cracker can  gain  access  to  the  system,  penetrating  these
            accounts  is  simple, because each account executes a different
            command.  If the cracker can replace any one of these  commands
            with one of his own, he can then use the unprotected account to
            execute his program with super-user permissions.

                 Simply put,  accounts  without  passwords  should  not  be
            allowed on any UNIX system.
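
                  A quick way to audit for such accounts is to print every
             password file entry whose password field is empty.  A one-line
             sketch using awk:

                     # Print the login names of accounts with an empty
                     # password field.
                     awk -F: 'length($2) == 0 { print $1 }' /etc/passwd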

          2.1.5   Group Accounts and Groups

                 Group accounts have become popular at many sites, but  are
            actually  a  break-in  waiting to happen.  A group account is a
            single account shared by several people, e.g., by all the  col-
            laborators  on a project.  As mentioned in the section on pass-
            word security, users should not share  passwords  -  the  group
            account  concept directly violates this policy.  The proper way
            to allow users to share information, rather than giving them  a
            group  account  to  use,  is to place these users into a group.
            This is done by editing the  group  file,  /etc/group  [Sun88a,
            1390;  Sun88b, 66], and creating a new group with the users who
            wish to collaborate listed as members.

                 A line in the group file looks like

                    groupname:password:groupid:user1,user2,user3,...

            The groupname is the name assigned to the group,  much  like  a
            login  name.   It  may  be the same as someone's login name, or
            different.  The maximum length of a group name is eight charac-
            ters.   The password field is unused in BSD-derived versions of
            UNIX, and should contain an asterisk (*).   The  groupid  is  a
            number  from 0 to 65535 inclusive.  Generally, numbers below 10
            are reserved for special  purposes,  but  you  may  choose  any
            unused number.  The last field is a comma-separated (no spaces)
            list of the login names of the users in the group.  If no login
            names  are  listed, then the group has no members.  To create a
            group called ``hackers'' with Huey, Duey, and Louie as members,
            you would add a line such as this to the group file:

                    hackers:*:100:huey,duey,louie

                 After the group has been created,  the  files  and  direc-
            tories  the  members  wish to share can then be changed so that
            they are owned by this group, and the group permission bits  on
            the  files  and  directories can be set to allow sharing.  Each
            user retains his own account, with his own password, thus  pro-
            tecting the security of the system.

                 For example, to change Huey's ``programs'' directory to be
            owned  by  the new group and properly set up the permissions so
            that all members of the group may  access  it,  the  chgrp  and
            chmod commands would be used as follows [Sun88a, 63-66]:

                    # chgrp hackers ~huey/programs
                    # chmod -R g+rw ~huey/programs

          2.1.6   Yellow Pages

                 The Sun Yellow Pages system [Sun88b, 349-374] allows  many
            hosts to share password files, group files, and other files via
            the network, while the files are stored on only a single  host.
            Unfortunately, Yellow Pages also contains a few potential secu-
            rity holes.

                 The principal way Yellow Pages works is to have a  special
            line  in  the  password or group file that begins with a ``+''.
            In the password file, this line looks like

                    +::0:0:::

            and in the group file, it looks like

                    +:

            These lines should only be present in the files stored on  Yel-
            low  Pages  client machines.  They should not be present in the
            files on the Yellow Pages master machine(s).   When  a  program
            reads  the  password  or group file and encounters one of these
            lines, it goes through the network and requests the information
            it wants from the Yellow Pages server instead of trying to find
            it in the local file.  In this way, the data does not  have  to
            be  maintained on every host.  Since the master machine already
            has all the information, there is no need for this special line
            to be present there.

                 Generally speaking, the Yellow  Pages  service  itself  is
            reasonably  secure.   There are a few openings that a sophisti-
            cated (and dedicated) cracker could exploit, but Sun is rapidly
            closing  these.   The  biggest problem with Yellow Pages is the
            ``+'' line in the password file.  If the ``+'' is deleted  from
            the  front of the line, then this line loses its special Yellow
            Pages meaning.  It instead becomes a regular password file line
            for an account with a null login name, no password, and user id
            zero (super-user).  Thus, if a  careless  system  administrator
            accidentally  deletes the ``+'', the whole system is wide open
            to any attack.*

                 Yellow Pages is too useful a service to suggest turning it
            off,  although  turning  it  off  would  make  your system more
            secure.  Instead, it is recommended that you read carefully the
            information  in  the  Sun manuals in order to be fully aware of
            Yellow Pages' abilities and its limitations.

          2.2   NETWORK SECURITY

          _________________________
            * Actually, a line like this without a ``+''  is  dangerous  in
          any password file, regardless of whether Yellow Pages is in use.

                 As trends  toward  internetworking  continue,  most  sites
            will, if they haven't already, connect themselves to one of the
            numerous regional networks springing  up  around  the  country.
            Most  of these regional networks are also interconnected, form-
            ing the Internet [Hind83, Quar86].  This means that  the  users
            of  your  machine  can  access other hosts and communicate with
            other users around the world.   Unfortunately,  it  also  means
            that  other  hosts  and  users from around the world can access
            your machine, and attempt to break into it.

                 Before internetworking became  commonplace,  protecting  a
            system  from  unauthorized  access  simply  meant  locking  the
            machine in a room by itself.  Now that machines  are  connected
            by networks, however, security is much more complex.  This sec-
            tion describes the tools and methods  available  to  make  your
            UNIX networks as secure as possible.

          2.2.1   Trusted Hosts

                 One of the most convenient features of the  Berkeley  (and
            Sun)  UNIX  networking  software  is the concept of ``trusted''
            hosts.  The software allows the specification  of  other  hosts
            (and  possibly users) who are to be considered trusted - remote
            logins and remote command executions from these hosts  will  be
            permitted without requiring the user to enter a password.  This
            is very convenient, because users do not  have  to  type  their
            password  every  time they use the network.  Unfortunately, for
            the same  reason,  the  concept  of  a  trusted  host  is  also
            extremely insecure.

                 The Internet worm made extensive use of the  trusted  host
            concept to spread itself throughout the network [Seel88].  Many
            sites that had already disallowed trusted hosts did fairly well
            against  the  worm  compared  with  those  sites that did allow
            trusted hosts.  Even though it is a security  hole,  there  are
            some  valid  uses  for  the trusted host concept.  This section
            describes how to properly implement the trusted hosts  facility
            while preserving as much security as possible.

          2.2.1.1   The hosts.equiv File

                 The file /etc/hosts.equiv [Sun88a, 1397] can  be  used  by
            the  system  administrator  to  indicate  trusted  hosts.  Each
            trusted host is listed in the file, one host per  line.   If  a
            user  attempts  to  log  in (using rlogin) or execute a command
            (using  rsh)  remotely  from  one  of  the  systems  listed  in
            hosts.equiv,  and  that user has an account on the local system
            with the same login name, access is permitted without requiring
            a password.

                 Provided adequate care is taken to allow only local  hosts
            in  the hosts.equiv file, a reasonable compromise between secu-
            rity and convenience can be achieved.  Nonlocal hosts  (includ-
            ing  hosts  at  remote  sites  of the same organization) should
            never be trusted.  Also, if there  are  any  machines  at  your
            organization that are installed in ``public'' areas (e.g., ter-
            minal rooms) as opposed to  private  offices,  you  should  not
            trust these hosts.

                 On Sun systems, hosts.equiv is controlled with the  Yellow
            Pages  software.   As  distributed  by  Sun,  the default
            hosts.equiv file contains a single line:

                    +

            This indicates that every known host (i.e., the  complete  con-
            tents  of  the  host file) should be considered a trusted host.
            This is totally incorrect and  a  major  security  hole,  since
            hosts  outside  the local organization should never be trusted.
            A  correctly  configured  hosts.equiv  should  never  list  any
            ``wildcard''  hosts  (such  as  the  ``+''); only specific host
            names should be used.  When installing a new  system  from  Sun
            distribution  tapes,  you  should be sure to either replace the
            Sun default hosts.equiv with a  correctly  configured  one,  or
            delete the file altogether.
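
                 A reasonably configured hosts.equiv for a  small  group  of
            local, privately located machines might therefore contain nothing
            more than a short list of names (the names below are placeholders
            only):

                    mercury
                    venus
                    earth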

          2.2.1.2   The .rhosts File

                 The .rhosts file [Sun88a, 1397] is similar in concept  and
            format  to the hosts.equiv file, but allows trusted access only
            to specific host-user combinations, rather  than  to  hosts  in
            general.*  Each user may create a  .rhosts  file  in  his  home
            directory,  and allow access to his account without a password.
            Most people use this mechanism to allow trusted access  between
            accounts  they have on systems owned by different organizations
            who do not trust each other's  hosts  in  hosts.equiv.   Unfor-
            tunately,  this  file presents a major security problem:  While
            hosts.equiv is under the system administrator's control and can
            be  managed  effectively,  any  user  may create a .rhosts file
            granting access to whomever  he  chooses,  without  the  system
            administrator's knowledge.
          _________________________
            * Actually,  hosts.equiv  may  be  used  to  specify  host-user
          combinations as well, but this is rarely done.

                 The only secure way to manage .rhosts  files  is  to  com-
            pletely  disallow them on the system.  The system administrator
            should check the system often for  violations  of  this  policy
            (see  Section 3.3.1.4).  One possible exception to this rule is
            the ``root'' account; a .rhosts file may be necessary to  allow
            network backups and the like to be completed.
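
                 A simple way to spot-check for stray .rhosts files is a com-
            mand along the following lines, run periodically as  the  super-
            user  (adjust  the  path  to  wherever  your users' home direct-
            ories live):

                    # find /home -name .rhosts -print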

          2.2.2   Secure Terminals

                 Under newer versions of UNIX, the concept of a  ``secure''
            terminal  has  been  introduced.   Simply  put,  the super-user
            (``root'') may not log in on a nonsecure terminal, even with  a
            password.   (Authorized  users  may still use the su command to
            become super-user, however.)   The  file  /etc/ttytab  [Sun88a,
            1478]  is  used  to  control  which  terminals  are  considered
            secure.| A short excerpt from this file is shown below.

                    console  "/usr/etc/getty std.9600"  sun      off secure
                    ttya     "/usr/etc/getty std.9600"  unknown  off secure
                    ttyb     "/usr/etc/getty std.9600"  unknown  off secure
                    ttyp0    none                       network  off secure
                    ttyp1    none                       network  off secure
                    ttyp2    none                       network  off secure

            The keyword ``secure'' at the end of each line  indicates  that
            the terminal is considered secure.  To remove this designation,
            simply edit the file and delete the ``secure'' keyword.   After
            saving the file, type the command (as super-user):

                    # kill -HUP 1

            This tells the init process to reread the ttytab file.

                 The Sun default configuration for ttytab  is  to  consider
            all  terminals  secure,  including ``pseudo'' terminals used by
            the remote login software.  This means that ``root'' may log in
            remotely  from  any  host on the network.  A more secure confi-
            guration would consider as secure only directly connected  ter-
            minals,  or  perhaps only the console device.  This is how file
            servers and other machines with disks should be set up.
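
                 On such a machine, the pseudo-terminal entries shown in  the
            excerpt above would simply omit the ``secure'' keyword:

                    ttyp0    none                       network  off
                    ttyp1    none                       network  off
                    ttyp2    none                       network  off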

                 The most secure configuration is to remove the  ``secure''
            designation  from  all terminals, including the console device.
            This requires that those users with super-user authority  first
            log in as themselves, and then become the super-user via the su
          _________________________
            | Under non-Sun versions of Berkeley UNIX, this file is  called
          /etc/ttys.

            command.  It also requires the ``root'' password to be  entered
            when  rebooting  in single-user mode, in order to prevent users
            from rebooting their desktop workstations and obtaining  super-
            user  access.   This is how all diskless client machines should
            be set up.

          2.2.3   The Network File System

                 The Network File System  (NFS)  [Sun88d]  is  designed  to
            allow  several  hosts  to share files over the network.  One of
            the most common uses of NFS is to allow  diskless  workstations
            to be installed in offices, while keeping all disk storage in a
            central location.  As distributed by Sun, NFS has  no  security
            features enabled.  This means that any host on the Internet may
            access your files via NFS, regardless of whether you trust them
            or not.

                 Fortunately, there are several easy ways to make NFS  more
            secure.   The  more commonly used methods are described in this
            section, and these can be used to make your files quite  secure
            from  unauthorized  access  via NFS.  Secure NFS, introduced in
            SunOS Release 4.0,  takes  security  one  step  further,  using
            public-key  encryption  techniques to ensure authorized access.
            Discussion of secure NFS is deferred until Section 4.

          2.2.3.1   The exports File

                 The file /etc/exports [Sun88a, 1377] is perhaps one of the
            most  important  parts  of  NFS configuration.  This file lists
            which file systems are exported (made available  for  mounting)
            to  other  systems.  A typical exports file as installed by the
            Sun installation procedure looks something like this:

                    /usr
                    /home
                    /var/spool/mail
                    #
                    /export/root/client1    -access=client1,root=client1
                    /export/swap/client1    -access=client1,root=client1
                    #
                    /export/root/client2    -access=client2,root=client2
                    /export/swap/client2    -access=client2,root=client2

            The root= keyword specifies the list of hosts that are  allowed
            to  have  super-user  access  to  the  files  in the named file
            system.   This  keyword  is  discussed  in  detail  in  Section
            2.2.3.3.   The  access=  keyword  specifies  the  list of hosts
            (separated by colons) that are allowed to mount the named  file
            system.   If no access= keyword is specified for a file system,
            any host anywhere on the network may mount that file system via
            NFS.

                 Obviously, this presents a major security  problem,  since
            anyone  who can mount your file systems via NFS can then peruse
            them at her leisure.  Thus, it is important that all file  sys-
            tems  listed in exports have an access= keyword associated with
            them.  If you have only a few hosts which  must  mount  a  file
            system, you can list them individually in the file:

                    /usr    -access=host1:host2:host3:host4:host5

            However, because the maximum number of hosts that can be listed
            this  way is ten, the access= keyword will also allow netgroups
            to be specified.  Netgroups are described in the next section.

                 After making any changes to the exports file,  you  should
            run the command

                    # exportfs -a

            in order to make the changes take effect.

          2.2.3.2   The netgroup File

                 The file /etc/netgroup [Sun88a, 1407] is  used  to  define
            netgroups.   This  file is controlled by Yellow Pages, and must
            be rebuilt in the Yellow Pages maps whenever  it  is  modified.
            Consider the following sample netgroup file:

                    A_Group      (servera,,) (clienta1,,) (clienta2,,)

                    B_Group      (serverb,,) (clientb1,,) (clientb2,,)

                    AdminStaff   (clienta1,mary,) (clientb3,joan,)

                    AllSuns      A_Group B_Group

            This file defines  four  netgroups,  called  A_Group,  B_Group,
            AdminStaff,  and  AllSuns.   The AllSuns netgroup is actually a
            ``super group'' containing all the members of the  A_Group  and
            B_Group netgroups.

                 Each member of a netgroup is defined as a  triple:  (host,
            user,  domain).  Typically, the domain field is never used, and
            is simply left blank.  If either the host or user field is left
            blank,  then any host or user is considered to match.  Thus the
            triple (host,,) matches any user on the named host,  while  the
            triple (,user,) matches the named user on any host.

                 Netgroups are useful when restricting access to  NFS  file
            systems via the exports file.  For example, consider this modi-
            fied version of the file from the previous section:

                    /usr                    -access=A_Group
                    /home                   -access=A_Group:B_Group
                    /var/spool/mail         -access=AllSuns
                    #
                    /export/root/client1    -access=client1,root=client1
                    /export/swap/client1    -access=client1,root=client1
                    #
                    /export/root/client2    -access=client2,root=client2
                    /export/swap/client2    -access=client2,root=client2

            The /usr file system may now only be mounted by  the  hosts  in
            the A_Group netgroup, that is, servera, clienta1, and clienta2.
            Any other host that  tries  to  mount  this  file  system  will
            receive  an ``access denied'' error.  The /home file system may
            be mounted by any of the hosts in either the A_Group or B_Group
            netgroups.   The /var/spool/mail file system is also restricted
            to these hosts, but in this example we used the ``super group''
            called AllSuns.

                 Generally, the best way to configure the netgroup file  is
            to make a single netgroup for each file server and its clients,
            and then to make other super groups,  such  as  AllSuns.   This
            allows  you  the  flexibility  to specify the smallest possible
            group of hosts for each file system in /etc/exports.

                 Netgroups can also be used in the password file  to  allow
            access  to a given host to be restricted to the members of that
            group, and they can be used in the hosts.equiv file to central-
            ize  maintenance  of the list of trusted hosts.  The procedures
            for doing this are defined in more detail in the Sun manual.
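
                 For example (check the Sun manual for the exact syntax  sup-
            ported by your release), a hosts.equiv entry of the form

                    +@A_Group

            trusts only the hosts in the A_Group netgroup, while  a  password
            file entry of the form

                    +@AdminStaff::0:0:::

            imports Yellow Pages password entries only for  the  users  named
            in the AdminStaff netgroup.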

          2.2.3.3   Restricting Super-User Access

                 Normally, NFS translates the super-user id to a special id
            called ``nobody'' in order to prevent a user with ``root'' on a
            remote workstation from accessing other people's  files.   This
            is  good  for  security,  but  sometimes  a nuisance for system
            administration, since you  cannot  make  changes  to  files  as
            ``root'' through NFS.

                 The exports file  also  allows  you  to  grant  super-user
            access  to  certain file systems for certain hosts by using the
            root= keyword.  Following this keyword a  colon-separated  list
            of  up  to  ten  hosts  may  be  specified; these hosts will be
            allowed to access the file system as  ``root''  without  having
            the  user  id  converted  to  ``nobody.''  Netgroups may not be
            specified to the root= keyword.

                 Granting ``root'' access to a  host  should  not  be  done
            lightly.   If a host has ``root'' access to a file system, then
            the super-user on that host will have complete  access  to  the
            file system, just as if you had given him the ``root'' password
            on the server.  Untrusted hosts should never be given  ``root''
            access to NFS file systems.

          2.2.4   FTP

                 The File Transfer Protocol, implemented  by  the  ftp  and
            ftpd  programs  [Sun88a,  195-201,  1632-1634], allows users to
            connect to remote systems and transfer files  back  and  forth.
            Unfortunately,  older  versions  of  these  programs  also  had
            several bugs in them that allowed crackers to break into a sys-
            tem.   These bugs have been fixed by Berkeley, and new versions
            are available.  If your  ftpd*  was  obtained  before  December
            1988, you should get a newer version (see Section 4).

                 One  of  the  more  useful  features   of   FTP   is   the
            ``anonymous''  login.   This  special login allows users who do
            not have an account on your machine to have  restricted  access
            in  order to transfer files from a specific directory.  This is
            useful if you wish to distribute  software  to  the  public  at
            large  without  giving  each  person  who wants the software an
            account on your machine.  In order to securely set up anonymous
            FTP you should follow the specific instructions below:

                 1.   Create  an  account  called  ``ftp.''   Disable   the
                      account  by  placing  an asterisk (*) in the password
                      field.  Give the account a  special  home  directory,
                      such as /usr/ftp or /usr/spool/ftp.

                 2.   Make the home directory owned by ``ftp'' and  unwrit-
                      able by anyone:

                              # chown ftp ~ftp
                              # chmod 555 ~ftp

          _________________________
            * On Sun systems, ftpd is stored in the file  /usr/etc/in.ftpd.
          On most other systems, it is called /etc/ftpd.

                 3.   Make the directory ~ftp/bin, owned by the  super-user
                      and  unwritable  by  anyone.   Place a copy of the ls
                      program in this directory:

                              # mkdir ~ftp/bin
                              # chown root ~ftp/bin
                              # chmod 555 ~ftp/bin
                              # cp -p /bin/ls ~ftp/bin
                              # chmod 111 ~ftp/bin/ls

                 4.   Make the directory ~ftp/etc, owned by the  super-user
                      and  unwritable by anyone.  Place copies of the pass-
                      word and group files in this directory, with all  the
                      password  fields  changed  to asterisks (*).  You may
                      wish to delete all but a  few  of  the  accounts  and
                      groups  from  these files; the only account that must
                      be present is ``ftp.''

                              # mkdir ~ftp/etc
                              # chown root ~ftp/etc
                              # chmod 555 ~ftp/etc
                              # cp -p /etc/passwd /etc/group ~ftp/etc
                              # chmod 444 ~ftp/etc/passwd ~ftp/etc/group

                 5.   Make the directory ~ftp/pub,  owned  by  ``ftp''  and
                      world-writable.   Users may then place files that are
                      to be accessible via anonymous FTP in this directory:

                              # mkdir ~ftp/pub
                              # chown ftp ~ftp/pub
                              # chmod 777 ~ftp/pub

                 Because the anonymous FTP feature allows anyone to  access
            your  system  (albeit  in a very limited way), it should not be
            made available on every host  on  the  network.   Instead,  you
            should  choose  one  machine (preferably a server or standalone
            host) on which to allow this service.   This  makes  monitoring
            for  security  violations  much easier.  If you allow people to
            transfer files to your machine (using  the  world-writable  pub
            directory,  described  above),  you should frequently  check  the
            contents of the directories into which they are allowed to write.
            Any suspicious files you find should be deleted.

          2.2.4.1   Trivial FTP

                 The Trivial File Transfer Protocol, TFTP, is used  on  Sun
            workstations  (and others) to allow diskless hosts to boot from
            the network.  Basically, TFTP is a stripped-down version of FTP
            -  there is no user authentication, and the connection is based
            on the User Datagram Protocol instead of the Transmission  Con-
            trol  Protocol.  Because they are so stripped-down, many imple-
            mentations of TFTP have security holes.  You should check  your
            hosts by executing the command sequence shown below.

                    % tftp
                    tftp> connect yourhost
                    tftp> get /etc/motd tmp
                    Error code 1: File not found
                    tftp> quit
                    %

            If your version does not respond with ``File not  found,''  and
            instead  transfers the file, you should replace your version of
            tftpd* with a newer one.   In  particular,  versions  of  SunOS
            prior to release 4.0 are known to have this problem.

          2.2.5   Mail

                 Electronic mail is one of the main reasons for  connecting
            to outside networks.  On most versions of Berkeley-derived UNIX
            systems,  including  those  from  Sun,  the  sendmail   program
            [Sun88a,  1758-1760;  Sun88b,  441-488]  is  used to enable the
            receipt and delivery of mail.  As with the FTP software,  older
            versions of sendmail have several bugs that allow security vio-
            lations.  One of these bugs was used with great success by  the
            Internet  worm  [Seel88, Spaf88].  The current version of send-
            mail from Berkeley is version 5.61, of January 1989.   Sun  is,
            as  of  this  writing, still shipping version 5.59, which has a
            known security problem.  They have, however, made a fixed  ver-
            sion  available.   Section  4 details how to obtain these newer
            versions.

                 Generally, with the exception of the security  holes  men-
            tioned  above,  sendmail is reasonably secure when installed by
            most vendors' installation procedures.  There are,  however,  a
            few  precautions  that  should be taken to ensure secure opera-
            tion:

                 1.   Remove the ``decode'' alias  from  the  aliases  file
                      (/etc/aliases or /usr/lib/aliases).
          _________________________
            * On   Sun   systems,   tftpd   is   stored   in    the    file
          /usr/etc/in.tftpd.    On   most   other  systems,  it  is  called
          /etc/tftpd.

                 2.   If you create aliases that allow messages to be  sent
                      to  programs, be absolutely sure that there is no way
                      to obtain a shell or send commands to  a  shell  from
                      these programs.

                 3.   Make sure the ``wizard'' password is disabled in  the
                      configuration  file, sendmail.cf.  (Unless you modify
                      the distributed configuration files,  this  shouldn't
                      be a problem.)

                 4.   Make  sure  your  sendmail  does  not   support   the
                      ``debug'' command.  This can be done with the follow-
                      ing commands:

                      % telnet localhost 25
                      220 yourhost Sendmail 5.61 ready at 9 Mar 90 10:57:36 PST
                      debug
                      500 Command unrecognized
                      quit
                      %

                      If your sendmail responds to  the  ``debug''  command
                      with  ``200  Debug  set,'' then you are vulnerable to
                      attack and should replace your sendmail with a  newer
                      version.

            By following the procedures above, you can be  sure  that  your
            mail system is secure.

          2.2.6   Finger

                 The ``finger'' service, provided  by  the  finger  program
            [Sun88a,  186-187],  allows  you  to obtain information about a
            user such as her full name, home directory,  last  login  time,
            and  in  some cases when she last received mail and/or read her
            mail.  The fingerd  program  [Sun88a,  1625]  allows  users  on
            remote hosts to obtain this information.

                 A bug in fingerd was also exercised with  success  by  the
            Internet worm [Seel88, Spaf88].  If your version of fingerd* is
            older than November 5, 1988, it should be replaced with a newer
            version.  New  versions  are  available  from  several  of  the
            sources described in Section 4.

          _________________________
            * On Sun systems, fingerd is stored in /usr/etc/in.fingerd.  On
          most other systems, it is called /etc/fingerd.

          2.2.7   Modems and Terminal Servers

                 Modems and  terminal  servers  (terminal  switches,  Annex
            boxes,  etc.) present still another potential security problem.
            The main problem with these devices is one of  configuration  -
            misconfigured hardware can allow security breaches.  Explaining
            how to configure every brand of modem and terminal server would
            require  volumes.   However,  the  following  items  should  be
            checked for on any modems or terminal servers installed at your
            site:

                 1.   If a user dialed up to a modem hangs  up  the  phone,
                      the  system should log him out.  If it doesn't, check
                      the hardware connections and the kernel configuration
                      of the serial ports.

                 2.   If a user logs off, the system should force the modem
                      to hang up.  Again, check the hardware connections if
                      this doesn't work.

                 3.   If the connection from a terminal server to the  sys-
                      tem is broken, the system should log the user off.

                 4.   If the terminal server is connected  to  modems,  and
                      the  user hangs up, the terminal server should inform
                      the system that the user has hung up.

                 Most modem and terminal server manuals cover in detail how
            to  properly connect these devices to your system.  In particu-
            lar you should pay close attention to the  ``Carrier  Detect,''
            ``Clear to Send,'' and ``Request to Send'' connections.

          2.2.8   Firewalls

                 One of the newer ideas in network security is  that  of  a
            firewall.   Basically,  a  firewall is a special host that sits
            between  your  outside-world  network  connection(s)  and  your
            internal  network(s).   This  host  does  not  send out routing
            information about your internal network, and thus the  internal
            network is ``invisible'' from the outside.  In order to config-
            ure a firewall machine, the following considerations need to  be
            taken into account:

                 1.   The firewall does not advertise routes.   This  means
                      that users on the internal network must log in to the
                      firewall in order to access hosts on remote networks.
                      Likewise,  in  order  to  log  in  to  a  host on the
                      internal network from the outside, a user must  first
                      log  in  to  the  firewall  machine.   This is incon-
                      venient, but more secure.

                 2.   All electronic mail sent by your users must  be  for-
                      warded  to  the  firewall  machine  if  it  is  to be
                      delivered  outside  your   internal   network.    The
                      firewall  must  receive all incoming electronic mail,
                      and then redistribute it.  This can  be  done  either
                      with aliases for each user or by using name server MX
                      records.

                 3.   The firewall machine should not mount any  file  sys-
                      tems  via NFS, or make any of its file systems avail-
                      able to be mounted.

                 4.   Password security on the  firewall  must  be  rigidly
                      enforced.

                 5.   The firewall host should not trust  any  other  hosts
                      regardless  of  where  they  are.   Furthermore,  the
                      firewall should not be trusted by any other host.

                 6.   Anonymous FTP and other similar services should  only
                      be  provided  by  the firewall host, if they are pro-
                      vided at all.

                 The purpose of the firewall is to  prevent  crackers  from
            accessing other hosts on your network.  This means, in general,
            that you must maintain strict and rigidly enforced security  on
            the  firewall,  but  the  other  hosts are less vulnerable, and
            hence security may be somewhat lax.  But  it  is  important  to
            remember  that  the  firewall  is  not  a complete cure against
            crackers - if a cracker can break into the firewall machine, he
            can then try to break into any other host on your network.
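
                 As an illustration of the mail arrangement described in item
            2 above, the name server data for each internal host can carry an
            MX record that directs its mail to the  firewall  (the  host  and
            domain names below are invented):

                    venus.widget.com.    IN  MX  10  firewall.widget.com.

            Outside mailers will then deliver mail  addressed  to  users  on
            venus to the firewall machine, which redistributes it internally.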

          2.3   FILE SYSTEM SECURITY

                 The last defense against system crackers is  the  permis-
            sions  offered  by the file system.  Each file or directory has
            three sets of permission bits associated with it:  one set  for
            the  user who owns the file, one set for the users in the group
            with which the file is associated, and one set  for  all  other
            users  (the  ``world''  permissions).   Each set contains three
            identical permission bits, which control the following:

                 read     If set, the file or directory may  be  read.   In
                          the  case  of  a  directory, read access allows a
                          user to see the  contents  of  a  directory  (the
                          names of the files contained therein), but not to
                          access them.

                 write    If set, the file  or  directory  may  be  written
                          (modified).   In  the  case of a directory, write
                          permission implies the ability to create, delete,
                          and  rename  files.   Note  that  the  ability to
                          remove a file is not controlled  by  the  permis-
                          sions  on the file, but rather the permissions on
                          the directory containing the file.

                 execute  If set, the file or  directory  may  be  executed
                          (searched).   In the case of a directory, execute
                          permission implies the ability  to  access  files
                          contained in that directory.

                 In addition, a fourth permission bit is available in  each
            set  of  permissions.  This bit has a different meaning in each
            set of permission bits:

                 setuid  If set in the owner permissions, this bit controls
                         the  ``set  user  id''  (setuid) status of a file.
                         Setuid status means that when a  program  is  exe-
                         cuted,  it  executes  with  the permissions of the
                         user owning the program, in addition to  the  per-
                         missions  of  the user executing the program.  For
                         example, sendmail is setuid ``root,'' allowing  it
                         to  write files in the mail queue area, which nor-
                         mal users are not allowed  to  do.   This  bit  is
                         meaningless on nonexecutable files.

                 setgid  If set in the group permissions, this bit controls
                         the  ``set  group  id'' (setgid) status of a file.
                         This behaves in exactly the same way as the setuid
                         bit, except that the group id is affected instead.
                         This bit is meaningless  on  non-executable  files
                         (but see below).

                 sticky  If set in the world  permissions,  the  ``sticky''
                         bit  tells  the  operating  system  to  do special
                         things with the text image of an executable  file.
                         It  is  mostly  a  holdover from older versions of
                         UNIX, and has little if any use today.   This  bit
                         is  also  meaningless  on nonexecutable files (but
                         see below).
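
                 For example, in ls -l output such as the following (the sizes
            and dates are invented), the ``s'' in the owner permissions  marks
            a setuid program, and the ``t'' in the world permissions marks the
            sticky bit described below:

                    -rwsr-xr-x  1 root   155648 Jan 10  1990 /usr/lib/sendmail
                    drwxrwxrwt  5 root     1024 Mar 13 12:00 /tmp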

          2.3.1   Setuid Shell Scripts

               Shell scripts that have the setuid or  setgid  bits  set  on
          them  are not secure, regardless of how many safeguards are taken
          when writing them.  There are numerous software  packages  avail-
          able  that  claim  to  make  shell  scripts secure, but every one
          released so far has not managed to solve all the problems.

               Setuid and setgid shell scripts should never be  allowed  on
          any UNIX system.

          2.3.2   The Sticky Bit on Directories

               Newer versions of UNIX have attached a new  meaning  to  the
          sticky  bit.   When this bit is set on a directory, it means that
          users may not delete or rename other users' files in this  direc-
          tory.   This  is  typically  useful for the /tmp directory.  Nor-
          mally, /tmp  is  world-writable,  enabling  any  user  to  delete
          another  user's  files.  By setting the sticky bit on /tmp, users
          may only delete their own files from this directory.

               To set the sticky bit on a directory, use the command

                  # chmod o+t directory

          2.3.3   The Setgid Bit on Directories

               In SunOS 4.0, the setgid bit was also given a  new  meaning.
          Two  rules can be used for assigning group ownership to a file in
          SunOS:

               1.   The System V mechanism, which says that a  user's  pri-
                    mary  group id (the one listed in the password file) is
                    assigned to any file he creates.

               2.   The Berkeley mechanism, which says that the group id of
                    a file is set to the group id of the directory in which
                    it is created.

               If the setgid bit  is  set  on  a  directory,  the  Berkeley
          mechanism  is  enabled.   Otherwise,  the  System  V mechanism is
          enabled.  Normally, the Berkeley mechanism is used; this  mechan-
          ism must be used if creating directories for use by more than one
          member of a group (see Section 2.1.5).

               To set the setgid bit on a directory, use the command

                  # chmod g+s directory

          2.3.4   The umask Value

               When a file is created by a program, say a text editor or  a
          compiler,  it  is typically created with all permissions enabled.
          Since this is rarely desirable (you don't want other users to  be
          able  to write your files), the umask value is used to modify the
          set of permissions a file is created with.  Simply put, while the
          chmod  command  [Sun88a,  65-66]  specifies  what  bits should be
          turned on, the umask value specifies what bits should  be  turned
          off.

               For example, the default umask on most systems is 022.  This
          means  that  write  permission  for the group and world should be
          turned off whenever a file is created.  If instead you wanted  to
          turn  off all group and world permission bits, such that any file
          you created would not be readable,  writable,  or  executable  by
          anyone except yourself, you would set your umask to 077.
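
               For example, with a umask of 077 a newly created file ends  up
          readable  and  writable  only by its owner.  The effect can be seen
          directly (the listing below is only illustrative):

                  % umask 077
                  % touch testfile
                  % ls -l testfile
                  -rw-------  1 davy            0 Mar 13 12:00 testfile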

               The umask value is specified in the .cshrc or .profile files
          read  by  the  shell  using the umask command [Sun88a, 108, 459].
          The ``root'' account should have the line

                  umask 022

          in its /.cshrc file, in order to prevent the accidental  creation
          of world-writable files owned by the super-user.

          2.3.5   Encrypting Files

               The standard UNIX crypt command [Sun88a, 95] is not  at  all
          secure.  Although it is reasonable to expect that crypt will keep
          the casual ``browser'' from reading a file, it will present noth-
          ing  more  than  a  minor  inconvenience to a determined cracker.
          Crypt implements a one-rotor machine along the lines of the  Ger-
          man  Enigma  (broken  in World War II).  The methods of attack on
          such a machine are well known, and a sufficiently large file  can
          usually  be  decrypted  in  a few hours even without knowledge of
          what the file contains [Reed84].   In  fact,  publicly  available
          packages  of  programs designed to ``break'' files encrypted with
          crypt have been around for several years.

               There are software implementations of another algorithm, the
          Data  Encryption  Standard  (DES),  available  on  some  systems.
          Although this algorithm is much more secure than  crypt,  it  has
          never  been  proven  that  it  is totally secure, and many doubts
          about its security have been raised in recent years.

               Perhaps the best thing to say about encrypting  files  on  a
          computer system is this:  if you think you have a file whose con-
          tents are important enough to encrypt, then that file should  not
          be stored on the computer in the first place.  This is especially
          true of systems with limited security, such as UNIX  systems  and
          personal computers.

               It  is  important  to  note  that  UNIX  passwords  are  not
          encrypted  with  the  crypt program.  Instead, they are encrypted
          with a modified version of the DES that generates one-way encryp-
          tions  (that is, the password cannot be decrypted).  When you log
          in, the system does  not  decrypt  your  password.   Instead,  it
          encrypts  your  attempted  password, and if this comes out to the
          same result as encrypting your real password, you are allowed  to
          log in.

          2.3.6   Devices

               The security of devices is an important issue in UNIX.  Dev-
          ice files (usually residing in /dev) are used by various programs
          to access the data on the disk drives or  in  memory.   If  these
          device files are not properly protected, your system is wide open
          to a cracker.  The entire list of devices is too long to go  into
          here, since it varies widely from system to system.  However, the
          following guidelines apply to all systems:

               1.   The files /dev/kmem,  /dev/mem,  and  /dev/drum  should
                    never  be  readable  by the world.  If your system sup-
                    ports the notion of the ``kmem'' group (most newer sys-
                    tems  do) and utilities such as ps are setgid ``kmem,''
                    then these files should be owned by user  ``root''  and
                    group ``kmem,'' and should be mode 640.  If your system
                    does not support the notion of the ``kmem'' group,  and
                    utilities  such  as  ps are setuid ``root,'' then these
                    files should be owned by user ``root'' and mode 600.

               2.   The disk devices, such as /dev/sd0a, /dev/rxy1b,  etc.,
                    should  be  owned  by  user ``root'' and group ``opera-
                    tor,'' and should be mode 640.  Note that each disk has
                    eight  partitions  and two device files for each parti-
                    tion.  Thus, the disk ``sd0'' would have the  following
                    device files associated with it in /dev:

                            sd0a     sd0e     rsd0a     rsd0e
                            sd0b     sd0f     rsd0b     rsd0f
                            sd0c     sd0g     rsd0c     rsd0g
                            sd0d     sd0h     rsd0d     rsd0h

               3.   With very few exceptions, all other devices  should  be
                    owned  by  user  ``root.''  One exception is terminals,
                    which are changed to be owned  by  the  user  currently
                    logged  in on them.  When the user logs out, the owner-
                    ship of the terminal is automatically changed  back  to
                    ``root.''
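
               For example, on a system that supports the  ``kmem''  group,
          the memory devices described in item 1 would be  set  up  with
          commands such as these:

                  # chown root /dev/kmem /dev/mem /dev/drum
                  # chgrp kmem /dev/kmem /dev/mem /dev/drum
                  # chmod 640 /dev/kmem /dev/mem /dev/drum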

          2.4   SECURITY IS YOUR RESPONSIBILITY

               This section has detailed numerous tools for improving secu-
          rity  provided  by the UNIX operating system.  The most important
          thing to note about these tools is that although they are  avail-
          able,  they  are  typically not put to use in most installations.
          Therefore, it is incumbent on you, the system  administrator,  to
          take the time and make the effort to enable these tools, and thus
          to protect your system from unauthorized access.


                                      SECTION 3

                                 MONITORING SECURITY

               One of the most important tasks in keeping any computer sys-
          tem  secure  is  monitoring  the  security  of  the system.  This
          involves examining system log files for unauthorized accesses  of
          the  system, as well as monitoring the system itself for security
          holes.  This section describes the procedures for doing this.  An
          additional  part  of monitoring security involves keeping abreast
          of security problems found by others; this is described  in  Sec-
          tion 5.

          3.1   ACCOUNT SECURITY

               Account security should be monitored periodically  in  order
          to  check for two things: users logged in when they ``shouldn't''
          be (e.g., late at night, when they're  on  vacation,  etc.),  and
          users  executing  commands  they wouldn't normally be expected to
          use.  The commands described in  this  section  can  be  used  to
          obtain this information from the system.

          3.1.1   The lastlog File

               The file /usr/adm/lastlog [Sun88a, 1485]  records  the  most
          recent  login  time  for  each  user  of the system.  The message
          printed each time you log in, e.g.,

                  Last login: Sat Mar 10 10:50:48 from spam.itstd.sri.c

          uses the time stored in the lastlog file.  Additionally, the last
          login  time reported by the finger command uses this time.  Users
          should be told to carefully examine this time whenever  they  log
          in,  and  to report unusual login times to the system administra-
          tor.  This is an easy way to detect account break-ins, since each
          user should remember the last time she logged into the system.

          3.1.2   The utmp and wtmp Files

               The file /etc/utmp [Sun88a, 1485] is used to record  who  is
          currently  logged  into  the  system.  This file can be displayed
          using the who command [Sun88a, 597]:

                  % who
                  hendra   tty0c   Mar 13 12:31
                  heidari  tty14   Mar 13 13:54
                  welgem   tty36   Mar 13 12:15
                  reagin   ttyp0   Mar 13 08:54   (aaifs.itstd.sri.)
                  ghg      ttyp1   Mar  9 07:03   (hydra.riacs.edu)
                  compion  ttyp2   Mar  1 03:01   (ei.ecn.purdue.ed)

          For each user, the login name, terminal being used,  login  time,
          and  remote  host  (if the user is logged in via the network) are
          displayed.

               The file /usr/adm/wtmp [Sun88a, 1485] records each login and
          logout  time  for  every  user.   This file can also be displayed
          using the who command:

                  % who /usr/adm/wtmp
                  davy     ttyp4    Jan  7 12:42 (annex01.riacs.ed)
                           ttyp4    Jan  7 15:33
                  davy     ttyp4    Jan  7 15:33 (annex01.riacs.ed)
                           ttyp4    Jan  7 15:35
                  hyder    ttyp3    Jan  8 09:07 (triceratops.itst)
                           ttyp3    Jan  8 11:43

          A line that contains a login name indicates  the  time  the  user
          logged  in; a line with no login name indicates the time that the
          terminal was logged off.  Unfortunately,  the  output  from  this
          command  is  rarely as simple as in the example above; if several
          users log in at once, the login and logout times  are  all  mixed
          together and must be matched up by hand using the terminal name.

               The wtmp file may also be examined using  the  last  command
          [Sun88a,  248].   This command sorts out the entries in the file,
          matching up login and logout  times.   With  no  arguments,  last
          displays  all  information  in the file.  By giving the name of a
          user or terminal, the output can be restricted to the information
          about  the  user or terminal in question.  Sample output from the
          last command is shown below.

          % last
          davy      ttyp3  intrepid.itstd.s Tue Mar 13 10:55 - 10:56 (00:00)
          hyder     ttyp3  clyde.itstd.sri. Mon Mar 12 15:31 - 15:36 (00:04)
          reboot    ~                       Mon Mar 12 15:16
          shutdown  ~                       Mon Mar 12 15:16
          arms      ttyp3  clyde0.itstd.sri Mon Mar 12 15:08 - 15:12 (00:04)
          hyder     ttyp3  spam.itstd.sri.c Sun Mar 11 21:08 - 21:13 (00:04)
          reboot    ~                       Sat Mar 10 20:05
          davy      ftp    hydra.riacs.edu  Sat Mar 10 13:23 - 13:30 (00:07)

          For each login session, the user name, terminal used, remote host
          (if  the user logged in via the network), login and logout times,
          and session duration are shown.  Additionally, the times  of  all
          system  shutdowns  and  reboots  (generated  by  the shutdown and
          reboot commands  [Sun88a,  1727,  1765])  are  recorded.   Unfor-
          tunately,  system crashes are not recorded.  In newer versions of
          the operating system, pseudo logins such as  those  via  the  ftp
          command  are  also  recorded;  an example of this is shown in the
          last line of the sample output, above.

          3.1.3   The acct File

               The file /usr/adm/acct [Sun88a, 1344-1345] records each exe-
          cution of a command on the system, who executed it, when, and how
          long it took.  This information is logged  each  time  a  command
          completes,  but only if your kernel was compiled with the SYSACCT
          option enabled (the option is enabled in  some  GENERIC  kernels,
          but is usually disabled by default).
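
               In addition to the kernel option, process accounting must  be
          turned on with the accton command, normally from the system startup
          scripts; the argument is the accounting file (the full path of  the
          accton program varies from system to system):

                  # accton /usr/adm/acct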

               The acct file can be displayed using  the  lastcomm  command
          [Sun88a,  249].   With  no  arguments, all the information in the
          file is displayed.  However, by giving a command name, user name,
          or  terminal name as an argument, the output can be restricted to
          information about the given command, user, or  terminal.   Sample
          output from lastcomm is shown below.

          % lastcomm
          sh         S     root     __         0.67 secs Tue Mar 13 12:45
          atrun            root     __         0.23 secs Tue Mar 13 12:45
          lpd         F    root     __         1.06 secs Tue Mar 13 12:44
          lpr        S     burwell  tty09      1.23 secs Tue Mar 13 12:44
          troff            burwell  tty09     12.83 secs Tue Mar 13 12:44
          eqn              burwell  tty09      1.44 secs Tue Mar 13 12:44
          df               kindred  ttyq7      0.78 secs Tue Mar 13 12:44
          ls               kindred  ttyq7      0.28 secs Tue Mar 13 12:44
          cat              kindred  ttyq7      0.05 secs Tue Mar 13 12:44
          stty             kindred  ttyq7      0.05 secs Tue Mar 13 12:44
          tbl              burwell  tty09      1.08 secs Tue Mar 13 12:44
          rlogin     S     jones    ttyp3      5.66 secs Tue Mar 13 12:38
          rlogin      F    jones    ttyp3      2.53 secs Tue Mar 13 12:41
          stty             kindred  ttyq7      0.05 secs Tue Mar 13 12:44

           The first column indicates the name of the command.  The  next
           column displays certain flags on the command:  an ``F'' means the
           process ran after a fork without a following exec,  ``S''  means
           the process ran with super-user privileges, ``D''  means  the
           process terminated with a core dump, and ``X'' means the process
           was killed by a signal.
          The  remaining  columns  show  the  name  of the user who ran the
           program, the terminal he ran it from (if applicable), the  amount
          of  CPU  time  used by the command (in seconds), and the date and
          time the process started.
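
                For example, to review only the commands run by a particular
           user, or all the uses of a particular command, give the  name  as
           an argument; the names below come from the sample output above:

                   % lastcomm jones
                   % lastcomm rlogin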

          3.2   NETWORK SECURITY

               Monitoring network security is more difficult, because there
          are  so many ways for a cracker to attempt to break in.  However,
          there are some programs available to aid you in this task.  These
          are described in this section.

          3.2.1   The syslog Facility

               The syslog facility  [Sun88a,  1773]  is  a  mechanism  that
          enables  any command to log error messages and informational mes-
          sages to the system console, as well as to  a  log  file.   Typi-
          cally,  error  messages  are logged in the file /usr/adm/messages
          along with the date, time, name of the program sending  the  mes-
          sage, and (usually) the process id of the program.  A sample seg-
          ment of the messages file is shown below.

          Mar 12 14:53:37 sparkyfs login: ROOT LOGIN ttyp3 FROM setekfs.itstd.sr
          Mar 12 15:18:08 sparkyfs login: ROOT LOGIN ttyp3 FROM setekfs.itstd.sr
          Mar 12 16:50:25 sparkyfs login: ROOT LOGIN ttyp4 FROM pongfs.itstd.sri
          Mar 12 16:52:20 sparkyfs vmunix: sd2c:  read failed, no retries
          Mar 13 06:01:18 sparkyfs vmunix: /: file system full
          Mar 13 08:02:03 sparkyfs login: ROOT LOGIN ttyp4 FROM triceratops.itst
          Mar 13 08:28:52 sparkyfs su: davy on /dev/ttyp3
          Mar 13 08:38:03 sparkyfs login: ROOT LOGIN ttyp4 FROM triceratops.itst
          Mar 13 10:56:54 sparkyfs automount[154]: host aaifs not responding
          Mar 13 11:30:42 sparkyfs login: REPEATED LOGIN FAILURES ON ttyp3 FROM
                          intrepid.itstd.s, daemon

          Of particular interest in this sample are the messages  from  the
          login  and  su  programs.   Whenever someone logs in as ``root,''
          login logs this information.  Generally, logging in  as  ``root''
          directly,   rather   than   using   the  su  command,  should  be
          discouraged, as it is hard to  track  which  person  is  actually
          using  the  account.   Once  this  ability  has been disabled, as
          described  in  Section  2.2.2,  detecting  a  security  violation
          becomes  a simple matter of searching the messages file for lines
          of this type.
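
                A search of this kind is easily done with the grep  command;
           for example, assuming the default  log  file  location  described
           above:

                   # grep "ROOT LOGIN" /usr/adm/messages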

               Login also logs any case of someone repeatedly trying to log
          in  to  an account and failing.  After three attempts, login will
          refuse to let  the  person  try  anymore.   Searching  for  these
           messages  in  the  messages  file  can  alert  you  to  a cracker
          attempting to guess someone's password.

               Finally, when someone uses the su command, either to  become
          ``root'' or someone  else, su logs the success or failure of this
          operation.  These messages can be used to check for users sharing
          their  passwords, as well as for a cracker who has penetrated one
          account and is trying to penetrate others.
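
                Again, grep can be used to pull these messages  out  of  the
           log file; the patterns below are based on  the  sample  messages
           shown above and may need adjusting for your system:

                   # grep "REPEATED LOGIN FAILURES" /usr/adm/messages
                   # grep " su: " /usr/adm/messages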

          3.2.2   The showmount Command

               The showmount command [Sun88a, 1764] can be used on  an  NFS
          file server to display the names of all hosts that currently have
          something mounted from the server.  With no options, the  program
          simply  displays  a  list  of  all the hosts.  With the -a and -d
          options, the output is somewhat more useful.  The  first  option,
          -a,  causes showmount to list all the host and directory combina-
          tions.  For example,

                  bronto.itstd.sri.com:/usr/share
                  bronto.itstd.sri.com:/usr/local.new
                  bronto.itstd.sri.com:/usr/share/lib
                  bronto.itstd.sri.com:/var/spool/mail
                  cascades.itstd.sri.com:/sparky/a
                  clyde.itstd.sri.com:/laser_dumps
                  cm1.itstd.sri.com:/sparky/a
                  coco0.itstd.sri.com:/sparky/a

          There will be one line of output for each directory mounted by  a
          host.   With  the  -d  option,  showmount  displays a list of all
          directories that are presently mounted by some host.

               The output from showmount should be checked for two  things.
          First,  only  machines  local  to your organization should appear
          there.  If you have set up proper netgroups as described in  Sec-
          tion  2.2.3,  this  should not be a problem.  Second, only ``nor-
          mal'' directories should be mounted.  If you find unusual  direc-
          tories  being  mounted,  you should find out who is mounting them
          and why - although it is probably innocent, it may indicate some-
          one trying to get around your security mechanisms.
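
                One simple way to spot hosts from outside your  organization
           is to filter your own domain out of the output;  the  domain  in
           this sketch is the one from the sample output above  and  is  only
           illustrative:

                   # showmount -a | grep -v '\.itstd\.sri\.com:'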

          3.3   FILE SYSTEM SECURITY

               Checking for security holes in the file  system  is  another
          important part of making your system secure.  Primarily, you need
          to check for files that can be modified  by  unauthorized  users,
           files  that  can  inadvertently grant users too many permissions,
          and files that can inadvertently grant access to crackers.  It is
          also important to be able to detect unauthorized modifications to
          the file system, and to recover  from  these  modifications  when
          they are made.

          3.3.1   The find Command

               The find command [Sun88a, 183-185] is a general-purpose com-
          mand  for  searching  the  file system.  Using various arguments,
          complex matching patterns based on a  file's  name,  type,  mode,
          owner,  modification time, and other characteristics, can be con-
          structed.  The names of files that are found using these patterns
          can then be printed out, or given as arguments to other UNIX com-
          mands.  The general format of a find command is

                  % find directories options

          where directories is a list of directory names to  search  (e.g.,
          /usr),  and options contains the options to control what is being
          searched for.  In general, for the examples in this section,  you
          will  always want to search from the root of the file system (/),
          in order to find all files matching the patterns presented.

               This section describes how to use find to  search  for  four
          possible security problems that were described in Section 2.

          3.3.1.1   Finding Setuid and Setgid Files

               It is important to check the system often  for  unauthorized
          setuid and setgid programs.  Because these programs grant special
          privileges to the user who is executing them, it is necessary  to
          ensure that insecure programs are not installed.  Setuid ``root''
          programs should be closely guarded - a  favorite  trick  of  many
          crackers  is to break into ``root'' once, and leave a setuid pro-
          gram hidden somewhere that will enable them to regain  super-user
          powers even if the original hole is plugged.

               The command to search for setuid and setgid files is

                  # find / -type f -a \( -perm -4000 -o -perm -2000 \) -print

          The options to this command have the following meanings:

               /    The name of the directory  to  be  searched.   In  this
                    case,  we  want to search the entire file system, so we
                     specify /.  You might instead restrict  the  search  to
                    /usr or /home.

               -type f
                    Only examine files whose type is ``f,''  regular  file.
                    Other  options  include  ``d'' for directory, ``l'' for
                    symbolic link, ``c'' for character-special devices, and
                    ``b'' for block-special devices.

               -a   This specifies ``and.''  Thus, we want  to  know  about
                    files whose type is ``regular file,'' and whose permis-
                    sions bits match the other part of this expression.

               \( -perm -4000 -o -perm -2000 \)
                    The parentheses in this part of the  command  are  used
                    for  grouping.   Thus,  everything  in this part of the
                    command matches a single pattern, and is treated as the
                    other half of the ``and'' clause described above.

                    -perm -4000
                         This specifies a match if the ``4000'' bit (speci-
                         fied as an octal number) is set in the file's per-
                         mission modes.  This is the set-user-id bit.

                    -o   This specifies ``or.''  Thus, we want to match  if
                         the  file  has  the  set-user-id  bit  or the set-
                         group-id bit set.

                    -perm -2000
                         This specifies a match if the ``2000'' bit (speci-
                         fied as an octal number) is set in the file's per-
                         mission modes.  This is the set-group-id bit.

                -print
                     This indicates that for any file  that  matches  the
                     specified  expression  (is  a  regular file and has the
                     setuid or setgid bits set in  its  permissions),  print
                     its name on the screen.

               After executing this command (depending  on  how  much  disk
          space  you have, it can take anywhere from 15 minutes to a couple
          of hours to complete), you will have a list of  files  that  have
          setuid  or setgid bits set on them.  You should then examine each
          of these programs, and determine  whether  they  should  actually
          have  these  permissions.  You should be especially suspicious of
          programs that are not in one of the directories (or  a  subdirec-
          tory) shown below.

                  /bin
                  /etc
                  /usr/bin
                  /usr/ucb
                  /usr/etc

               One file distributed with SunOS, /usr/etc/restore,  is  dis-
          tributed  with  the  setuid  bit  set  on  it, and should not be,
          because of a security hole.  You should be  sure  to  remove  the
          setuid bit from this program by executing the command

                  # chmod u-s /usr/etc/restore
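
                Because a cracker's favorite trick is to leave a new  setuid
           program behind, it can also be useful to save the output of  this
           search in a file and compare a fresh listing against it from time
           to time, much like the checklists described  in  Section  3.3.2.
           The file names below are only illustrative, and the saved  copy,
           like the master checklist, should be kept where a cracker cannot
           modify it:

                   # find / -type f -a \( -perm -4000 -o -perm -2000 \) \
                         -print > setuid.current
                   # diff setuid.master setuid.current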

          3.3.1.2   Finding World-Writable Files

               World-writable files, particularly system files,  can  be  a
          security  hole if a cracker gains access to your system and modi-
          fies  them.    Additionally,   world-writable   directories   are
          dangerous,  since  they allow a cracker to add or delete files as
          he wishes.  The find command to find all world-writable files is

                  # find / -perm -2 -print

          In this case, we do not use the  -type  option  to  restrict  the
          search,  since  we  are  interested in directories and devices as
          well as files.  The -2 specifies the world write bit (in octal).

               This list of files will be fairly  long,  and  will  include
          some files that should be world writable.  You should not be con-
          cerned if terminal devices  in  /dev  are  world  writable.   You
          should  also  not be concerned about line printer error log files
          being world writable.  Finally, symbolic links may be world writ-
          able  -  the permissions on a symbolic link, although they exist,
          have no meaning.
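
                If the symbolic links clutter the output too much, they  can
           be left out by adding a negated -type test;  this  is  merely  a
           refinement of the command above:

                   # find / \! -type l -perm -2 -print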

          3.3.1.3   Finding Unowned Files

               Finding files that are owned by nonexistent users can  often
          be  a clue that a cracker has gained access to your system.  Even
          if this is not the case, searching for these files gives  you  an
          opportunity  to  clean  up files that should have been deleted at
          the same time the user herself was deleted.  The command to  find
          unowned files is

                  # find / -nouser -print

          The -nouser option matches files that are owned by a user id  not
          contained   in  the  /etc/passwd  database.   A  similar  option,
           -nogroup, matches files owned by nonexistent groups.  To find all
           files owned by nonexistent users or groups, you would combine the
           two tests with the -o option, grouping them in escaped  parenthe-
           ses so that -print applies to both, as follows:

                   # find / \( -nouser -o -nogroup \) -print

          3.3.1.4   Finding .rhosts Files

               As mentioned in Section 2.2.1.2, users should be  prohibited
          from having .rhosts files in their accounts.  To search for this,
          it is only necessary to search the parts of the file system  that
          contain home directories (i.e., you can skip / and /usr):

                  # find /home -name .rhosts -print

          The -name option indicates that the complete  name  of  any  file
          whose name matches .rhosts should be printed on the screen.
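
                If you also want to see what each file that is found  con-
           tains, the -exec option can be used to run another command (here
           cat) on every match:

                   # find /home -name .rhosts -print -exec cat {} \;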

          3.3.2   Checklists

               Checklists can be a useful tool for discovering unauthorized
          changes  made  to  system  directories.  They aren't practical on
          file systems that contain users'  home  directories  since  these
          change  all  the time.  A checklist is a listing of all the files
          contained in a group of directories:  their sizes, owners, modif-
          ication dates, and so on.  Periodically, this information is col-
          lected and compared with the information in the master checklist.
          Files  that  do  not  match in all attributes can be suspected of
          having been changed.

               There are several utilities that implement checklists avail-
          able from public software sites (see Section 4).  However, a sim-
          ple utility can be constructed using only the  standard  UNIX  ls
          and diff commands.

               First, use the ls command [Sun88a, 285] to generate a master
          list.  This is best done immediately after installing the operat-
          ing system, but can be done at any time provided you're confident
          about the correctness of the files on the disk.  A sample command
          is shown below.

                  # ls -aslgR /bin /etc /usr > MasterChecklist

          The file MasterChecklist now contains a complete list of all  the
          files  in  these  directories.  You will probably want to edit it
          and delete the lines for files you know will  be  changing  often
          (e.g.,   /etc/utmp,  /usr/adm/acct).   The  MasterChecklist  file
           should be stored somewhere safe where a cracker  is  unlikely  to
          find  it  (since  he could otherwise just change the data in it):
          either on a different computer system, or on magnetic tape.

               To search for changes in the file system, run the  above  ls
          command  again,  saving  the  output  in  some  other  file,  say
          CurrentList.  Now use the diff command [Sun88a, 150]  to  compare
          the two files:

                  # diff MasterChecklist CurrentList

          Lines that are only in the master checklist will be printed  pre-
          ceded  by  a  ``<,''  and lines that are only in the current list
          will be preceded by a ``>.''  If there is one line  for  a  file,
          preceded  by  a  ``<,'' this means that the file has been deleted
          since the master checklist was created.  If there is one line for
          a  file,  preceded  by a ``>,'' this means that the file has been
          created since the master checklist was created.  If there are two
          lines  for  a single file, one preceded by ``<'' and the other by
          ``>,'' this indicates that some attribute of the file has changed
          since the master checklist was created.

               By carefully  constructing  the  master  checklist,  and  by
          remembering  to update it periodically (you can replace it with a
          copy of CurrentList, once you're sure the differences between the
          lists are harmless), you can easily monitor your system for unau-
          thorized changes.  The software packages available from the  pub-
          lic  software  distribution  sites  implement  basically the same
          scheme as the one here, but offer many more options for  control-
          ling what is examined and reported.
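
                The whole procedure is simple enough to be put into a  small
           shell script and run regularly (for  example,  from  cron).   A
           minimal sketch is shown below; the file locations and  the  mail
           recipient are only illustrative, and the master checklist should
           still be kept where a cracker cannot alter it:

                   #!/bin/sh
                   # Compare the current state of the system directories
                   # against the master checklist and mail any differences.
                   ls -aslgR /bin /etc /usr > /tmp/CurrentList
                   diff MasterChecklist /tmp/CurrentList | mail root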

          3.3.3   Backups

               It is impossible to overemphasize the need for a good backup
           strategy.   File  system backups not only protect you in the event
          of hardware failure or accidental deletions, but they  also  pro-
          tect  you  against  unauthorized  file  system  changes made by a
          cracker.

               A good backup strategy will dump the entire system at  level
          zero  (a  ``full''  dump)  at  least  once  a month.  Partial (or
          ``incremental'') dumps should be done at least twice a week,  and
          ideally  they  should  be  done daily.  The dump command [Sun88a,
          1612-1614] is recommended over other programs  such  as  tar  and
          cpio.   This is because only dump is capable of creating a backup
          that can be used to restore a disk to the exact state it  was  in
          when  it was dumped.  The other programs do not take into account
          files deleted or renamed between dumps, and do  not  handle  some
          specialized database files properly.
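
                A typical level zero dump of a single file  system  to  tape
           might look like the following; the disk and tape device names are
           only illustrative and will vary from system to system:

                   # dump 0uf /dev/rst0 /dev/rsd0a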

          3.4   KNOW YOUR SYSTEM

               Aside from running large monitoring programs such  as  those
          described in the previous sections, simple everyday UNIX commands
          can also be useful for spotting security violations.  By  running
          these  commands often, whenever you have a free minute (for exam-
          ple, while waiting for someone to answer  the  phone),  you  will
          become  used  to  seeing  a specific pattern of output.  By being
          familiar with the processes normally running on your system,  the
          times different users typically log in, and so on, you can easily
          detect when something is out of the ordinary.

          3.4.1   The ps Command

               The ps command [Sun88a, 399-402]  displays  a  list  of  the
          processes  running  on your system.  Ps has numerous options, too
          many to list here.  Generally, however, for the purpose of  moni-
          toring, the option string -alxww is the most useful.*  On  a  Sun
          system  running  SunOS 4.0, you should expect to see at least the
          following:

               swapper, pagedaemon
                    System programs that help the virtual memory system.

               /sbin/init
                    The init process, which  is  responsible  for  numerous
                    tasks,  including bringing up login processes on termi-
                    nals.

               portmap, ypbind, ypserv
                    Parts of the Yellow Pages system.

               biod, nfsd, rpc.mountd, rpc.quotad, rpc.lockd
                    Parts of the Network File System (NFS).  If the  system
                    you  are  looking  at  is  not  a file server, the nfsd
                    processes probably won't exist.

               rarpd, rpc.bootparamd
                    Part of the system  that  allows  diskless  clients  to
                    boot.

                Other commands you should expect to  see  are  update  (file
           system  updater);  getty  (one  per  terminal  and  one  for  the
           console); lpd (line printer daemon); inetd (Internet daemon,  for
           starting other network servers); sh and csh (the Bourne shell and
           C shell, one or more per logged in user).  In addition, if  there
           are  users  logged in, you'll probably see invocations of various
           compilers, text editors, and word processing programs.
           _________________________
             * This  is  true  for  Berkeley-based  systems.   On  System  V
           systems, the option string -elf should be used instead.

          3.4.2   The who and w Commands

               The who command, as mentioned previously, displays the  list
          of  users  currently  logged  in  on the system.  By running this
          periodically, you can learn at what times during the day  various
          users  log  in.   Then,  when you see someone logged in at a dif-
          ferent time, you can investigate and make sure that it's  legiti-
          mate.

               The w command [Sun88a, 588] is somewhat of a  cross  between
          who  and  ps.   Not  only does it show a list of who is presently
          logged in, but it also displays how  long  they  have  been  idle
          (gone  without  typing  anything),  and  what  command  they  are
          currently running.

          3.4.3   The ls Command

               Simple as its function is, ls is actually  very  useful  for
          detecting  file system problems.  Periodically, you should use ls
          on the  various  system  directories,  checking  for  files  that
          shouldn't be there.  Most of the time, these files will have just
          ``landed'' somewhere by accident.  However, by  keeping  a  close
          watch on things, you will be able to detect a cracker long before
          you might have otherwise.

               When using ls to check for oddities, be sure to use  the  -a
          option,  which  lists  files whose names begin with a period (.).
          Be particularly alert for files or directories named ``...'',  or
          ``..(space)'',  which  many  crackers  like  to use.  (Of course,
          remember that ``.'' and ``..'' are supposed to be there.)
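
                The find command can also be used to hunt  for  these  names
           throughout the file system; the quotes are needed to protect  the
           trailing space in the second pattern from the shell:

                   # find / -name "..." -print
                   # find / -name ".. " -print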

          3.5   KEEP YOUR EYES OPEN

               Monitoring for security breaches is every bit  as  important
          as  preventing  them  in the first place.  Because it's virtually
          impossible to make a system totally secure, there is  always  the
           chance,  no matter how small, that a cracker will be able to gain
          access.  Only by monitoring can this be detected and remedied.

                                      SECTION 4

                           SOFTWARE FOR IMPROVING SECURITY

               Because security is of great concern to many sites, a wealth
          of software has been developed for improving the security of UNIX
          systems.  Much of this software has been developed  at  universi-
          ties and other public institutions, and is available free for the
          asking.   This  section  describes  how  this  software  can   be
          obtained, and mentions some of the more important programs avail-
          able.

          4.1   OBTAINING FIXES AND NEW VERSIONS

               Several sites on the Internet maintain large repositories of
          public-domain  and  freely  distributable software, and make this
          material available for anonymous  FTP.   This  section  describes
          some of the larger repositories.

          4.1.1   Sun Fixes on UUNET

               Sun Microsystems has contracted  with  UUNET  Communications
          Services,  Inc.  to make fixes for bugs in Sun software available
          via anonymous FTP.  You can access these fixes by using  the  ftp
          command  [Sun88a,  195-201]  to  connect  to the host ftp.uu.net.
          Then change into the directory sun-fixes, and obtain a  directory
           listing, as shown in the example below.

          % ftp ftp.uu.net
          Connected to uunet.UU.NET.
          220 uunet FTP server (Version 5.93 Mar 20 11:01:52 EST 1990) ready
          Name (ftp.uu.net:davy): anonymous
          331 Guest login ok, send ident as password.
           Password:            enter your mail address here
          230 Guest login ok, access restrictions apply.
          ftp> cd sun-fixes
          250 CWD command successful.
          ftp> dir
          200 PORT command successful.
          150 Opening ASCII mode data connection for /bin/ls.
          total 2258
          -rw-r--r--  1 38       22           4558 Aug 31  1989 README
          -rw-r--r--  1 38       22         484687 Dec 14  1988 ddn.tar.Z
          -rw-r--r--  1 38       22         140124 Jan 13  1989 gated.sun3.Z
          -rwxr-xr-x  1 38       22          22646 Dec 14  1988 in.ftpd.sun3.Z
          .....
          .....
          -rw-r--r--  1 38       22          72119 Aug 31  1989 sendmail.sun3.Z
          -rwxr-xr-x  1 38       22          99147 Aug 31  1989 sendmail.sun4.Z
          -rw-r--r--  1 38       22           3673 Jul 11  1989 wall.sun3.Z
          -rw-r--r--  1 38       22           4099 Jul 11  1989 wall.sun4.Z
          -rwxr-xr-x  1 38       22           7955 Jan 18  1989 ypbind.sun3.Z
          -rwxr-xr-x  1 38       22           9237 Jan 18  1989 ypbind.sun4.Z
          226 Transfer complete.
          1694 bytes received in 0.39 seconds (4.2 Kbytes/s)
          ftp> quit
          221 Goodbye.
          %

          The file README contains a brief description of what each file in
          this directory contains, and what is required to install the fix.

          4.1.2   Berkeley Fixes

               The University of California at Berkeley  also  makes  fixes
          available via anonymous FTP; these fixes pertain primarily to the
          current release of BSD UNIX (currently  release  4.3).   However,
          even if you are not running their software, these fixes are still
           important, since many vendors (Sun, DEC, Sequent,  etc.)  base
          their software on the Berkeley releases.

               The Berkeley fixes are available for anonymous FTP from  the
          host  ucbarpa.berkeley.edu  in  the directory 4.3/ucb-fixes.  The
          file INDEX in this directory describes what each file contains.

               Berkeley also distributes new versions of sendmail and named
           [Sun88a,  1758-1760,  1691-1692] from this machine.  New versions
          of these commands are stored in the 4.3 directory, usually in the
          files sendmail.tar.Z and bind.tar.Z, respectively.

          4.1.3   Simtel-20 and UUNET

               The two largest general-purpose software repositories on the
          Internet are the hosts wsmr-simtel20.army.mil and ftp.uu.net.

               wsmr-simtel20.army.mil is a TOPS-20 machine operated by  the
          U.  S. Army at White Sands Missile Range, New Mexico.  The direc-
          tory pd2:<unix-c> contains a large amount of UNIX software,  pri-
          marily  taken  from  the  comp.sources newsgroups.  The file 000-
          master-index.txt contains a master list and description  of  each
          piece  of  software  available  in the repository.  The file 000-
          intro-unix-sw.txt contains information on the mailing  list  used
          to  announce  new software, and describes the procedures used for
          transferring files from the archive with FTP.

               ftp.uu.net is operated  by  UUNET  Communications  Services,
          Inc.  in Falls Church, Virginia.  This company sells Internet and
          USENET access to sites all over  the  country  (and  internation-
          ally).   The software posted to the following USENET source news-
          groups is stored here, in directories of the same name:

                  comp.sources.games
                  comp.sources.misc
                  comp.sources.sun
                  comp.sources.unix
                  comp.sources.x

          Numerous other distributions, such as all the  freely  distribut-
          able  Berkeley  UNIX  source  code, Internet Request for Comments
          (RFCs), and so on are also stored on this machine.

          4.1.4   Vendors

               Many vendors make fixes for bugs in their software available
          electronically,  either  via  mailing lists or via anonymous FTP.
          You should contact your vendor to find out  if  they  offer  this
          service,  and  if  so, how to access it.  Some vendors that offer
          these services include  Sun  Microsystems  (see  above),  Digital
          Equipment  Corp.,  the  University of California at Berkeley (see
          above), and Apple Computer.

          4.2   THE NPASSWD COMMAND

               The npasswd  command,  developed  by  Clyde  Hoover  at  the
          University  of  Texas  at Austin, is intended to be a replacement
          for the standard UNIX passwd command [Sun88a, 379],  as  well  as
          the  Sun yppasswd command [Sun88a, 611].  npasswd makes passwords
          more secure by refusing to allow users to select  insecure  pass-
          words.  The following capabilities are provided by npasswd:

               +    Configurable minimum password length

               +    Configurable to force users to use mixed case or digits
                    and punctuation

               +    Checking for ``simple'' passwords such  as  a  repeated
                    letter

               +    Checking against the host name and other  host-specific
                    information

               +    Checking against the login name, first and last  names,
                    and so on

               +    Checking for words in various  dictionaries,  including
                    the system dictionary.

               The npasswd distribution is available for anonymous FTP from
          emx.utexas.edu in the directory pub/npasswd.

          4.3   THE COPS PACKAGE

               COPS is a  security  tool  for  system  administrators  that
          checks  for  numerous  common  security problems on UNIX systems,
          including many of the things described in this document.  COPS is
          a  collection  of shell scripts and C programs that can easily be
          run on almost any UNIX variant.  Among other  things,  it  checks
          the  following items and sends the results to the system adminis-
          trator:

               +    Checks  /dev/kmem   and   other   devices   for   world
                    read/writability.

               +    Checks  special/important  files  and  directories  for
                    ``bad'' modes (world writable, etc.).

               +    Checks for easily guessed passwords.

               +    Checks for duplicate user ids, invalid  fields  in  the
                    password file, etc.

               +    Checks for duplicate group ids, invalid fields  in  the
                    group file, etc.

               +    Checks all users' home directories  and  their  .cshrc,
                    .login,  .profile, and .rhosts files for security prob-
                    lems.

               +    Checks all  commands  in  the  /etc/rc  files  [Sun88a,
                    1724-1725] and cron files [Sun88a, 1606-1607] for world
                    writability.

               +    Checks for bad ``root'' paths, NFS file system exported
                    to the world, etc.

               +    Includes an expert system that checks to see if a given
                    user  (usually ``root'') can be compromised, given that
                    certain rules are true.

               +    Checks for changes in the setuid status of programs  on
                    the system.

               The COPS package is  available  from  the  comp.sources.unix
          archive  on  ftp.uu.net,  and  also  from the repository on wsmr-
          simtel20.army.mil.

          4.4   SUN C2 SECURITY FEATURES

               With the release of SunOS 4.0,  Sun  has  included  security
          features  that  allow  the system to operate at a higher level of
          security, patterned after the C2* classification.  These features
          can be installed as one of the options when installing the system
          from the distribution tapes.  The security features added by this
          option include

               +    Audit trails that record all login  and  logout  times,
                    the  execution of administrative commands, and the exe-
                    cution of privileged (setuid) operations.

               +    A more secure password file mechanism  (``shadow  pass-
                    word  file'')  that  prevents crackers from obtaining a
                    list of the encrypted passwords.
                +    DES encryption capability.

                +    A (more) secure NFS implementation that uses public-key
                     encryption  to authenticate the users of the system and
                     the hosts on the network, to be sure  they  really  are
                     who they claim to be.

           These security features are described in detail in [Sun88c].
           _________________________
             * C2 is one of several security classifications defined by  the
           National  Computer Security Center, and is described in [NCSC85],
           the ``orange book.''

          4.5   KERBEROS

               Kerberos [Stei88] is an authentication system  developed  by
          the  Athena Project at the Massachusetts Institute of Technology.
          Kerberos  is  a  third-party  authentication  service,  which  is
          trusted by other network services.  When a user logs in, Kerberos
          authenticates that user (using a password), and provides the user
          with a way to prove her identity to other servers and hosts scat-
          tered around the network.

               This authentication is then used by programs such as  rlogin
          [Sun88a,  418-419]  to  allow  the  user to log in to other hosts
          without a password (in place of the .rhosts file).  The authenti-
          cation is also used by the mail system in order to guarantee that
          mail is delivered to the correct person, as well as to  guarantee
          that  the sender is who he claims to be.  NFS has also been modi-
          fied by M.I.T. to work with Kerberos, thereby making  the  system
          much more secure.

               The overall effect of installing Kerberos and  the  numerous
          other  programs  that  go  with  it is to virtually eliminate the
          ability of users to ``spoof'' the system into believing they  are
          someone   else.    Unfortunately,  installing  Kerberos  is  very
          intrusive, requiring the modification or replacement of  numerous
          standard  programs.  For this reason, a source license is usually
          necessary.  There are plans to make Kerberos a part of 4.4BSD, to
          be  released by the University of California at Berkeley sometime
          in 1990.

                                      SECTION 5

                             KEEPING ABREAST OF THE BUGS

               One of the hardest things about keeping a system  secure  is
          finding  out  about the security holes before a cracker does.  To
          combat this, there are several sources of information you can and
          should make use of on a regular basis.

          5.1   THE COMPUTER EMERGENCY RESPONSE TEAM

               The Computer Emergency Response Team (CERT) was  established
          in December 1988 by the Defense Advanced Research Projects Agency
          to address computer security concerns of research  users  of  the
          Internet.   It  is operated by the Software Engineering Institute
          at Carnegie-Mellon University.  The CERT serves as a focal  point
          for  the  reporting of security violations, and the dissemination
          of security advisories to the Internet community.   In  addition,
          the  team works with vendors of various systems in order to coor-
          dinate the fixes for security problems.

               The CERT sends out security advisories to the  cert-advisory
          mailing  list  whenever appropriate.  They also operate a 24-hour
          hotline that can be called to  report  security  problems  (e.g.,
          someone  breaking into your system), as well as to obtain current
          (and accurate) information about rumored security problems.

               To join the cert-advisory mailing list, send  a  message  to
          cert@cert.sei.cmu.edu  and  ask  to be added to the mailing list.
          Past advisories are available for anonymous  FTP  from  the  host
          cert.sei.cmu.edu.  The 24-hour hotline number is (412) 268-7090.

          5.2   DDN MANAGEMENT BULLETINS

               The DDN Management Bulletin is distributed electronically by
          the  Defense  Data Network (DDN) Network Information Center under
          contract to the Defense Communications Agency.  It is a means  of
          communicating  official policy, procedures, and other information
          of concern to management personnel at DDN facilities.

               The DDN Security Bulletin is distributed  electronically  by
          the  DDN  SCC (Security Coordination Center), also under contract
           to DCA, as a means of communicating information  on  network  and
          host  security  exposures,  fixes,  and  concerns to security and
          management personnel at DDN facilities.

               Anyone may join the mailing lists for these two bulletins by
          sending  a  message to nic@nic.ddn.mil and asking to be placed on
          the mailing lists.

          5.3   SECURITY-RELATED MAILING LISTS

               There are several other mailing lists operated on the Inter-
          net  that  pertain  directly  or  indirectly  to various security
          issues.  Some of the more useful ones are described below.

          5.3.1   Security

               The UNIX Security  mailing  list  exists  to  notify  system
          administrators  of  security  problems  before they become common
          knowledge, and to provide security enhancement  information.   It
          is a restricted-access list, open only to people who can be veri-
          fied as being principal systems people at a  site.   Requests  to
          join  the  list must be sent by either the site contact listed in
          the Network Information Center's  WHOIS  database,  or  from  the
          ``root''  account  on  one  of the major site machines.  You must
          include the destination address you want on the list, an  indica-
          tion  of  whether  you  want  to be on the mail reflector list or
          receive weekly digests, the electronic  mail  address  and  voice
          telephone  number  of  the  site contact if it isn't you, and the
          name, address, and telephone number of your  organization.   This
          information should be sent to security-request@cpd.com.

          5.3.2   RISKS

               The RISKS digest is a component of the ACM Committee on Com-
          puters and Public Policy, moderated by Peter G. Neumann.  It is a
          discussion forum on risks to the public in computers and  related
          systems,  and along with discussing computer security and privacy
          issues, has discussed such subjects as the  Stark  incident,  the
          shooting  down of the Iranian airliner in the Persian Gulf (as it
          relates to the computerized weapons systems), problems in air and
          railroad  traffic  control  systems, software engineering, and so
          on.   To  join  the  mailing  list,  send  a  message  to  risks-
          request@csl.sri.com.   This  list is also available in the USENET
          newsgroup comp.risks.

          5.3.3   TCP-IP

               The TCP-IP list is intended to act as a discussion forum for
          developers  and maintainers of implementations of the TCP/IP pro-
          tocol suite.  It also discusses network-related security problems
          when  they  involve  programs providing network services, such as
          sendmail.  To join the TCP-IP list, send  a  message  to  tcp-ip-
          request@nic.ddn.mil.   This  list is also available in the USENET
          newsgroup comp.protocols.tcp-ip.

          5.3.4   SUN-SPOTS, SUN-NETS, SUN-MANAGERS

               The SUN-SPOTS, SUN-NETS, and SUN-MANAGERS lists are all dis-
          cussion  groups  for users and administrators of systems supplied
          by Sun Microsystems.  SUN-SPOTS is a fairly  general  list,  dis-
          cussing  everything  from  hardware configurations to simple UNIX
          questions.   To  subscribe,  send   a   message   to   sun-spots-
          request@rice.edu.   This  list  is  also  available in the USENET
          newsgroup comp.sys.sun.

               SUN-NETS is a discussion list for items pertaining  to  net-
          working  on  Sun  systems.   Much of the discussion is related to
          NFS, Yellow Pages, and name servers.  To subscribe, send  a  mes-
          sage to sun-nets-request@umiacs.umd.edu.

               SUN-MANAGERS is a discussion list for Sun system administra-
          tors  and  covers  all  aspects of Sun system administration.  To
          subscribe, send a message to sun-managers-request@eecs.nwu.edu.

          5.3.5   VIRUS-L

               The VIRUS-L list is a forum for the discussion  of  computer
          virus  experiences, protection software, and related topics.  The
          list is open to the public, and is implemented as a mail  reflec-
          tor,  not  a  digest.  Most of the information is related to per-
          sonal computers, although some of it may be applicable to  larger
          systems.  To subscribe, send the line

                  SUB VIRUS-L your full name

          to the address listserv%lehiibm1.bitnet@mitvma.mit.edu.

                                      SECTION 6

                                  SUGGESTED READING

               This section suggests some alternate sources of  information
          pertaining to the security and administration of the UNIX operat-
          ing system.

          UNIX System Administration Handbook
          Evi Nemeth, Garth Snyder, Scott Seebass
          Prentice Hall, 1989, $26.95

               This is perhaps the best general-purpose book on UNIX system
               administration  currently on the market.  It covers Berkeley
               UNIX, SunOS, and System V.  The 26 chapters  and  17  appen-
               dices  cover numerous topics, including booting and shutting
               down the system, the file system,  configuring  the  kernel,
               adding  a  disk,  the line printer spooling system, Berkeley
               networking, sendmail, and uucp.  Of particular interest  are
               the  chapters  on  running  as  the super-user, backups, and
               security.

          UNIX Operating System Security
           F. T. Grampp and R. H. Morris
          AT&T Bell Laboratories Technical Journal
          October 1984

               This is an excellent discussion of some of the  more  common
               security  problems in UNIX and how to avoid them, written by
               two of Bell Labs' most prominent security experts.

          Password Security: A Case History
          Robert Morris and Ken Thompson
          Communications of the ACM
          November 1979

               An excellent discussion on the problem of password security,
               and  some interesting information on how easy it is to crack
               passwords and why.  This document is  usually  reprinted  in
               most vendors' UNIX documentation.

          On the Security of UNIX
          Dennis M. Ritchie
          May 1975

               A discussion on UNIX security from one of the original crea-
               tors  of  the system.  This document is usually reprinted in
               most vendors' UNIX documentation.

           The Cuckoo's Egg
          Clifford Stoll
          Doubleday, 1989, $19.95

               An excellent story of Stoll's experiences tracking down  the
               German  crackers who were breaking into his systems and sel-
               ling the data they found to the KGB.   Written  at  a  level
               that nontechnical users can easily understand.

          System and Network Administration
          Sun Microsystems
          May, 1988

               Part of the SunOS documentation,  this  manual  covers  most
               aspects  of  Sun  system  administration, including security
               issues.  A must for anyone operating a  Sun  system,  and  a
               pretty good reference for other UNIX systems as well.

          Security Problems in the TCP/IP Protocol Suite
          S. M. Bellovin
          ACM Computer Communications Review
          April, 1989

               An interesting discussion of some of the  security  problems
               with  the  protocols  in  use on the Internet and elsewhere.
               Most of these problems are far beyond  the  capabilities  of
               the  average  cracker, but it is still important to be aware
               of them.  This article is technical in nature,  and  assumes
               familiarity with the protocols.

          A Weakness in the 4.2BSD UNIX TCP/IP Software
          Robert T. Morris
          AT&T Bell Labs Computer Science Technical Report 117
          February, 1985

               An interesting article from the author of the Internet worm,
               which  describes  a  method  that  allows  remote  hosts  to
               ``spoof'' a host into believing they  are  trusted.   Again,
               this article is technical in nature, and assumes familiarity
               with the protocols.

          Computer Viruses and Related Threats: A Management Guide
          John P. Wack and Lisa J. Carnahan
          National Institute of Standards and Technology
          Special Publication 500-166

               This document  provides  a  good  introduction  to  viruses,
               worms,  trojan horses, and so on, and explains how they work
               and how they are used to attack computer  systems.   Written
               for the nontechnical user, this is a good starting point for
               learning about these security problems.  This  document  can
               be  ordered  for  $2.50  from  the U. S. Government Printing
               Office, document number 003-003-02955-6.

                                      SECTION 7

                                     CONCLUSIONS

               Computer security is playing an increasingly important  role
          in our lives as more and more operations become computerized, and
          as computer networks become more widespread.  In order to protect
          your  systems  from snooping and vandalism by unauthorized crack-
          ers, it is necessary to enable  the  numerous  security  features
          provided by the UNIX system.

               In this document, we have covered the major areas  that  can
          be made more secure:

               +    Account security

               +    Network security

               +    File system security.

          Additionally, we have discussed how to monitor for security  vio-
          lations, where to obtain security-related software and bug fixes,
          and numerous mailing lists for finding out about  security  prob-
          lems that have been discovered.

               Many crackers are not interested in breaking  into  specific
          systems, but rather will break into any system that is vulnerable
          to the attacks they know.  Eliminating these well-known holes and
          monitoring  the  system  for other security problems will usually
          serve as adequate defense against all  but  the  most  determined
          crackers.   By using the procedures and sources described in this
          document, you can make your system more secure.

                                     REFERENCES

          [Eich89]  Eichin, Mark W., and Jon A. Rochlis.   With  Microscope
                    and  Tweezers:   An  Analysis  of the Internet Virus of
                    November 1988.  Massachusetts Institute of  Technology.
                    February 1989.

          [Elme88]  Elmer-DeWitt, Philip.   ``  `The  Kid  Put  Us  Out  of
                    Action.' '' Time, 132 (20): 76, November 14, 1988.

           [Gram84]  Grampp, F. T., and R. H. Morris.  ``UNIX Operating Sys-
                    tem Security.''  AT&T Bell Laboratories Technical Jour-
                    nal, 63 (8): 1649-1672, October 1984.

          [Hind83]  Hinden, R., J. Haverty, and A. Sheltzer.   ``The  DARPA
                    Internet:  Interconnecting  Heterogeneous Computer Net-
                    works with Gateways.''  IEEE Computer Magazine, 16 (9):
                    33-48, September 1983.

          [McLe87]  McLellan, Vin.  ``NASA Hackers:  There's  More  to  the
                    Story.''  Digital Review, November 23, 1987, p. 80.

          [Morr78]  Morris, Robert, and Ken Thompson.  ``Password Security:
                    A  Case History.''  Communications of the ACM, 22 (11):
                    594-597,  November  1979.   Reprinted  in  UNIX  System
                    Manager's  Manual,  4.3 Berkeley Software Distribution.
                    University of California, Berkeley.  April 1986.

          [NCSC85]  National  Computer  Security  Center.   Department   of
                    Defense  Trusted  Computer  System Evaluation Criteria,
                    Department  of  Defense   Standard   DOD   5200.28-STD,
                    December, 1985.

          [Quar86]  Quarterman, J. S., and J. C. Hoskins.   ``Notable  Com-
                    puter  Networks.''  Communications of the ACM, 29 (10):
                    932-971, October 1986.

          [Reed84]  Reeds, J. A., and P. J.  Weinberger.   ``File  Security
                    and the UNIX System Crypt Command.''  AT&T Bell Labora-
                    tories Technical Journal, 63  (8):  1673-1683,  October
                    1984.

          [Risk87]  Forum on Risks to the Public in Computers  and  Related
                    Systems.  ACM Committee on Computers and Public Policy,
                    Peter G. Neumann, Moderator.   Internet  mailing  list.
                    Issue 5.73, December 13, 1987.

          [Risk88]  Forum on Risks to the Public in Computers  and  Related
                    Systems.  ACM Committee on Computers and Public Policy,
                     Peter G. Neumann, Moderator.   Internet  mailing  list.
                    Issue 7.85, December 1, 1988.

          [Risk89a] Forum on Risks to the Public in Computers  and  Related
                    Systems.  ACM Committee on Computers and Public Policy,
                    Peter G. Neumann, Moderator.   Internet  mailing  list.
                    Issue 8.2, January 4, 1989.

          [Risk89b] Forum on Risks to the Public in Computers  and  Related
                    Systems.  ACM Committee on Computers and Public Policy,
                    Peter G. Neumann, Moderator.   Internet  mailing  list.
                    Issue 8.9, January 17, 1989.

          [Risk90]  Forum on Risks to the Public in Computers  and  Related
                    Systems.  ACM Committee on Computers and Public Policy,
                    Peter G. Neumann, Moderator.   Internet  mailing  list.
                    Issue 9.69, February 20, 1990.

          [Ritc75]  Ritchie, Dennis M.  ``On the Security of  UNIX.''   May
                    1975.   Reprinted  in UNIX System Manager's Manual, 4.3
                    Berkeley Software Distribution.  University of Califor-
                    nia, Berkeley.  April 1986.

          [Schu90]  Schuman, Evan.  ``Bid to Unhook Worm.''   UNIX  Today!,
                    February 5, 1990, p. 1.

          [Seel88]  Seeley, Donn.  A Tour of the Worm.  Department of  Com-
                    puter Science, University of Utah.  December 1988.

          [Spaf88]  Spafford, Eugene H.   The  Internet  Worm  Program:  An
                    Analysis.   Technical Report CSD-TR-823.  Department of
                    Computer Science, Purdue University.  November 1988.

          [Stee88]  Steele, Guy L. Jr., Donald R. Woods, Raphael A. Finkel,
                    Mark  R.  Crispin, Richard M. Stallman, and Geoffrey S.
                    Goodfellow.  The Hacker's Dictionary.  New York: Harper
                    and Row, 1988.

          [Stei88]  Stein, Jennifer G., Clifford  Neuman,  and  Jeffrey  L.
                    Schiller.   ``Kerberos:  An  Authentication Service for
                    Open Network Systems.''  USENIX Conference Proceedings,
                    Dallas, Texas, Winter 1988, pp. 203-211.

          [Stol88]  Stoll, Clifford.  ``Stalking the Wily  Hacker.''   Com-
                    munications of the ACM, 31 (5): 484-497, May 1988.

          [Stol89]  Stoll, Clifford.  The Cuckoo's Egg.  New York:  Double-
                    day, 1989.

          [Sun88a]  Sun Microsystems.  SunOS Reference Manual, Part  Number
                    800-1751-10, May 1988.

           [Sun88b]  Sun Microsystems.  System and  Network  Administration,
                    Part Number 800-1733-10, May 1988.

          [Sun88c]  Sun Microsystems.  Security Features Guide, Part Number
                    800-1735-10, May 1988.

          [Sun88d]  Sun Microsystems.  ``Network  File  System:  Version  2
                    Protocol  Specification.''   Network  Programming, Part
                    Number 800-1779-10, May 1988, pp. 165-185.

                           APPENDIX A - SECURITY CHECKLIST

               This checklist summarizes the information presented in  this
          paper, and can be used to verify that you have implemented every-
          thing described.

          Account Security
               []  Password policy developed and distributed to all users
               []  All passwords checked against obvious choices
               []  Expiration dates on all accounts
               []  No ``idle'' guest accounts
               []  All accounts have passwords or ``*'' in the password field
               []  No group accounts
               []  ``+'' lines in passwd and group checked if running Yellow Pages

          Network Security
               []  hosts.equiv contains only local hosts, and no ``+''
               []  No .rhosts files in users' home directories
               []  Only local hosts in ``root'' .rhosts file, if any
               []  Only ``console'' labeled as ``secure'' in ttytab (servers only)
               []  No terminals labeled as ``secure'' in ttytab (clients only)
               []  No NFS file systems exported to the world
               []  ftpd version later than December, 1988
               []  No ``decode'' alias in the aliases file
               []  No ``wizard'' password in sendmail.cf
               []  No ``debug'' command in sendmail
               []  fingerd version later than November 5, 1988
               []  Modems and terminal servers handle hangups correctly

          File System Security
               []  No setuid or setgid shell scripts
               []  Check all ``nonstandard'' setuid and setgid programs for security
               []  Setuid bit removed from /usr/etc/restore
               []  Sticky bits set on world-writable directories
               []  Proper umask value on ``root'' account
               []  Proper modes on devices in /dev

          Backups
               []  Level 0 dumps at least monthly
               []  Incremental dumps at least twice a week
