In the mid-1980s, Sun Microsystems developed a series of network protocols - Remote Procedure Call (RPC), the Network Information System (NIS, previously known as Yellow Pages, or YP), and the Network Filesystem (NFS) - that let a network of workstations operate as if they were a single computer system. RPC, NIS, and NFS were largely responsible for Sun's success as a computer manufacturer: they made it possible for every computer user at an organization to enjoy the power and freedom of an individual, dedicated computer system, while reaping the benefits of using a system that was centrally administered.
 Sun stopped using the name Yellow Pages when the company discovered that the name was a trademark of British Telecom in Great Britain. Nevertheless, the commands continue to start with the letters "yp."
Sun was not the first company to develop a network-based operating system, nor was Sun's approach technically the most sophisticated. One of the most important features missing from Sun's design was security: Sun's RPC and NFS had virtually none, effectively throwing open the resources of a computer system to the whims of the network's users.
Despite this failing (or perhaps because of it), Sun's technology soon became the standard. The University of California at Berkeley developed an implementation of RPC, NIS, and NFS that interoperated with Sun's. As UNIX workstations became more popular, other companies - such as HP, Digital, and even IBM - adopted Berkeley's software, licensed Sun's, or developed their own.
Over time, Sun developed some fixes for the security problems in RPC and NFS. Meanwhile, a number of other competing and complementary systems - for example, Kerberos and DCE - were developed for solving many of the same problems. As a result, today's system manager has a choice of many different systems for remote procedure calls and configuration management, each with its own trade-offs in terms of performance, ease of administration, and security. This chapter describes the main systems available today and makes a variety of observations on system security. For a full discussion of NFS, see Chapter 20, NFS.
Fundamentally, a network-based configuration management system such as NIS provides three capabilities:

A system for storing information on a network server

A mechanism for updating the stored information

A mechanism for distributing the stored information to other computers on the network
Early systems performed these functions and little else. In a friendly network environment, these are the only capabilities that are needed.
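The three capabilities can be illustrated with a minimal sketch. The class and method names below are purely illustrative - they are not part of any real NIS or RPC API - and, like the early systems described here, the sketch performs no authentication of any kind:

```python
# A minimal, in-memory sketch of the three basic capabilities of a
# network configuration system: storage, update, and distribution.
# All names are hypothetical; nothing here mirrors the real NIS API.

class ConfigServer:
    """Stores configuration maps on a central server (capability 1)."""

    def __init__(self):
        self.maps = {}      # e.g. {"hosts": {"mail": "192.0.2.25"}}
        self.clients = []   # client machines that receive copies

    def update(self, map_name, key, value):
        """Updates the stored information (capability 2)."""
        self.maps.setdefault(map_name, {})[key] = value
        self.distribute()

    def distribute(self):
        """Pushes the information to the other computers (capability 3)."""
        for client in self.clients:
            # Send each client its own copy of every map.
            client.receive({m: dict(kv) for m, kv in self.maps.items()})


class ConfigClient:
    """A workstation that keeps a local copy of the server's maps."""

    def __init__(self):
        self.maps = {}

    def receive(self, maps):
        self.maps = maps


server = ConfigServer()
client = ConfigClient()
server.clients.append(client)

server.update("hosts", "mail", "192.0.2.25")
print(client.maps["hosts"]["mail"])   # the client now sees the update
```

Note what is absent: the server will push maps to any client on its list, and any caller may invoke update. In a friendly environment that is adequate; the security capabilities discussed next address exactly these gaps.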
However, in an environment that is potentially hostile, or when an organization's network is connected with an external network that is not under that organization's control, security becomes a concern. To provide some degree of security for network services, the following additional capabilities are required:
Server authentication. Clients need to have some way of verifying that the server they are communicating with is a valid server.
Client authentication. Servers need to know that the clients are in fact valid client machines.
User authentication. There needs to be a mechanism for verifying that the user sitting in front of a client workstation is in fact who the user claims to be.
Data integrity. A system is required for verifying that the data received over the network has not been modified during its transmission.
Data confidentiality. A system is required for protecting information sent over the network from eavesdropping.
These capabilities are independent of one another. A system can provide client authentication and user authentication, yet still require that the clients implicitly trust that the servers on the network are, in fact, legitimate servers. Likewise, a system can authenticate both the users and the computers, but send all information without encryption or digital signatures, making it susceptible to modification or monitoring en route.
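Data integrity, for instance, can be provided on its own by attaching a message authentication code to each packet. The sketch below uses a modern HMAC construction from Python's standard library - not the mechanism any of the systems described in this chapter actually used - and assumes the shared key has already been established by some key-exchange mechanism, such as the one Kerberos provides:

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice this would be negotiated by
# a key-exchange protocol, not hardcoded.
KEY = b"shared-secret"
MAC_LEN = hashlib.sha256().digest_size   # 32 bytes

def sign(message: bytes) -> bytes:
    """Prefix the message with a MAC, providing data integrity."""
    mac = hmac.new(KEY, message, hashlib.sha256).digest()
    return mac + message

def verify(packet: bytes) -> bytes:
    """Return the message, rejecting any packet modified in transit."""
    mac, message = packet[:MAC_LEN], packet[MAC_LEN:]
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("message modified in transit")
    return message

packet = sign(b"mount /home")
assert verify(packet) == b"mount /home"

# Flip one byte of the message, as an attacker on the wire might.
tampered = packet[:-1] + b"!"
try:
    verify(tampered)
except ValueError:
    print("tampering detected")
```

This provides integrity and nothing else: the message still travels in the clear (no confidentiality), and possession of the key is the only notion of identity (no distinct server, client, or user authentication).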
Obviously, the most secure network systems provide all five network security capabilities.