More Detailed Discussions of Some Basic Concepts

The previous section offered a brief overview of the many concepts that an AFS system administrator needs to understand. The following sections examine some important concepts in more detail. Although not all concepts are new to an experienced administrator, reading this section helps ensure a common understanding of terms and concepts.


Networks

A network is a collection of interconnected computers able to communicate with each other and transfer information back and forth.

A network can connect computers of any kind, but the typical network running AFS connects servers or high-function personal workstations with AFS file server machines. For more about the classes of machines used in an AFS environment, see Servers and Clients.

Distributed File Systems

A file system is a collection of files and the facilities (programs and commands) that enable users to access the information in the files. All computing environments have file systems.

Networked computing environments often use distributed file systems like AFS. A distributed file system takes advantage of the interconnected nature of the network by storing files on more than one computer in the network and making them accessible to all of them. In other words, the responsibility for file storage and delivery is "distributed" among multiple machines instead of relying on only one. Despite the distribution of responsibility, a distributed file system like AFS creates the illusion that there is a single filespace.
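
The idea of distributing file storage behind a single filespace can be sketched as follows. This is an illustrative model only, not AFS code; the server and path names are hypothetical.

```python
# Toy "distributed" namespace: responsibility for subtrees is spread across
# several hypothetical servers, while callers see one rooted filespace.
SUBTREE_TO_SERVER = {
    "/afs/example.com/usr": "fs1.example.com",
    "/afs/example.com/proj": "fs2.example.com",
    "/afs/example.com/common": "fs3.example.com",
}

def server_for(path: str) -> str:
    """Return the server responsible for the longest matching subtree."""
    matches = [p for p in SUBTREE_TO_SERVER if path.startswith(p)]
    if not matches:
        raise LookupError(f"no server stores {path}")
    return SUBTREE_TO_SERVER[max(matches, key=len)]

# The caller names only a path; which machine serves it stays hidden.
print(server_for("/afs/example.com/usr/pat/notes.txt"))  # fs1.example.com
```

The point of the sketch is that callers never mention a machine, only a pathname; the mapping from path to machine is the file system's responsibility.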

Servers and Clients

AFS uses a server/client model. In general, a server is a machine, or a process running on a machine, that provides specialized services to other machines. A client is a machine or process that makes use of a server's specialized service during the course of its own work, which is often of a more general nature than the server's. The functional distinction between clients and servers is not always strict, however--a server can be considered the client of another server whose service it is using.

AFS divides the machines on a network into two basic classes, file server machines and client machines, and assigns different tasks and responsibilities to each.

File Server Machines. File server machines store the files in the distributed file system, and a server process running on the file server machine delivers and receives files. AFS file server machines run a number of server processes. Each process has a special function, such as maintaining databases important to AFS administration, managing security or handling volumes. This modular design enables each server process to specialize in one area, and thus perform more efficiently. For a description of the function of each AFS server process, see AFS Server Processes and the Cache Manager.

Not all AFS server machines must run all of the server processes. Some processes run on only a few machines because the demand for their services is low. Other processes run on only one machine in order to act as a synchronization site. See The Four Roles for File Server Machines.

Client Machines. The other class of machines are the client machines, which generally work directly for users, providing computational power and other general-purpose tools; a client can also be another server that uses data stored in AFS to provide further services. Clients also provide users with access to the files stored on the file server machines. Clients run a Cache Manager, normally a combination of a kernel module and a running process, which enables them to communicate with the AFS server processes running on the file server machines and to cache files. See The Cache Manager for more information. There are usually many more client machines in a cell than file server machines.


Cells

A cell is an independently administered site running AFS. In terms of hardware, it consists of a collection of file server machines defined as belonging to the cell. To say that a cell is administratively independent means that its administrators determine many details of its configuration without having to consult administrators in other cells or a central authority. For example, a cell administrator determines how many machines of different types to run, where to put files in the local tree, how to associate volumes and directories, and how much space to allocate to each user.

The terms local cell and home cell are equivalent, and refer to the cell in which a user initially authenticates during a session, by logging onto a machine that belongs to that cell. All other cells are referred to as foreign from the user's perspective. In other words, throughout a login session, a user accesses the filespace through a single Cache Manager--the one on the machine to which he or she initially logged in--and that Cache Manager is normally configured to have a default local cell. All other cells are considered foreign during that login session, even if the user authenticates in additional cells or uses the cd command to change directories into their file trees. This distinction is mostly invisible and irrelevant to users. For most purposes, users see no difference between local and foreign cells.

It is possible to maintain more than one cell at a single geographical location. For instance, separate departments on a university campus or in a corporation can choose to administer their own cells. It is also possible to have machines at geographically distant sites belong to the same cell; only limits on the speed of network communication determine how practical this is.

Despite their independence, AFS cells generally agree to make their local filespace visible to other AFS cells, so that users in different cells can share files if they choose. If your cell is to participate in the "global" AFS namespace, it must comply with a few basic conventions governing how the local filespace is configured and how the addresses of certain file server machines are advertised to the outside world.

The Uniform Namespace and Transparent Access

One of the features that makes AFS easy to use is that it provides transparent access to the files in a cell's filespace. Users do not have to know which file server machine stores a file in order to access it; they simply provide the file's pathname, which AFS automatically translates into a machine location.

In addition to transparent access, AFS also creates a uniform namespace--a file's pathname is identical regardless of which client machine the user is working on. The cell's file tree looks the same when viewed from any client because the cell's file server machines store all the files centrally and present them in an identical manner to all clients.

To enable the transparent access and the uniform namespace features, the system administrator must follow a few simple conventions in configuring client machines and file trees. For details, see Making Other Cells Visible in Your Cell.


Volumes

A volume is a conceptual container for a set of related files that keeps them all together on one file server machine partition. Volumes can vary in size, but are (by definition) smaller than a partition. Volumes are the main administrative unit in AFS, and have several characteristics that make administrative tasks easier and help improve overall system performance.

  • The relatively small size of volumes makes them easy to move from one partition to another, or even between machines.

  • You can maintain maximum system efficiency by moving volumes to keep the load balanced evenly among the different machines. If a partition becomes full, the small size of individual volumes makes it easy to find enough room on other machines for them.

  • Each volume corresponds logically to a directory in the file tree and keeps together, on a single partition, all the data that makes up the files in the directory (including possible subdirectories). By maintaining (for example) a separate volume for each user's home directory, you keep all of the user's files together, but separate from those of other users. This is an administrative convenience that is impossible if the partition is the smallest unit of storage.

  • The directory/volume correspondence also makes transparent file access possible, because it simplifies the process of file location. All files in a directory reside together in one volume and in order to find a file, a file server process need only know the name of the file's parent directory, information which is included in the file's pathname. AFS knows how to translate the directory name into a volume name, and automatically tracks every volume's location, even when a volume is moved from machine to machine. For more about the directory/volume correspondence, see Mount Points.

  • Volumes increase file availability through replication and backup.

  • Replication (placing copies of a volume on more than one file server machine) makes the contents more reliably available; for details, see Replication. Entire sets of volumes can be backed up as dump files (possibly to tape) and restored to the file system; see Configuring the AFS Backup System and Backing Up and Restoring AFS Data. In AFS, backup also refers to recording the state of a volume at a certain time and then storing it (either on tape or elsewhere in the file system) for recovery in the event files in it are accidentally deleted or changed. See Creating Backup Volumes.

  • Volumes are the unit of resource management. A space quota associated with each volume sets a limit on the maximum volume size. See Setting and Displaying Volume Quota and Current Size.
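
The last point, the volume as the unit of resource management, can be sketched as a simple quota check. This is an illustrative model, not the AFS implementation; the volume name and quota value are hypothetical.

```python
# Toy model of a volume whose space quota caps its maximum size.
class Volume:
    def __init__(self, name: str, quota_kb: int):
        self.name = name
        self.quota_kb = quota_kb   # per-volume limit, as in AFS
        self.used_kb = 0

    def write(self, kb: int) -> None:
        """Add data to the volume, refusing writes that would exceed quota."""
        if self.used_kb + kb > self.quota_kb:
            raise OSError(f"quota exceeded on volume {self.name}")
        self.used_kb += kb

home = Volume("user.pat", quota_kb=5000)
home.write(4000)          # fits within quota
try:
    home.write(2000)      # would exceed the 5000 KB limit
except OSError as e:
    print(e)
```

Because the quota is attached to the volume rather than to a partition or user account, moving the volume to another machine carries the limit with it.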

Mount Points

The previous section discussed how each volume corresponds logically to a directory in the file system: the volume keeps together on one partition all the data in the files residing in the directory. The directory that corresponds to a volume is called its root directory, and the mechanism that associates the directory and volume is called a mount point. A mount point is similar to a symbolic link in the file tree that specifies which volume contains the files kept in a directory. A mount point is not an actual symbolic link; its internal structure is different.


Note: You must not create, in AFS, a symbolic link to a file whose name begins with the number sign (#) or the percent sign (%), because the Cache Manager interprets such a link as a mount point to a regular or read/write volume, respectively.

The use of mount points means that many of the elements in an AFS file tree that look and function just like standard UNIX file system directories are actually mount points. In form, a mount point is a symbolic link in a special format that names the volume containing the data for files in the directory. When the Cache Manager (see The Cache Manager) encounters a mount point--for example, in the course of interpreting a pathname--it looks in the volume named in the mount point. In the volume the Cache Manager finds an actual UNIX-style directory element--the volume's root directory--that lists the files contained in the directory/volume. The next element in the pathname appears in that list.
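
The traversal described above can be sketched as follows. This is an illustrative model of the lookup, not Cache Manager internals; the volume names, directory entries, and file contents are hypothetical.

```python
# Toy model: each volume has a root directory whose entries are either plain
# files or mount points naming another volume. Pathname resolution follows
# mount points into the named volume's root directory.
VOLUMES = {
    "root.cell":       {"usr": ("mount", "user.parent")},
    "user.parent":     {"pat": ("mount", "user.pat")},
    "user.pat":        {"notes.txt": ("file", b"hello")},
}

def resolve(volume: str, components: list) -> bytes:
    """Walk pathname components, crossing a mount point into its volume."""
    for name in components:
        kind, target = VOLUMES[volume][name]
        if kind == "mount":
            volume = target      # continue lookup in the mounted volume
        else:
            return target        # plain file: return its data
    raise IsADirectoryError("path names a directory, not a file")

print(resolve("root.cell", ["usr", "pat", "notes.txt"]))  # b'hello'
```

Each mount point crossed changes only which volume the next component is looked up in; to the caller, the path looks like an ordinary directory tree.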

A volume is said to be mounted at the point in the file tree where there is a mount point pointing to the volume. A volume's contents are not visible or accessible unless it is mounted. Unlike some other file systems, AFS volumes can be mounted at multiple locations in the file system at the same time.


Replication

Replication refers to making a copy, or clone, of a source read/write volume and then placing the copy on one or more additional file server machines in a cell. One benefit of replicating a volume is that it increases the availability of the contents. If one file server machine housing the volume fails, users can still access the volume on a different machine. No one machine need become overburdened with requests for a popular file, either, because the file is available from several machines.

Replication is not necessarily appropriate for cells with limited disk space, nor are all types of volumes equally suitable for replication (replication is most appropriate for volumes that contain popular files that do not change very often). For more details, see When to Replicate Volumes.
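
The availability benefit can be sketched as a reader that fails over among replica sites. This is an illustrative model only, not AFS code; the server names and the notion of a "down" set are hypothetical.

```python
import random

# Toy model: a read-only volume is available from several replica sites;
# a reader tries sites in random order and skips any that are unreachable.
REPLICA_SITES = ["fs1.example.com", "fs2.example.com", "fs3.example.com"]
DOWN = {"fs1.example.com"}   # pretend this server has failed

def fetch(volume: str) -> str:
    """Return the volume from any reachable replica site."""
    for site in random.sample(REPLICA_SITES, len(REPLICA_SITES)):
        if site not in DOWN:
            return f"{volume} served by {site}"
    raise ConnectionError("no replica site reachable")

print(fetch("root.cell.readonly"))
```

Because every site holds an identical copy, it does not matter which reachable site answers; the failure of one machine is invisible to the reader.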

Caching and Callbacks

Just as replication increases system availability, caching increases the speed and efficiency of file access in AFS. Each AFS client machine dedicates a portion of its local disk or memory to a cache where it stores data temporarily. Whenever an application program (such as a text editor) running on a client machine requests data from an AFS file, the request passes through the Cache Manager. The Cache Manager is a portion of the client machine's kernel that translates file requests from local application programs into cross-network requests to the File Server process running on the file server machine storing the file. When the Cache Manager receives the requested data from the File Server, it stores it in the cache and then passes it on to the application program.

Caching improves the speed of data delivery to application programs in the following ways:

  • When the application program repeatedly asks for data from the same file, the data is already in the local cache. The application does not have to wait for the Cache Manager to request and receive the data from the File Server.

  • Caching data eliminates the need for repeated request and transfer of the same data, so network traffic is reduced. Thus, initial requests and other traffic can get through more quickly.

While caching provides many advantages, it also creates the problem of maintaining consistency among the many cached copies of a file and the source version of a file. This problem is solved using a mechanism referred to as a callback.

A callback is a promise by a File Server to a Cache Manager to inform the latter when a change is made to any of the data delivered by the File Server. Callbacks are used differently based on the type of file delivered by the File Server:

  • When a File Server delivers a writable copy of a file (from a read/write volume) to the Cache Manager, the File Server sends along a callback with that file. If the source version of the file is changed by another user, the File Server breaks the callback associated with the cached version of that file--indicating to the Cache Manager that it needs to update the cached copy.

  • When a File Server delivers a file from a read-only volume to the Cache Manager, the File Server sends along a callback associated with the entire volume (so it does not need to send any more callbacks when it delivers additional files from the volume). Only a single callback is required per accessed read-only volume because files in a read-only volume can change only when a new version of the complete volume is released. All callbacks associated with the old version of the volume are broken at release time.
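
The callback exchange for writable files can be sketched as follows. This is an illustrative model, not the AFS protocol; the class names and file path are hypothetical, and per-volume callbacks for read-only data are omitted for brevity.

```python
# Toy model: the file server records a "callback" promise for each cache
# holding a file, and breaks those promises when the source copy changes.
class FileServer:
    def __init__(self):
        self.files = {}
        self.callbacks = {}      # path -> set of caches holding a promise

    def deliver(self, path, cache):
        """Send a file to a cache along with a callback promise."""
        self.callbacks.setdefault(path, set()).add(cache)
        cache.store(path, self.files[path])

    def write(self, path, data):
        """Change the source copy and break every outstanding callback."""
        self.files[path] = data
        for cache in self.callbacks.pop(path, set()):
            cache.break_callback(path)

class CacheManager:
    def __init__(self):
        self.cached = {}
    def store(self, path, data):
        self.cached[path] = data
    def break_callback(self, path):
        self.cached.pop(path, None)           # discard the stale copy

server = FileServer()
server.files["/afs/example.com/doc"] = b"v1"
cm = CacheManager()
server.deliver("/afs/example.com/doc", cm)
server.write("/afs/example.com/doc", b"v2")   # breaks the callback
print("/afs/example.com/doc" in cm.cached)    # False: copy discarded
```

After the break, the Cache Manager simply fetches the file again on the next request, receiving the new version along with a fresh callback.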

The callback mechanism ensures that the Cache Manager always requests the most up-to-date version of a file. However, it does not ensure that the user necessarily notices the most current version as soon as the Cache Manager has it. That depends on how often the application program requests additional data from the File Server or how often it checks with the Cache Manager.