Install and Configure NFS Server on Rocky Linux 8

In this tutorial, we will learn how to install and configure an NFS server on Rocky Linux 8. Network File System (NFS) is a commonly used file-based storage system that allows remote systems to access files over a computer network and interact with them as if they were mounted locally. This enables system administrators to consolidate resources onto centralized servers on a network for easy sharing.

Installing NFS server on Rocky Linux 8

To configure the NFS server, we will be using two Rocky Linux 8 servers:

  1. NFS server:
  2. NFS client:

Install NFS Packages

Before proceeding with the configuration, you need to install NFS packages by running the command below.

dnf install nfs-utils -y

Configure NFS Server on Rocky Linux 8

Once the NFS packages are installed, proceed with the configuration.

Update host’s DNS domain name

The NFS server's domain name can be updated by editing the file /etc/idmapd.conf: uncomment and change the line below accordingly.

#Domain =

You can simply run the command below to uncomment and change the domain name.

Replace the placeholder domain, example.com, with your server's domain name.

sed -i '/^#Domain/s/^#//;/^Domain/s/=.*/= example.com/' /etc/idmapd.conf

idmapd is the NFSv4 ID name mapping daemon which provides functionality to the NFSv4 kernel client and server, to which it communicates via upcalls, by translating user and group IDs to names, and vice versa.
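To see exactly what the sed expression does, here is a self-contained dry run on a temporary copy (the file contents and the domain example.com are illustrative assumptions, not your real configuration):

```shell
# Dry run of the sed expression on a temporary copy of a default
# idmapd.conf-style file (example.com is a placeholder domain).
tmp=$(mktemp)
printf '%s\n' '[General]' '#Domain = local.domain.edu' > "$tmp"

# Uncomment the Domain line, then replace its value with the new domain.
sed -i '/^#Domain/s/^#//;/^Domain/s/=.*/= example.com/' "$tmp"

grep '^Domain' "$tmp"   # Domain = example.com
rm -f "$tmp"
```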

Define NFS Server Shares

The file /etc/exports contains a table of local physical file systems on an NFS server that are accessible to NFS clients.

You need to edit this file to add the file systems or directories to be exported to clients, and to specify the options to apply to those shares.

NB: Each entry for an exported file system has the following structure:

export host(options)


  • export is the file system or directory to be mounted on remote host
  • host is the remote host/client to be allowed to access a shared folder. The host can be defined as:
    • single host: You may specify a host either by an abbreviated name recognized by the resolver, the fully qualified domain name, an IPv4 address, or an IPv6 address.
    • IP networks: You can define hosts by specifying an IP address and netmask pair as address/netmask.
    • wildcards: Machine names may contain the wildcard characters * and ?, or may contain character class lists within [square brackets]. This can be used to make the exports file more compact; for instance, *.example.com matches all hosts in the domain example.com. As these characters also match the dots in a domain name, the pattern will also match all hosts within any subdomain of example.com.
    • netgroups: NIS netgroups may be given as @group. Only the host part of each netgroup member is considered when checking for membership. Empty host parts or those containing a single dash (-) are ignored.
    • anonymous: This is specified by a single * character (not to be confused with the wildcard entry above) and will match all clients.
  • options are comma separated list of options. Some of the options that can be used include:
    • root_squash: Prevents root users connected remotely from having root privileges and assigns them the user ID for the user nfsnobody thus “squashing” the power of the remote root user to the lowest local user, preventing unauthorized alteration of files on the remote server.
    • no_root_squash: Turns off root squashing. Remote root users are able to change any file on the shared file system. This option is mainly useful for diskless clients; avoid it unless you fully trust the remote root users.
    • all_squash: Map all uids and gids to the anonymous user. Useful for NFS-exported public FTP directories, news spool directories, etc. The opposite option is no_all_squash, which is the default setting.
    • anonuid=UID and anongid=GID: These options explicitly set the uid and gid of the anonymous account. They are primarily useful for PC/NFS clients, where you might want all requests to appear to come from one user.
    • secure: This option requires that requests not using gss originate on an Internet port less than IPPORT_RESERVED (1024). This option is on by default. To turn it off, specify insecure.
    • rw: Allow both read and write requests on this NFS volume.
    • ro: Mounts the exported file system in read-only mode. Remote hosts are not able to make changes to the data shared on the file system. This is on by default.
    • async: allows the NFS server to violate the NFS protocol and reply to requests before any changes made by that request have been committed to stable storage. It improves performance, but at the cost that an unclean server restart (i.e. a crash) can cause data to be lost or corrupted.
    • sync: Reply to requests only after the changes have been committed to stable storage. This is on by default.
    • wdelay: Causes the NFS server to delay writing to the disk if it suspects another write request is imminent. This option is on by default.
    • no_wdelay: Turns off the above feature. This option has no effect if async is also set.
    • subtree_check: Enables subtree checking. On by default.

For more details, see man exports.

In our setup, below is the NFS share to be exported to a specific client host. Open the exports file:

vim /etc/exports
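A sample entry exporting /home read-write to a single client (the address 192.168.1.20 is an assumption for illustration; substitute your client's hostname or IP):

```
/home 192.168.1.20(rw)
```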

This will allow users on the remote host to access the shared directory /home on the NFS server, with the ability to make changes (rw). Other options that are on by default include wdelay, sync, secure, and root_squash.
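Note that whenever you edit /etc/exports while the NFS service is already running, you can apply the changes without restarting the service:

```shell
exportfs -ra   # re-read /etc/exports and re-export all shares
exportfs -v    # list current exports with their effective options
```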

Allow NFS Service on Firewalld

To allow remote hosts to access the NFS shares, you need to allow NFS service through the firewall if firewalld is running:

firewall-cmd --add-service={nfs,nfs3,mountd,rpc-bind} --permanent
firewall-cmd --reload

Running NFS Service

Start and enable both rpcbind and nfs-server

systemctl enable --now nfs-server rpcbind
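You can then confirm that both services are running and that the share is being exported:

```shell
systemctl is-active nfs-server rpcbind   # both should report "active"
exportfs -v                              # the /home share should appear here
```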

Configure NFS client

After configuring the NFS server, the shared directory or file system has to be mounted on the client so it can be accessed.

Install NFS Packages

But before that, ensure that you install NFS packages.

On Ubuntu/Debian systems;

apt install nfs-common

On CentOS/RHEL/Rocky Linux and similar distros;

dnf install nfs-utils -y

Next, edit the /etc/idmapd.conf file and set the domain name, as we did for the NFS server above.

sed -i '/^#Domain/s/^#//;/^Domain/s/=.*/= example.com/' /etc/idmapd.conf

Discover NFS Server Shares

Before mounting, you can try to discover NFS exports, that is, the shares available on the NFS server as shown below.

showmount -e <NFS-server-address>

Ensure the hostname of the NFS server is resolvable. You can also use the IP address instead of the hostname;

Export list for

Then mount the shared directory

mount -t nfs <NFS-server-address>:/home /mnt

Confirm that the shared directory is mounted by using df -hT.

df -hT -P /mnt/
Filesystem                   Type  Size  Used Avail Use% Mounted on
<NFS-server-address>:/home   nfs4   14G  2.6G   11G  19% /mnt

Configuring Automounting

Automounting with FSTAB

NFS share can also be added to fstab for automounting when the system boots. fstab is a system configuration file that specifies how the Linux kernel should mount filesystems at boot time. To mount an NFS filesystem using fstab, you need to add a line to the fstab file that specifies the NFS server, the NFS share, and the mount point.

Below is an example of an NFS share mount entry.

Replace the hostname and share name accordingly.

echo '<NFS-server-address>:/home /mnt nfs defaults 0 0' >> /etc/fstab

The _netdev mount option can also be used to tell the mount command to mount the file system only after the network has been activated. On systemd-based systems this behavior is handled by the remote-fs.target unit; to ensure that network file systems are mounted once the network is up, remote-fs.target must be enabled.

To test the usability of the NFS shares, navigate to the /home directory on the NFS server and create a testfile.txt. Then check its availability on the mount point on the NFS client. If the file exists, the configuration is okay.
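The check above boils down to two commands (run the first on the NFS server, the second on the client):

```shell
# On the NFS server:
touch /home/testfile.txt

# On the NFS client:
ls -l /mnt/testfile.txt
```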

Automounting with Autofs

fstab mounts shares statically and is cumbersome for managing several mount points at a time. As an alternative, you can use the kernel-based automount utility, the autofs daemon. autofs automatically mounts file systems on demand: when a user tries to access a directory that is under autofs control, the daemon mounts the file system, and once the user has finished accessing it, the daemon unmounts it again.

To proceed with the automounting configuration, install autofs. If you are using other Linux distros, consult their documentation on which package manager to use for installation.

dnf -y install autofs

The default configuration file for autofs is /etc/auto.master. The master map lists autofs-controlled mount points on the system and their corresponding configuration files or network sources, called automount maps.

Edit the /etc/auto.master file

vim /etc/auto.master 

Add a direct mount point at the end of the file. Direct mounts always have /- as the starting point in the master map file.

/- /etc/auto.mount

Save and exit the /etc/auto.master file.

Edit the mount point (/etc/auto.mount) and create a new map in the form:

mount-point options location

For example;

echo '/mnt -fstype=nfs,rw <NFS-server-address>:/home' >> /etc/auto.mount

Make sure the mount point directory already exists.

Start and enable autofs:

systemctl enable --now autofs

You can reboot your system and then access the mount point; the share should be mounted automatically on first access.
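With autofs running, a quick way to confirm the on-demand mount (paths as configured above):

```shell
ls /mnt                 # first access triggers the automount
mount | grep ' /mnt '   # the NFS share should now be listed as mounted
```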

