What are the features of the Linux file system?

Linux Systems and Artifacts

Cory Altheide, Harlan Carvey, in Digital Forensics with Open Source Tools, 2011

Linux System Organization and Artifacts

To be able to locate and identify Linux system artifacts, you will need to understand how a typical Linux system is structured. This section discusses how directories and files are organized in the file system, how users are managed, and what the file metadata you will examine means.

Partitioning

Linux file systems operate from a single, unified namespace. Remember, everything is a file, and all files exist under the root directory, “/”. File systems on different local disks, removable media, and even remote servers will all appear underneath a single directory hierarchy, beginning from the root.

Filesystem Hierarchy

The standard directory structure Linux systems should adhere to is defined in the Filesystem Hierarchy Standard (FHS). This standard describes proper organization and use of the various directories found on Linux systems. The FHS is not enforced per se, but most Linux distributions adhere to it as best practice. The main directories found on a Linux system and the contents you should expect to find in them are shown in Table 5.1.

Table 5.1. Standard Linux Directories

/bin essential command binaries (for all users)
/boot files needed for the system bootloader
/dev device files
/etc system configuration files
/home user home directories
/lib essential shared libraries and kernel modules
/media mount points for removable media (usually for automounts)
/mnt temporary mount points (usually mounted manually)
/opt add-on application packages (outside of system package manager)
/root root user's home directory
/sbin system binaries
/tmp temporary files

Warning

/ vs /root

In traditional Unix nomenclature, “/” is referred to as “root,” as it is the root of the entire directory structure for the system. Unfortunately, this leads to confusion with the subdirectory “/root” found on many Linux systems. This is referred to as “slash root” or “root's home.”

Ownership and Permissions

Understanding file ownership and permission information is key to performing a successful examination of a Linux system. Ownership refers to the user and/or group that a file or directory belongs to, whereas permissions refer to the things these (and other) users can do with or to the file or directory. Access to files and directories on Linux systems is controlled by these two concepts. To examine this, we will refer back to the test file "file1" created earlier in the chapter.

user@host:~$ stat file1

 File: 'file1'

 Size: 11  Blocks: 8  IO Block: 4096 regular file

Device: 801h/2049d Inode: 452126  Links: 1

Access: (0644/-rw-r--r--) Uid: ( 1000/ user) Gid: ( 1000/ user)

Access: 2010-10-19 21:06:36.534649312 -0700

Modify: 2010-10-19 21:06:34.798639051 -0700

Change: 2010-10-19 21:06:34.798639051 -0700

The fifth line contains the information of interest: the "Access: (0644/-rw-r--r--)" item shows the permissions, and the rest of the line shows the ownership information. This file is owned by User ID 1000 and Group ID 1000. We will discuss users and groups in detail later in the chapter.

Linux permissions are divided among three groups and three tasks. Files and directories can be read, written, and executed. Permission to perform these tasks can be assigned to the owner, the group, or the world (anyone with access to the system). This file has the default permissions a file is assigned upon creation. Reading from left to right, the owner (UID 1000) can read and write to the file, anyone with a GID of 1000 can read it, and anyone with an account on the system can also read the file.
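To make the mapping concrete, the following minimal Python sketch decodes the numeric mode into the same owner/group/world string shown by stat. The file name "file1" is taken from the example above; everything else is standard library.

import os
import stat

st = os.stat("file1")
# Symbolic form, e.g. "-rw-r--r--": file type, then owner, group, and world triplets
print(stat.filemode(st.st_mode))
# Numeric form, e.g. "0o644"
print(oct(stat.S_IMODE(st.st_mode)))
# Ownership, matching the Uid and Gid fields in the stat output
print(st.st_uid, st.st_gid)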

File Attributes

In addition to standard read/write/execute permissions, Ext file systems support “attributes.” These attributes are stored in a special “attribute block” referenced by the inode. On a Linux system, these can be viewed using the lsattr command. Attributes that may be of investigative interest include

(A)—no atime updates

(a)—append only

(i)—immutable

(j)—data journaling enabled

Remember that we are working outside of file system-imposed restrictions when we use forensic tools and techniques, so these attributes do not impact our examination of the data in question. The presence of specific attributes may be of investigative interest, however.
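As a hedged illustration of how these flags might be surveyed on a live or mounted system, the short Python sketch below simply wraps the lsattr command mentioned above and reports entries carrying flags of interest. The directory path is a hypothetical example, not a path from the text.

import subprocess

# Run lsattr against a directory (hypothetical mount point) and report
# files carrying the append-only, immutable, or no-atime flags
out = subprocess.run(["lsattr", "/mnt/evidence/etc"], capture_output=True, text=True)
for line in out.stdout.splitlines():
    try:
        flags, path = line.split(None, 1)
    except ValueError:
        continue
    if "a" in flags or "i" in flags or "A" in flags:
        print(flags, path)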

Hidden Files

On Linux systems, files are “hidden” from normal view by beginning the file name with a dot (.). These files are known as dotfiles and will not be displayed by default in most graphical applications and command line utilities. Hidden files and directories are a very rudimentary way to hide data and should not be considered overtly suspicious, as many applications use them to store their nonuser-serviceable bits.

/tmp

/tmp is the virtual dumping ground of a Linux system—it is a shared scratch space, and as such all users have write permissions to this directory. It is typically used for system-wide lock files and nonuser-specific temporary files. One example of a service that uses /tmp to store lock files is the X Window Server, which provides the back end used by Linux graphical user interfaces. The fact that all users and processes can write here means that the /tmp directory is a great choice for a staging or initial entry point for an attacker to place data on the system. As an added bonus, most users never examine /tmp and would not know which random files or directories are to be expected and which are not.

Another item to note with regard to the /tmp directory can be seen in the following directory listing:

drwxrwxrwt 13 root root 4.0K 2010-10-15 13:38 tmp

Note that the directory itself is world readable, writable, and executable, but the last permission entry is a “t,” not an “x” as we would expect. This indicates that the directory has the “sticky bit” set. Files under a directory with the sticky bit set can only be deleted by the user that owns them (or the root user), even if they are world or group writable. In effect, stickiness overrules other permissions.
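A quick way to confirm this programmatically, as a minimal sketch using only the Python standard library, is to test the sticky bit in the directory's mode:

import os
import stat

st = os.stat("/tmp")
print(stat.filemode(st.st_mode))          # typically "drwxrwxrwt"
print(bool(st.st_mode & stat.S_ISVTX))    # True when the sticky bit is set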

Note

Sticky History

Decades ago, the sticky bit was placed on program files to indicate that their executable instructions should be kept in swap once the program exited. This would speed up subsequent executions for commonly used programs. While some Unix-like systems still support this behavior, it was never used for this purpose on Linux systems.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597495868000054

Energy Efficiency Scheduling in Hadoop

Wenhong Tian, Yong Zhao, in Optimized Cloud Resource Management and Scheduling, 2015

9.3.2.1 Resource collection

The resource collection module is implemented by reading the Linux procfs file system. Procfs (the proc file system) is a special file system in Unix-like operating systems that presents information about processes and other system state. Therefore, we can use it to obtain CPU and memory information.

a. Memory information

Total: the first line in /proc/meminfo;

Available: the second line in /proc/meminfo;

Mem = 1 - Available/Total.

b. CPU

Total: the first line in /proc/stat (the aggregate "cpu" line);

Each CPU: the following lines in /proc/stat, from cpu0 to cpuN;

user, nice, sys, idle: the next four columns of numbers on each line.

We read these values twice (denoted, for example, user_1 and user_2); user + sys is the used CPU time.

CPU = (int)rintf(((float)((user_2 + sys_2 + nice_2) - (user_1 + sys_1 + nice_1)) / (float)(total_2 - total_1)) * 100)
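The following Python sketch illustrates the same calculation. It is an assumption-laden reimplementation of the idea described above, not the chapter's actual module: it samples the aggregate "cpu" line of /proc/stat twice, one second apart, and reads the first two lines of /proc/meminfo as the text specifies.

import time

def cpu_sample():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    user, nice, system = fields[0], fields[1], fields[2]
    return user + nice + system, sum(fields)

busy_1, total_1 = cpu_sample()
time.sleep(1)
busy_2, total_2 = cpu_sample()
cpu = int(round(100.0 * (busy_2 - busy_1) / (total_2 - total_1)))

with open("/proc/meminfo") as f:
    total_kb = int(f.readline().split()[1])      # first line (MemTotal)
    available_kb = int(f.readline().split()[1])  # second line, per the text above
mem = 1 - available_kb / total_kb

print("CPU %:", cpu, "Mem used fraction:", round(mem, 3))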

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128014769000094

Macintosh Forensic Analysis

Anthony Kokocinski, in Handbook of Digital Forensics and Investigation, 2010

Inodes

Lastly, although HFS, HFS+, and HFSX do not organize file data with inodes the way many Linux and Unix file systems do, files may be seen on the file system whose names start with iNode followed by a number (Figure 7.3). These are not inodes, nor are they typically seen during normal user usage. These files are the central location for data that is shared between hard links on a volume; this is the mechanism HFS+ uses to let multiple names (hard links) share the same file data.


Figure 7.3A. EnCase displays the iNode files in a nameless folder. The name of this folder has unprintable characters. Previous versions of EnCase have had different interpretations of this folder; one previous name was Cases.


Figure 7.3B. FTK showing the same folder. FTK displays the folder name differently, either choosing not to display the unprintable characters or to use spaces as placeholders for them.

Although there is always more that could be said about imaging, partitioning structure, and file systems, for brevity this is as far as we will take the topic in this chapter, with some very notable exceptions in upcoming sections.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123742674000070

Postmortem Forensics

Cameron H. Malin, ... James M. Aquilina, in Malware Forensics Field Guide for Linux Systems, 2014

Examine Linux File System

Explore the file system for traces left by malware.

▸ File system data structures can provide substantial amounts of information related to a malware incident, including the timing of events and the actual content of malware. Various software applications for performing forensic examination are available, but some have significant limitations when applied to Linux file systems. Therefore, it is necessary to become familiar with tools that are specifically designed for Linux forensic examination, and to double-check important findings using multiple tools. In addition, malware is increasingly being designed to thwart file system analysis. Some malware alters date-time stamps on malicious files to make them more difficult to find with time line analysis. Other malicious code is designed to store certain information only in memory, to minimize the amount of data written to the file system. To deal with such anti-forensic techniques, it is necessary to pay careful attention to time line analysis of file system date-time stamps and to files stored in common locations where malware might be found.

One of the first challenges is to determine what time periods to focus on initially. One approach is to use the mactime histogram feature in the Sleuth Kit to find spikes in activity, as shown in Figure 3.13. The output of this command shows the most file system activity on April 7, 2004, when the operating system was installed, and reveals a spike in activity on April 8, 2004, around 07:00 and 08:00, which corresponds to the installation of a rootkit.


FIGURE 3.13. Histogram of file system date-time stamps created using mactime

Search for file types that attackers commonly use to aggregate and exfiltrate information. For example, if PGP files are not commonly used in the victim environment, searching for .asc file extensions and PGP headers may reveal activities related to the intrusion.

Review the contents of the /usr/sbin and /sbin directories for files with date-time stamps around the time of the incident, scripts that are not normally located in these directories (e.g., .sh or .php scripts), or executables not associated with any known application (hash analysis can assist in this type of review to exclude known files).

Since many of the items in the /dev directory are special files that refer to a block or character device (containing a “b” or “c” in the file permissions), digital investigators may find malware by looking for normal (non-special) files and directories.

Look for unusual or hidden files and directories, such as “.. ” (dot dot space) or “..^G ” (dot dot control-G), as these can be used to conceal tools and information stored on the system.

Intruders sometimes leave setuid copies of /bin/sh on a system to allow them root-level access at a later time. Digital investigators can use the following command to find setuid root files on the entire file system:

find /mnt/evidence -user root -perm -04000 -print
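For cross-checking the result with a second tool, a minimal Python sketch of the same search is shown below. The /mnt/evidence path is the mounted evidence file system from the find example; the logic simply looks for root-owned files with the setuid bit set.

import os
import stat

for root, dirs, files in os.walk("/mnt/evidence"):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.lstat(path)
        except OSError:
            continue
        # setuid bit set and owned by root (UID 0)
        if st.st_uid == 0 and (st.st_mode & stat.S_ISUID):
            print(path)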

When one piece of malware is found in a particular directory (e.g., /dev or /tmp), an inspection of other files in that directory may reveal additional malware, sniffer logs, configuration files, and stolen files.

Looking for files that should not be on the compromised system (e.g., illegal music libraries, warez, etc.) can be a starting point for further analysis. For instance, the location of such files, or the dates such files were placed on the system, can narrow the focus of forensic analysis to a particular area or time period.

Time line analysis is one of the most powerful techniques for organizing and analyzing file system information. Combining date-time stamps of malware-related files and system-related files such as startup scripts and application configuration files can lead to an illuminating reconstruction of events surrounding a malware incident, including the initial vector of attack and subsequent entrenchment and data theft.


Tools for generating time lines from Linux file systems, including plaso, which incorporates log entries, are discussed in the Tool Box section.

Review date-time stamps of deleted inodes for large numbers of files being deleted around the same time, which might indicate malicious activity such as installation of a rootkit or trojanized service.

Because inodes are allocated on a next available basis, malicious files placed on the system at around the same time may be assigned consecutive inodes. Therefore, after one component of malware is located, it can be productive to inspect neighboring inodes. A corollary of such inode analysis is to look for files with out-of-place inodes among system binaries (Altheide and Casey, 2010). For instance, as shown in Figure 3.14, if malware was placed in /bin or /sbin directories, or if an application was replaced with a trojanized version, the inode number may appear as an outlier because the new inode number would not be similar to inode numbers of the other, original files.


FIGURE 3.14. Trojanized binaries ifconfig and syslogd in /sbin have inode numbers that differ significantly from the majority of other (legitimate) binaries in this directory
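The following Python sketch illustrates the inode-outlier idea. The mount point and the simple "distance from the median" test are illustrative assumptions, not a rule from the text; in practice an examiner would eyeball the sorted inode numbers.

import os
import statistics

base = "/mnt/evidence/sbin"   # hypothetical mounted copy of /sbin
inodes = {name: os.lstat(os.path.join(base, name)).st_ino
          for name in os.listdir(base)}
median = statistics.median(inodes.values())
for name, ino in sorted(inodes.items(), key=lambda item: item[1]):
    marker = "  <-- possible outlier" if abs(ino - median) > 0.5 * median else ""
    print(f"{ino:>10}  {name}{marker}")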

Some digital forensic tools sort directory entries alphabetically rather than keeping them in their original order. This can be significant when malware creates a directory and the entry is appended to the end of the directory listing. For example, Figure 3.15 shows the Digital Forensic Framework displaying the contents of the /dev directory in the left window pane with entries listed in the order that they exist within the directory file rather than ordered alphabetically (the tyyec entry was added last and contains adore rootkit files). In this situation, the fact that the directory entry is last can be helpful in determining that it was created recently, even if date-time stamps have been altered using anti-forensic methods.


FIGURE 3.15. Rootkit directory displayed using the Digital Forensics Framework, which retains directory order

Once malware is identified on a Linux system, examine the file ownership and permissions to determine the owner and, if the owner is not root, look for other files owned by the offending account.

Investigative Considerations

It is often possible to narrow down the time period when malicious activity occurred on a computer, in which case digital investigators can create a time line of events on the system to identify malware and related components, such as keystroke capture logs.

There are many forensic techniques for examining Linux file systems that require a familiarity with the underlying data structures such as inode tables and journal entries. Therefore, to reduce the risk of overlooking important information, for each important file and time period in a malware incident, it is advisable to look in a methodical and comprehensive manner for patterns in related/surrounding inodes, directory entries, filenames, and journal entries using Linux forensic tools.

Although it is becoming more common for the modified time (mtime) of a file to be falsified by malware, the inode change time (ctime) is not typically updated. Therefore, discrepancies between the mtime and ctime may indicate that date-time stamps have been artificially manipulated (e.g., an mtime before the ctime).
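As a rough sketch of that check, the snippet below flags files whose mtime sits well before their ctime. The one-day margin and the target path are illustrative assumptions, and many legitimate operations (chmod, chown, and so on) also leave the mtime behind the ctime, so this is a lead generator rather than proof of tampering.

import os

def mtime_ctime_gap(path, margin_seconds=86400):
    # Flag files whose mtime is more than margin_seconds older than their ctime
    st = os.lstat(path)
    return (st.st_ctime - st.st_mtime) > margin_seconds

print(mtime_ctime_gap("/mnt/evidence/sbin/ifconfig"))   # hypothetical target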

The journal on EXT3 and EXT4 contains references to file system records that can be examined using the jls and jcat utilities in TSK.12

The increasing use of anti-forensic techniques in malware is making it more difficult to find traces on the file system. To mitigate this challenge, use all of the information available from other sources to direct a forensic analysis of the file system, including memory and logs.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597494700000036

Storage Software

James O'Reilly, in Network Storage, 2017

Problems With Swift Performance

Swift is generally a good, highly scalable storage solution, but it has a few major and possibly fundamental issues, especially in performance. These are just surfacing as issues, which is to be expected given the lack of deep experience with OpenStack. (Note that Ceph has a set of its own bottlenecks and this is generally true in all object stores now available).

First, Swift stores data on any one of several standard Linux file systems. This is an economic shortcut, but it smacks of the problems you get when you put a bus on railroad tracks—there are mismatches in function everywhere.

Object storage is database oriented. Perhaps the biggest issue is parsing the inode trees to find and access object blocks. Object stores often have hundreds of millions of small objects on each node, and the flat-file nature of storing data means that having the inodes in memory works best by far. In practice this means a lot of DRAM is used up for the inode cache, which still takes a significant search time even if it is all in memory. The problem is compounded when the inode trees are hit for every block.

Currently, a typical installation will likely provision too little cache and will find systems slowing drastically as they fill up.

Second, Swift has a messaging problem, just like Ceph. It can easily generate enough messaging to bring a typical system to its knees. There are a couple of hacks to help get around this, but the underlying complaint is that there is no pooling, which means connections are built from scratch for each communication. This is a pretty naïve design.

Third, in common with all the other object stores, Swift struggles with using SSDs. This is a major problem already, but it will mushroom as we move away from hard drives to all-SSD systems in the 2017–20 timeframe. The competition has figured this out, at least to the point of speeding up write operations by journaling onto a pair of SSDs. One can only say in mitigation that the early mind-set on object stores was to use them as the slow but very scalable cold storage for systems, and the interest in using them as universal storage has caught the architects by surprise.

Swift tends to be delivered in low-end servers, with 1 GbE links, for example. That smells strongly of fundamental performance problems, and the arrival of all-SSD systems will crack this wide open. For example, losing the underlying file system and using a key data store or something similar will shrink inode size from 1 KB each to perhaps 8 D-words, saving a lot of memory and making searches much faster.

Fixing messaging by supporting permanent links and pooling will solve another major problem, and beefing up the CPU will help too. SSD spooling will be essential in the proxy servers if the data rate moves up significantly; otherwise they'll become the bottleneck on write operations.

Since Swift is getting a great deal of corporate attention as well as the open-source communities’ help, we can expect these problems to come under the scalpel and be corrected over time.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128038635000042

Introduction

Cory Altheide, Harlan Carvey, in Digital Forensics with Open Source Tools, 2012

Layout of the Book

Beyond the introductory chapter that follows, the rest of this book is divided up into eight chapters and one Appendix.

Chapter 2 discusses the Open Source Examination Platform. We walk through all the prerequisites required to start compiling source code into executable code, install interpreters, and ensure we have a proper environment to build software on Ubuntu and Windows. We also install a Linux emulation environment on Windows along with some additional packages to bring Windows closer to “feature parity” with Linux for our purposes.

Chapter 3 details Disk and File System Analysis using the Sleuth Kit. The Sleuth Kit is the premier open source file system forensic analysis framework. We explain use of the Sleuth Kit and the fundamentals of media analysis, disk and partition structures, and file system concepts. We also review additional core digital forensics topics such as hashing and the creation of forensic images.

Chapter 4 begins our operating system-specific examination chapters with Windows Systems and Artifacts. We cover analysis of FAT and NTFS file systems, including internal structures of the NTFS Master File Table, extraction and analysis of Registry hives, event logs, and other Windows-specific artifacts. Finally, because malware-related intrusion cases are becoming more and more prevalent, we discuss some of the artifacts that can be retrieved from Windows executable files.

We continue on to Chapter 5, Linux Systems and Artifacts, where we discuss analysis of the most common Linux file systems (Ext2 and 3) and identification, extraction, and analysis of artifacts found on Linux servers and desktops. System level artifacts include items involved in the Linux boot process, service control scripts, and user account management. User-generated artifacts include Linux graphical user environment traces indicating recently opened files, mounted volumes, and more.

Chapter 6 is the final operating system-specific chapter, in which we examine Mac OS X Systems and Artifacts. We examine the HFS+ file system using the Sleuth Kit as well as an HFS-specific tool, HFSXplorer. We also analyze the Property List files that make up the bulk of OS X configuration information and user artifacts.

Chapter 7 reviews Internet Artifacts. Internet Explorer, Mozilla Firefox, Apple Safari, and Google Chrome artifacts are processed and analyzed, along with Outlook, Maildir, and mbox formatted local mail.

Chapter 8 is all about File Analysis. This chapter covers the analysis of files that aren't necessarily bound to a single system or operating system—documents, graphics files, videos, and more. Analysis of these types of files can be a big part of any investigation, and as these files move frequently between systems, many have the chance to carry traces of their source system with them. In addition, many of these file formats contain embedded information that can persist beyond the destruction of the file system or any other malicious tampering this side of wiping.

Chapter 9 covers a range of topics under the themes of Automating Analysis and Extending Capabilities. We discuss the PyFLAG and DFF graphical investigation environments. We also review the fiwalk library designed to take the pain out of automated forensic data extraction. Additionally, we discuss the generation and analysis of timelines, along with some alternative ways to think about temporal analysis during an examination.

The Appendix discusses some non-open source tools that fill some niches not yet covered by open source tools. These tools are all available free of charge, but are not provided as open source software, and as such did not fit directly into the main content of the book. That said, the authors find these tools incredibly valuable and would be remiss in not including some discussion of them.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597495868000157

File Identification and Profiling

Cameron H. Malin, ... James M. Aquilina, in Malware Forensics Field Guide for Linux Systems, 2014

File Name

Acquire and document the full file name.

▶ Identifying and documenting the suspicious file name is a foundational step in file profiling. The file name, along with the respective file hash value, will be the main identifiers for the file specimen.

Gather the subject file name and associated attributes using the ls ("list") command with the -al argument for "all" files in "long listing" format.

The output of this query, as applied against a suspect file (depicted in Figure 5.4), provides a listing of the file’s attributes, size, date, and time.


FIGURE 5.4. Using the ls -al command

The query reveals that the suspect file is 39326 bytes in size and has a time and date stamp of September 21, 2013, at 5:33 p.m. The time stamp in this instance is not particularly salient since it is the date and time that the file specimen was copied into the examination system for analysis.

Additional time stamp, inode information, and file system metadata associated with the file can be gathered using the stat, istat, and debugfs commands, as described in the Analysis Tip textbox, “A File is Born.”

Analysis Tip

“A File is Born”

Linux and Unix file systems have timestamps that reflect the change time of a respective inode (ctime), last file access (atime), and file modification time (mtime). A new feature in the EXT4 file system is a "created time" or "birth" timestamp (crtime, btime, or "Birth") denoting when a respective file was created on the disk. Collectively, these timestamps can be acquired using the stat, istat, and debugfs commands. Query a target file with stat (displays file system status) to gather file system data relating to the file, including the inode number and timestamps for access, modify, and change times. Notably, "Birth" is empty; as of this writing stat does not natively display the birth time (a kernel interface such as xstat() is required).

lab@host:~/home/lab/Malware Repository$ stat ato

 File: 'ato'

 Size: 39326    Blocks: 80    IO Block: 4096  regular file

Device: 801h/2049d Inode: 937005 Links: 1

Access: (0754/-rwxr-xr--) Uid: (1000/lab)  Gid: (1000/lab)

Access: 2013-09-21 17:42:07.716066235 -0700

Modify: 2013-09-21 17:33:57.732043481 -0700

Change: 2013-09-21 19:19:05.757617416 -0700

Birth: -

However, using the inode number provided by stat, additional inode details can be gathered using the istat command (which displays meta-data structure details) by supplying the target disk and inode number.

  root@host:/home/lab/Malware Repository# istat /dev/sda1 937005

  inode: 937005

  Allocated

Group: 114

Generation Id: 838891941

uid / gid: 1000 / 1000

mode: rrwxr-xr--

Flags:

size: 39326

num of links: 1

Inode Times:

Accessed:      Sat Sep 21 17:42:07 2013

File Modified:   Sat Sep 21 17:33:57 2013

Inode Modified:   Sat Sep 21 19:19:05 2013

Direct Blocks:

127754 0 0 136110 0 0 0 0

Lastly, use debugfs, the native Linux ext2/ext3/ext4 file system debugger, with the -R switch (which causes debugfs to execute a single command, the "request") in conjunction with the stat command, the target inode, and the disk, and the crtime is revealed.

root@host:/home/lab/Malware Repository# debugfs -R 'stat <937005>' /dev/sda1

Inode: 937005  Type: regular Mode: 0754  Flags: 0x80000

Generation: 838891941 Version: 0x00000000:00000001

User: 1000  Group: 1000  Size: 39326

File ACL: 0 Directory ACL: 0

Links: 1  Blockcount: 80

Fragment: Address: 0 Number: 0 Size: 0

 ctime: 0x523e5399:b4a14c20 -- Sat Sep 21 19:19:05 2013

 atime: 0x523e3cdf:aab936ec -- Sat Sep 21 17:42:07 2013

 mtime: 0x523e3af5:ae886364 -- Sat Sep 21 17:33:57 2013

crtime: 0x523e39c0:643dc008 -- Sat Sep 21 17:28:48 2013

Size of extra inode fields: 28

EXTENTS:

(0-9):136110-136119

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978159749470000005X

Introducing linux

Doug Abbott, in Linux for Embedded and Real-Time Applications (Fourth Edition), 2018

The Filesystem Hierarchy Standard

A Linux system typically contains a very large number of files. For example, a typical CentOS installation may contain upwards of 30,000 files occupying several GB of disk space. Clearly it is imperative that these files be organized in some consistent, coherent manner. That is the motivation behind the Filesystem Hierarchy Standard (FHS). The standard allows both users and software developers to “predict the location of installed files and directories”4. FHS is by no means specific to Linux. It applies to Unix-like operating systems in general.

The directory structure of a Linux file system always begins at the root, identified as “/.” FHS specifies several directories and their contents directly subordinate to the root. This is illustrated in Fig. 3.10. The FHS starts by characterizing files along two independent axes:


Figure 3.10. File system hierarchy.

Sharable versus nonsharable. A networked system may be able to mount certain directories through the Network File System (NFS), such that multiple users can share executables. On the other hand, some information is unique to a specific computer, and is thus not sharable.

Static versus variable. Many of the files in a Linux system are executables that do not change; they are static. But the files that users create or acquire, by downloading or e-mail for example, are variable. These two classes of files should be cleanly separated.

Here is a description of the directories defined by FHS:

/bin Contains binary executables of commands used both by users and the system administrator. FHS specifies what files /bin must contain. These include, among other things, the command shell and basic file utilities. /bin files are static and sharable.

/boot Contains everything required for the boot process except configuration files and the map installer. In addition to the kernel executable image, /boot contains data that is used before the kernel begins executing user-mode programs. /boot files are static and nonsharable.

/etc Contains host-specific configuration files and directories. With the exception of mtab, which contains dynamic information about file systems, /etc files are static. FHS identifies three optional subdirectories of /etc:

/opt Configuration files for add-on application packages contained in /opt.

/sgml Configuration files for SGML and XML

/X11 Configuration files for X windows.

In practice, most Linux distributions have many more subdirectories of /etc representing optional startup and configuration requirements.

/home (Optional) Contains user home directories. Each user has a subdirectory under home, with the same name as his/her user name. Although FHS calls this optional, in fact it is almost universal among Unix systems. The contents of subdirectories under /home is, of course, variable.

/lib Contains those shared library images needed to boot the system and run the commands in the root file system, i.e., the binaries in /bin and /sbin. In Linux systems /lib has a subdirectory, /modules, that contains kernel loadable modules.

/media Mount point for removable media. When a removable medium is auto-mounted, the mount point is usually the name of the volume.

/mnt Provides a convenient place to temporarily mount a file system.

/opt Contains optional add-in software packages. Each package has its own subdirectory under /opt.

/run Data relevant to running processes.

/root Home directory for the root user5. This is not a requirement of FHS, but is normally accepted practice and highly recommended.

/sbin Contains binaries of utilities essential for system administration such as booting, recovering, restoring, or repairing the system. These utilities are normally only used by the system administrator, and normal users should not need /sbin in their path.

/tmp Temporary files.

/usr Secondary hierarchy, see below.

/var Variable data. Includes spool directories and files, administrative and logging data, and transient and temporary files. Basically, system-wide data that changes during the course of operation. There are a number of subdirectories under /var.

The /usr hierarchy

/usr is a secondary hierarchy that contains user-oriented files. Fig. 3.11 shows the subdirectories under /usr. Several of these subdirectories mirror functionality at the root. Perhaps the most interesting subdirectory of /usr is /src for source code. This is where the Linux source is generally installed. You may in fact have sources for several Linux kernels installed in /src under subdirectories with names of the form:


Figure 3.11. /usr hierarchy.

linux-<version>-ext

You would then have a logical link named linux pointing to the kernel version you are currently working with.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128112779000031

What are the features of a file system?

Along with the file itself, file systems store metadata such as the size of the file, its attributes, and its location and place in the directory hierarchy. Metadata can also identify free blocks of available storage on the drive and how much space is available.

What are the 3 main file types in a Linux file system?

In Linux there are basically three types of files: ordinary/regular files, special files, and directories.
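As a small illustrative sketch (not from the excerpts above), the distinction can be tested with Python's stat module; the example paths are just common ones found on a typical Linux system.

import os
import stat

def classify(path):
    mode = os.lstat(path).st_mode
    if stat.S_ISREG(mode):
        return "ordinary/regular file"
    if stat.S_ISDIR(mode):
        return "directory"
    return "special file"   # devices, sockets, FIFOs, symlinks, and so on

for p in ("/etc/passwd", "/etc", "/dev/sda"):
    print(p, "->", classify(p))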