In: Computer Science
(Operating System CSE)
1) Consider a file system in which a file can be deleted and its disk space reclaimed while links to that file still exist. What problems may occur if a new file is created in the same storage area or with the same absolute path name? How can these problems be avoided?
2) Consider a file system that uses a modified contiguous-allocation scheme with support for extents. A file is a collection of extents, with each extent corresponding to a contiguous set of blocks. A key issue in such systems is the degree of variability in the size of the extents. What are the advantages and disadvantages of the following schemes?
a. All extents are of the same size, and the size is predetermined.
b. Extents can be of any size and are allocated dynamically.
c. Extents can be of a few fixed sizes, and these sizes are predetermined.
3) If all the access rights to an object are deleted, the object can no longer be accessed. At this point the object should also be deleted, and the space it occupies should be returned to the system. Suggest an efficient implementation of this scheme.
Obviously, a filesystem that reclaims a file's disk space while links to it still exist has a rather severe bug, and the only real fix is to FIX THE BUG.
Unix addressed this in the mid-1980s (in the BSD line of development) by extending a design from the 1970s: "reference counting".
Each file on disk is represented by an inode. The inode has a reference count, and the file is not deleted until the reference count goes to zero. Every time a link is made, the reference count is incremented. Every time a link is removed, the reference count is decremented.
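The link-counting behavior described above can be sketched as a small in-memory model. This is an illustrative toy, not real kernel code; the `Inode` and `FileSystem` classes and their method names are assumptions made for the example.

```python
# Toy model of inode link counting (not real kernel code).
class Inode:
    def __init__(self):
        self.link_count = 0   # number of directory entries pointing here
        self.deleted = False  # becomes True when space would be reclaimed

class FileSystem:
    def __init__(self):
        self.namespace = {}   # path -> Inode, standing in for directory entries

    def create(self, path):
        inode = Inode()
        self.link(path, inode)
        return inode

    def link(self, path, inode):
        # Every new hard link bumps the reference count.
        self.namespace[path] = inode
        inode.link_count += 1

    def unlink(self, path):
        # Removing a link decrements the count; the file's space is
        # reclaimed only when no links remain.
        inode = self.namespace.pop(path)
        inode.link_count -= 1
        if inode.link_count == 0:
            inode.deleted = True

fs = FileSystem()
ino = fs.create("/a")
fs.link("/b", ino)    # second hard link: link_count == 2
fs.unlink("/a")       # file survives, one link remains
print(ino.deleted)    # False
fs.unlink("/b")       # last link gone, space reclaimed
print(ino.deleted)    # True
```

The key point is that no link ever dangles: a directory entry either points at a live inode or it has been removed, so the "stale link to reused storage" problem from question 1 cannot arise.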
This makes it easy to know when a file should be deleted. Even if the system crashes, a file is only deleted when its reference count reaches 0. A crash could still leave "orphan" files, where a deletion could not finish in time: the reference count is non-zero, but no links remain. That is a (minor) filesystem corruption, and a filesystem-check pass was used to identify such files and enter them into the "lost+found" directory so the admin could decide what to do with them. Journaling filesystems later covered this case by recording what the filesystem was doing at the time, so crash recovery simply replays the journal and finishes the delete.
From the late 1980s through the early 1990s, this reference counting was extended to include any open file, giving a two-level count in which each process holding the file open also counts as a reference. Filesystems still tracked the links, but when the link count drops to zero the filesystem is prevented from deleting the file while the open-file count is still non-zero; the file is only reclaimed once the last open handle is closed.
This added feature made it MUCH easier to update the system, as executables and shared libraries could now be removed and replaced without disturbing running processes.
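The two-level counting idea can be sketched the same way. Again a toy model with assumed names, not kernel code: deletion happens only when both the on-disk link count and the in-memory open count reach zero.

```python
# Toy model of two-level counting: links on disk plus open handles in memory.
class OpenInode:
    def __init__(self):
        self.link_count = 1    # created with one directory entry
        self.open_count = 0    # processes currently holding the file open
        self.deleted = False

    def open(self):
        self.open_count += 1

    def close(self):
        self.open_count -= 1
        self._maybe_delete()

    def unlink(self):
        self.link_count -= 1
        self._maybe_delete()

    def _maybe_delete(self):
        # Space is reclaimed only when no links AND no open handles remain.
        if self.link_count == 0 and self.open_count == 0:
            self.deleted = True

ino = OpenInode()
ino.open()            # a process opens the file (e.g. a running executable)
ino.unlink()          # the last directory entry is removed while it is open
print(ino.deleted)    # False: the open count keeps the file alive
ino.close()           # last handle closed
print(ino.deleted)    # True: now the space is reclaimed
```

This is exactly why replacing a running executable is safe: the old inode lives on, invisible in the namespace, until the last process using it exits.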
Linux simply adopted the same scheme when it appeared in the early 1990s, as the technique had become an industry standard.
Symbolic links are explicitly NOT part of this scheme, as they can cross filesystem boundaries. They are interpreted by kernel software and are not the same kind of link, though they can act in a similar way. A dangling symbolic link is not a fault of the filesystem: the target may be in another filesystem (which might not even be mounted), so neither filesystem is in error. The symbolic link itself also records WHAT is being linked to, so information about the link survives, and the admin/user can decide what to do about it.
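The dangling-symlink behavior can be observed directly with Python's `os` module (on a POSIX system): after the target is removed, the link object still exists and still records the target path.

```python
# Demonstrate a dangling symbolic link in a temporary directory.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target.txt")
    link = os.path.join(d, "alias")

    open(target, "w").close()       # create the target file
    os.symlink(target, link)        # create a symbolic link to it

    os.remove(target)               # delete the target; the symlink stays behind
    print(os.path.islink(link))    # True: the link object itself still exists
    print(os.path.exists(link))    # False: it now dangles (target is gone)
    print(os.readlink(link))       # the recorded target path is still readable
```

Note that `os.path.exists()` follows the link and so reports False, while `os.path.islink()` inspects the link itself; the recorded path from `os.readlink()` is what lets a user decide what to do about the dangling link.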
I don't know much about file systems, but I wrote down everything I thought was correct.