Question

In: Computer Science

using the header: #include <pthread.h> // This is a header file for a Read/Write Lock Library....

using the header:

#include <pthread.h>

// This is a header file for a Read/Write Lock Library. Your C code
// SHOULD access your routines using these exact function
// prototypes.

typedef struct RW_lock_s
{
} RW_lock_t;


void RW_lock_init(RW_lock_t *lock);
/* This routine should be called on a pointer to a struct variable of RW_lock_t
to initialize it and ready it for use. */

void RW_read_lock(RW_lock_t *lock);
/* This routine should be called at the beginning of a READER critical section */

void RW_read_unlock(RW_lock_t *lock);
/* This routine should be called at the end of a READER critical section */

void RW_write_lock(RW_lock_t *lock);
/* This routine should be called at the beginning of a WRITER critical section */

void RW_write_unlock(RW_lock_t *lock);
/* This routine should be called at the end of a WRITER critical section */

and the implementation file:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include "RW_lock.h"


void RW_lock_init(RW_lock_t *lock)
{

}

void RW_read_lock(RW_lock_t *lock)
{
}

void RW_read_unlock(RW_lock_t *lock)
{
}

void RW_write_lock(RW_lock_t *lock)
{
    pthread_mutex_lock(&(lock->g));
}

void RW_write_unlock(RW_lock_t *lock)
{
}


Implement a reader-writer lock in C.

Solutions

Expert Solution

IMPLEMENTING A READER-WRITER LOCK

When writing concurrent code, a lock is one of the most critical tools programmers have in their toolbox. For the majority of applications, adding a lock to protect access to some data is all you need. However, for some performance-critical pieces of code, a lock may introduce significant overhead and its use has to be carefully optimized.

For example, lock granularity is frequently tweaked to make sure threads hold locks for the minimal amount of time necessary.

A common optimization for locks is using reader-writer locks. Reader-writer locks were created from the observation that multiple threads can read shared data concurrently, as long as no one is modifying that data while it's being read. Regular locks don't distinguish between "lock for reading" and "lock for writing", so when multiple threads are reading data, each will still have to lock it, producing needless serialization. In other words, necessary exclusion between readers and writers leads to unnecessary exclusion between multiple readers.

A simple implementation

Start with a simple implementation that uses a counter. All RW locks in this solution will implement this interface:

type RWLocker interface {
    RLock()
    RUnlock()
    WLock()
    WUnlock()
}

Having a single interface is very useful for testing. The names of the methods in the interface should be self-explanatory: reader lock/unlock, writer lock/unlock. The simple counter-based implementation uses a mutex and a counter:

type ReaderCountRWLock struct {
    m           sync.Mutex
    readerCount int
}

The counter keeps track of the number of readers that are holding the lock. The reader locking/unlocking is straightforward:

func (l *ReaderCountRWLock) RLock() {
    l.m.Lock()
    l.readerCount++
    l.m.Unlock()
}

func (l *ReaderCountRWLock) RUnlock() {
    l.m.Lock()
    l.readerCount--
    l.m.Unlock()
}

The writer is a bit trickier. A writer attempting to take the lock will have to wait until there are no readers inside. Here's a simple, though inefficient, way to do this:

func (l *ReaderCountRWLock) WLock() {
    for {
        l.m.Lock()
        if l.readerCount > 0 {
            l.m.Unlock()
        } else {
            break
        }
    }
}

The writer locks the mutex and checks if there are readers inside the lock. If there are, it releases the mutex and tries again - this is called spinning. If there are no readers, WLock returns with the mutex still locked, so readers won't be able to take the lock. A writer unlock is trivial:

func (l *ReaderCountRWLock) WUnlock() {
    l.m.Unlock()
}

This implementation is the simplest I could come up with. Its performance could be better, though. If readers are inside the lock, a writer will spin taking and releasing the mutex and "burn" CPU cycles. We could optimize this lock significantly if only we had some way for a writer to wait more efficiently.
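Before moving on, note that this counter-plus-mutex idea maps directly onto the pthread interface from the question. The following is a minimal sketch, not a definitive implementation: it assumes RW_lock_t holds a pthread_mutex_t named g (matching the lock->g reference in the question's skeleton) plus an int field named readers, which is a name of my own choosing.

#include <pthread.h>

typedef struct RW_lock_s
{
    pthread_mutex_t g;   /* guards the reader count and excludes writers */
    int readers;         /* number of readers currently inside           */
} RW_lock_t;

void RW_lock_init(RW_lock_t *lock)
{
    pthread_mutex_init(&(lock->g), NULL);
    lock->readers = 0;
}

void RW_read_lock(RW_lock_t *lock)
{
    pthread_mutex_lock(&(lock->g));
    lock->readers++;
    pthread_mutex_unlock(&(lock->g));
}

void RW_read_unlock(RW_lock_t *lock)
{
    pthread_mutex_lock(&(lock->g));
    lock->readers--;
    pthread_mutex_unlock(&(lock->g));
}

void RW_write_lock(RW_lock_t *lock)
{
    /* Spin: take the mutex, but keep it only once no readers remain. */
    for (;;) {
        pthread_mutex_lock(&(lock->g));
        if (lock->readers == 0)
            break;                       /* return holding the mutex */
        pthread_mutex_unlock(&(lock->g));
    }
}

void RW_write_unlock(RW_lock_t *lock)
{
    pthread_mutex_unlock(&(lock->g));
}

Just like the Go version, a writer here burns CPU while readers are active; the writer-preferring, condition-variable variant shown below avoids that.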

Reader preference vs. Writer preference

Imagine there are 2 active readers and a waiting writer; the writer waits for both readers to release the lock, but while it's waiting other readers can come in and grab it. It's like waiting at a stop sign to cross a very busy street where every car has the right of way over you - it may take you quite a while. This is called reader preference, or - more dramatically - writer starvation. It's often not what we want since it can add significant delay to updates in the system. It's great to have many readers working concurrently without blocking each other, but it's not as appealing if they are mostly reading stale data.

Let's now look at a couple of writer-preferring implementations.

A simple writer-preferring RW lock

type WritePreferRWLock struct {
    readerCount int
    hasWriter   bool
    c           *sync.Cond
}

func NewWritePreferRWLock() *WritePreferRWLock {
    return &WritePreferRWLock{0, false, sync.NewCond(new(sync.Mutex))}
}

Here readerCount is still the number of readers holding the lock, but we're adding a new field - hasWriter; this one is true whenever there's a writer waiting to take the lock. c is a condition variable and an associated mutex that implement the actual exclusion as well as efficient waiting. Let's start with the reader:

func (l *WritePreferRWLock) RLock() {
    l.c.L.Lock()
    for l.hasWriter {
        l.c.Wait()
    }
    l.readerCount++
    l.c.L.Unlock()
}

func (l *WritePreferRWLock) RUnlock() {
    l.c.L.Lock()
    l.readerCount--
    if l.readerCount == 0 {
        l.c.Broadcast()
    }
    l.c.L.Unlock()
}

It's critical for correctness to only examine the shared data when a mutex is held. When a reader comes in, it first checks if writers are waiting to grab the lock; if yes, it will wait on the condition variable. This is where the writer preference is embodied - unlike previous implementations, in this one readers give writers the right of way. When no writers are waiting, the reader increments the reader count.

When releasing the lock, the last reader out will broadcast l.c to wake up any writers that could be waiting.

Here's the writer side:

func (l *WritePreferRWLock) WLock() {
    l.c.L.Lock()
    for l.hasWriter {
        l.c.Wait()
    }
    l.hasWriter = true
    for l.readerCount > 0 {
        l.c.Wait()
    }
    l.c.L.Unlock()
}

func (l *WritePreferRWLock) WUnlock() {
    l.c.L.Lock()
    l.hasWriter = false
    l.c.Broadcast()
    l.c.L.Unlock()
}

A writer starts by checking that no other writer is active. Note that unlike in previous implementations, here writers don't hold the mutex between WLock and WUnlock; instead, the mutex is only used to control access to the shared struct, and the hasWriter field plays the double role of "writer waiting to get lock" and "writer is using lock". Once there are no more writers, it flips hasWriter and waits for existing readers to clear out. Recall that setting hasWriter to true guarantees that no new readers will grab the lock.

In WUnlock, the writer unmarks hasWriter and broadcasts on the condition variable. We have to use Broadcast rather than Signal here because multiple readers can be waiting and we want to release all of them.
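This writer-preferring scheme also translates fairly directly to the pthread interface from the question. The sketch below is again an illustration under my own assumptions rather than the one required implementation: the mutex keeps the skeleton's name g, while cond, readers, and has_writer are field names of my choosing, and pthread_cond_broadcast plays the role of Go's Broadcast.

#include <pthread.h>

typedef struct RW_lock_s
{
    pthread_mutex_t g;          /* protects the fields below                 */
    pthread_cond_t  cond;       /* signalled whenever the lock state changes */
    int             readers;    /* readers currently holding the lock        */
    int             has_writer; /* nonzero while a writer waits or writes    */
} RW_lock_t;

void RW_lock_init(RW_lock_t *lock)
{
    pthread_mutex_init(&(lock->g), NULL);
    pthread_cond_init(&(lock->cond), NULL);
    lock->readers = 0;
    lock->has_writer = 0;
}

void RW_read_lock(RW_lock_t *lock)
{
    pthread_mutex_lock(&(lock->g));
    while (lock->has_writer)                   /* give writers the right of way */
        pthread_cond_wait(&(lock->cond), &(lock->g));
    lock->readers++;
    pthread_mutex_unlock(&(lock->g));
}

void RW_read_unlock(RW_lock_t *lock)
{
    pthread_mutex_lock(&(lock->g));
    lock->readers--;
    if (lock->readers == 0)
        pthread_cond_broadcast(&(lock->cond)); /* wake a waiting writer */
    pthread_mutex_unlock(&(lock->g));
}

void RW_write_lock(RW_lock_t *lock)
{
    pthread_mutex_lock(&(lock->g));
    while (lock->has_writer)                   /* wait for other writers    */
        pthread_cond_wait(&(lock->cond), &(lock->g));
    lock->has_writer = 1;                      /* block new readers         */
    while (lock->readers > 0)                  /* wait for readers to drain */
        pthread_cond_wait(&(lock->cond), &(lock->g));
    pthread_mutex_unlock(&(lock->g));
}

void RW_write_unlock(RW_lock_t *lock)
{
    pthread_mutex_lock(&(lock->g));
    lock->has_writer = 0;
    pthread_cond_broadcast(&(lock->cond));     /* wake waiting readers and writers */
    pthread_mutex_unlock(&(lock->g));
}

As in the Go version, the mutex is held only while the bookkeeping fields are examined or updated, never across a whole reader or writer critical section.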

A more efficient writer-preferring RW lock

This is the most complicated implementation, so don't be discouraged if you don't get it on the first try. Feel free to experiment with the code and add some logging if you want to get a better grasp of what's going on.

The goal is to make lock taking - especially in readers - as fast as possible, while at the same time giving writers the right of way.

As usual, we'll start with the struct and constructor:

type WritePreferFastRWLock struct {
    w sync.Mutex
    writerWait chan struct{}
    readerWait chan struct{}
    numPending int32
    readersDeparting int32
}

const maxReaders int32 = 1 << 30

func NewWritePreferFastRWLock() *WritePreferFastRWLock {
    var l WritePreferFastRWLock
    l.writerWait = make(chan struct{})
    l.readerWait = make(chan struct{})
    return &l
}

Now let's present the reader methods and explain the relevant fields along the way:

func (l *WritePreferFastRWLock) RLock() {
    if atomic.AddInt32(&l.numPending, 1) < 0 {
        <-l.readerWait
    }
}

func (l *WritePreferFastRWLock) RUnlock() {
    if r := atomic.AddInt32(&l.numPending, -1); r < 0 {
        if atomic.AddInt32(&l.readersDeparting, -1) == 0 {
            l.writerWait <- struct{}{}
        }
    }
}

The mutex w is not used by readers at all. Its sole purpose is to provide mutual exclusion between writers, so we'll get to it later. The most critical field in this implementation is numPending. It's used to mark the number of readers that are using the lock, but is sneakily used by writers as well. Writers subtract maxReaders from this field, so a negative value means a writer is using the lock. Access to the field is done with atomics - so no locking required.

The reader adds 1 to numPending (atomically). If the value of numPending is not negative, there are no waiting/active writers and the reader can proceed. This path is expected to be the most common; hence, it's extremely quick - only a single atomic add followed by a branch. If numPending is negative it means a writer is waiting on the lock or using it, so the reader will give the writer the right of way. This is done by waiting on an unbuffered channel.

When a reader is done, it decrements numPending. If there are no writers, it's done - again, very quick for the common path. If writers are waiting, the readersDeparting field is used to ensure that only a single reader releases one waiting writer. readersDeparting is set by a waiting writer.

Writer lock:

func (l *WritePreferFastRWLock) WLock() {
    l.w.Lock()
    r := atomic.AddInt32(&l.numPending, -maxReaders) + maxReaders
    if r != 0 && atomic.AddInt32(&l.readersDeparting, r) != 0 {
        <-l.writerWait
    }
}

The w mutex is used here to ensure that only one writer is inside the lock at any given time. The second line of the function is a bit tricky, since it does two things:

  1. It announces to readers that a writer is pending by subtracting maxReaders from numPending.
  2. It calculates the number of active readers by adding maxReaders back, storing the result in the local variable r.

Then, if there are any active readers (r != 0), it adds their number to readersDeparting - this tells departing readers how many of them must leave before the last one releases writerWait. The writer then blocks on writerWait, and WLock returns (holding the lock) when the last departing reader is done. At this point the writer holds the lock exclusively (other writers are excluded by w, and new readers wait until numPending turns positive again).

Writer unlock:

func (l *WritePreferFastRWLock) WUnlock() {
    r := atomic.AddInt32(&l.numPending, maxReaders)
    for i := 0; i < int(r); i++ {
        l.readerWait <- struct{}{}
    }
    l.w.Unlock()
}

Once again, the clever usage of numPending takes a bit of time to decipher here. By adding maxReaders, the writer tells future readers that there are no more writers using the lock. The resulting r is the number of readers that accumulated waiting for the lock while this writer was active (review RLock again). The writer now releases all of them by sending r dummy objects into readerWait. Finally, it unlocks the writer-exclusion mutex w.
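For completeness, this last idea can also be approximated on top of the question's pthread interface, although Go's channels and atomic package have no exact pthread counterpart. The sketch below is assumption-heavy: it substitutes C11 atomics (<stdatomic.h>) for atomic.AddInt32 and POSIX semaphores (<semaphore.h>) for the unbuffered channels, and every field name beyond the required interface is my own. Because atomic_fetch_add returns the old value while Go's AddInt32 returns the new one, the comparisons are adjusted accordingly.

#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>

#define MAX_READERS (1 << 30)

typedef struct RW_lock_s
{
    pthread_mutex_t w;             /* writer-writer exclusion                    */
    sem_t writer_wait;             /* a writer blocks here for departing readers */
    sem_t reader_wait;             /* readers block here while a writer is in    */
    atomic_int num_pending;        /* reader count; negative => writer pending   */
    atomic_int readers_departing;  /* readers the waiting writer must wait out   */
} RW_lock_t;

void RW_lock_init(RW_lock_t *lock)
{
    pthread_mutex_init(&(lock->w), NULL);
    sem_init(&(lock->writer_wait), 0, 0);
    sem_init(&(lock->reader_wait), 0, 0);
    atomic_init(&(lock->num_pending), 0);
    atomic_init(&(lock->readers_departing), 0);
}

void RW_read_lock(RW_lock_t *lock)
{
    /* Fast path: no writer pending, just count ourselves in. */
    if (atomic_fetch_add(&(lock->num_pending), 1) + 1 < 0)
        sem_wait(&(lock->reader_wait));        /* a writer is in: wait */
}

void RW_read_unlock(RW_lock_t *lock)
{
    if (atomic_fetch_add(&(lock->num_pending), -1) - 1 < 0) {
        /* A writer is waiting; the last departing reader releases it. */
        if (atomic_fetch_add(&(lock->readers_departing), -1) - 1 == 0)
            sem_post(&(lock->writer_wait));
    }
}

void RW_write_lock(RW_lock_t *lock)
{
    pthread_mutex_lock(&(lock->w));            /* one writer at a time */
    /* Announce the writer; the old value r is the number of active readers. */
    int r = atomic_fetch_add(&(lock->num_pending), -MAX_READERS);
    if (r != 0 && atomic_fetch_add(&(lock->readers_departing), r) + r != 0)
        sem_wait(&(lock->writer_wait));        /* wait for them to drain */
}

void RW_write_unlock(RW_lock_t *lock)
{
    /* Tell readers no writer is active; r is how many queued up meanwhile. */
    int r = atomic_fetch_add(&(lock->num_pending), MAX_READERS) + MAX_READERS;
    for (int i = 0; i < r; i++)
        sem_post(&(lock->reader_wait));        /* release each waiting reader */
    pthread_mutex_unlock(&(lock->w));
}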

