Principles of Message-Passing Programming:
- The logical view of a machine supporting the message-passing
paradigm consists of p processes, each with its own exclusive
address space.
- Each data element must belong to one of the partitions of the
space; hence, data must be explicitly partitioned and placed.
- All interactions (read-only or read/write) require cooperation
of two processes – the process that has the data and the process
that wants to access the data.
- These two constraints, while onerous, make underlying costs
very explicit to the programmer.
- Message-passing programs are often written using the
asynchronous or loosely synchronous paradigms.
- In the asynchronous paradigm, all concurrent tasks execute
asynchronously.
- In the loosely synchronous model, tasks or subsets of tasks
synchronize to perform interactions. Between these interactions,
tasks execute completely asynchronously.
- Most message-passing programs are written using the single
program multiple data (SPMD) model.
The Building Blocks: Send and Receive Operations:
- The prototypes of these operations are as follows:

      send(void *sendbuf, int nelems, int dest)
      receive(void *recvbuf, int nelems, int source)
- Consider the following code segments:

      P0                      P1
      a = 100;                receive(&a, 1, 0);
      send(&a, 1, 1);         printf("%d\n", a);
      a = 0;
- The semantics of the send operation require that the value
received by process P1 must be 100 as opposed to 0.
- This motivates the design of the send and receive
protocols.
Buffered Blocking Message Passing Operations:
- Bounded buffer sizes can have a significant impact on performance.
      P0
      for (i = 0; i < 1000; i++) {
          produce_data(&a);
          send(&a, 1, 1);
      }

      P1
      for (i = 0; i < 1000; i++) {
          receive(&a, 1, 0);
          consume_data(&a);
      }
- What happens if the consumer is much slower than the producer? Once the buffer fills, the producer must block in send() until the consumer drains a slot, so the producer's effective rate drops to the consumer's rate.