SYSTEM CALL: semop()
SYSTEM CALL: semop();

PROTOTYPE: int semop ( int semid, struct sembuf *sops, unsigned nsops );
RETURNS: 0 on success (all operations performed)
         -1 on error: errno = E2BIG  (nsops greater than max number of ops allowed atomically)
                              EACCES (permission denied)
                              EAGAIN (IPC_NOWAIT asserted, operation could not go through)
                              EFAULT (invalid address pointed to by sops argument)
                              EIDRM  (semaphore set was removed)
                              EINTR  (signal received while sleeping)
                              EINVAL (set doesn't exist, or semid is invalid)
                              ENOMEM (SEM_UNDO asserted, not enough memory to create the necessary undo structure)
                              ERANGE (semaphore value out of range)

NOTES: The first argument to semop() is the IPC identifier of the semaphore set (in our case, the value returned by a call to semget()). The second argument (sops) is a pointer to an array of operations to be performed on the semaphore set, while the third argument (nsops) is the number of operations in that array.

The sops argument points to an array of type struct sembuf. This structure is declared in linux/sem.h as follows:
/* semop system call takes an array of these */
struct sembuf {
        ushort sem_num;  /* semaphore index in array */
        short  sem_op;   /* semaphore operation */
        short  sem_flg;  /* operation flags */
};
If sem_op is negative, then its value is subtracted from the semaphore. This correlates with obtaining resources that the semaphore controls or monitors access to. If IPC_NOWAIT is not specified, then the calling process sleeps until the requested amount of resources is available in the semaphore (i.e., until another process has released some).

If sem_op is positive, then its value is added to the semaphore. This correlates with returning resources back to the application's semaphore set. Resources should always be returned to a semaphore set when they are no longer needed!

Finally, if sem_op is zero (0), then the calling process will sleep until the semaphore's value is 0. This correlates to waiting for a semaphore to reach 100% utilization. A good example of this would be a daemon running with superuser permissions that could dynamically adjust the size of the semaphore set if it reaches full utilization.

In order to explain the semop() call, let's revisit our print room scenario. Let's assume only one printer, capable of only one job at a time. We create a semaphore set with only one semaphore in it (only one printer), and initialize that one semaphore to a value of one (only one job at a time). Each time we desire to send a job to this printer, we need to first make sure that the resource is available. We do this by attempting to obtain one unit from the semaphore. Let's load up a sembuf array to perform the operation:
struct sembuf sem_lock = { 0, -1, IPC_NOWAIT };

The above initialized structure says that a value of ``-1'' will be added to semaphore number 0 in the semaphore set. In other words, one unit of resources will be obtained from the only semaphore in our set (the 0th member). Because IPC_NOWAIT is specified, the call will either go through immediately, or fail (with errno set to EAGAIN) if another print job is currently printing. Here is an example of using this initialized sembuf structure with the semop() system call:
if (semop(sid, &sem_lock, 1) == -1)
        perror("semop");

The third argument (nsops) says that we are only performing one (1) operation (there is only one sembuf structure in our array of operations). The sid argument is the IPC identifier for our semaphore set. When our print job has completed, we must return the resources back to the semaphore set, so that others may use the printer:
struct sembuf sem_unlock = { 0, 1, IPC_NOWAIT };

The above initialized structure says that a value of ``1'' will be added to semaphore number 0 in the semaphore set. In other words, one unit of resources will be returned to the set.
Converted on: Fri Mar 29 14:43:04 EST 1996