How Can Deadlock Be Solved Assignment


  1. Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin, "Operating System Concepts, Ninth Edition", Chapter 7

7.1 System Model

  • For the purposes of deadlock discussion, a system can be modeled as a collection of limited resources, which can be partitioned into different categories, to be allocated to a number of processes, each having different needs.
  • Resource categories may include memory, printers, CPUs, open files, tape drives, CD-ROMS, etc.
  • By definition, all the resources within a category are equivalent, and a request of this category can be equally satisfied by any one of the resources in that category. If this is not the case ( i.e. if there is some difference between the resources within a category ), then that category needs to be further divided into separate categories. For example, "printers" may need to be separated into "laser printers" and "color inkjet printers".
  • Some categories may have a single resource.
  • In normal operation a process must request a resource before using it, and release it when it is done, in the following sequence:
    1. Request - If the request cannot be immediately granted, then the process must wait until the resource(s) it needs become available. For example, the system calls open( ), malloc( ), new( ), and request( ).
    2. Use - The process uses the resource, e.g. prints to the printer or reads from the file.
    3. Release - The process relinquishes the resource, so that it becomes available for other processes. For example, close( ), free( ), delete( ), and release( ).
  • For all kernel-managed resources, the kernel keeps track of what resources are free and which are allocated, to which process they are allocated, and a queue of processes waiting for this resource to become available. Application-managed resources can be controlled using mutexes or wait( ) and signal( ) calls, ( i.e. binary or counting semaphores. )
  • A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set ( and which can only be released when that other waiting process makes progress. )
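The deadlock definition above can be made concrete with a small Python sketch ( not from the text ): two threads each hold one lock and then wait for the lock held by the other. A barrier guarantees that both threads hold their first lock before either requests its second, and a timeout on the second acquire stands in for deadlock detection so the demo terminates instead of hanging.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
barrier = threading.Barrier(2)   # both threads must hold one lock before proceeding
results = []

def worker(first, second):
    with first:                          # Request + Use: hold one resource
        barrier.wait()                   # both threads now hold one lock each
        # Each thread now waits for a resource held by the other: deadlock.
        # A timeout stands in for deadlock detection so the demo terminates.
        if second.acquire(timeout=0.5):
            second.release()
            results.append("got it")
        else:
            results.append("timed out")
    # Release: the first lock is freed automatically when the with-block exits.

t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)        # at least one thread reports "timed out"
```

Note that all four necessary conditions discussed in the next section hold at the moment both threads are blocked: the locks are non-sharable, each thread holds one while waiting for another, neither lock can be preempted, and the waits form a cycle.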

7.2 Deadlock Characterization


7.2.1 Necessary Conditions

  • There are four conditions that are necessary to achieve deadlock:
    1. Mutual Exclusion - At least one resource must be held in a non-sharable mode; If any other process requests this resource, then that process must wait for the resource to be released.
    2. Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.
    3. No preemption - Once a process is holding a resource ( i.e. once its request has been granted ), then that resource cannot be taken away from that process until the process voluntarily releases it.
    4. Circular Wait - A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is waiting for P[ ( i + 1 ) % ( N + 1 ) ]. ( Note that this condition implies the hold-and-wait condition, but it is easier to deal with the conditions if the four are considered separately. )

7.2.2 Resource-Allocation Graph

  • In some cases deadlocks can be understood more clearly through the use of Resource-Allocation Graphs, having the following properties:
    • A set of resource categories, { R1, R2, R3, . . ., RN }, which appear as square nodes on the graph. Dots inside the resource nodes indicate specific instances of the resource. ( E.g. two dots might represent two laser printers. )
    • A set of processes, { P1, P2, P3, . . ., PN }
    • Request Edges - A set of directed arcs from Pi to Rj, indicating that process Pi has requested Rj, and is currently waiting for that resource to become available.
    • Assignment Edges - A set of directed arcs from Rj to Pi indicating that resource Rj has been allocated to process Pi, and that Pi is currently holding resource Rj.
    • Note that a request edge can be converted into an assignment edge by reversing the direction of the arc when the request is granted. ( However note also that request edges point to the category box, whereas assignment edges emanate from a particular instance dot within the box. )
    • For example:

Figure 7.1 - Resource allocation graph

  • If a resource-allocation graph contains no cycles, then the system is not deadlocked. ( When looking for cycles, remember that these are directed graphs. ) See the example in Figure 7.1 above.
  • If a resource-allocation graph does contain cycles AND each resource category contains only a single instance, then a deadlock exists.
  • If a resource category contains more than one instance, then the presence of a cycle in the resource-allocation graph indicates the possibility of a deadlock, but does not guarantee one. Consider, for example, Figures 7.2 and 7.3 below:

Figure 7.2 - Resource allocation graph with a deadlock

Figure 7.3 - Resource allocation graph with a cycle but no deadlock
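The cycle check described above can be sketched as a depth-first search over the directed graph. This is illustrative code, not from the text: nodes named "P…" are processes and "R…" are resources, request edges point from processes to resources, and assignment edges point from resources to processes.

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {n: WHITE for n in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 -> R1 -> P2 -> R2 -> P1 forms a cycle: with single-instance
# resources this means deadlock.
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
# Same graph but P2 is not requesting anything: no cycle.
ok = {"P1": ["R1"], "R1": ["P2"], "P2": [], "R2": ["P1"]}

print(has_cycle(deadlocked))  # True
print(has_cycle(ok))          # False
```

With multiple instances per category, a cycle found this way only signals a *possible* deadlock, as the text notes; the detection algorithm of section 7.6.2 is then required.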

7.3 Methods for Handling Deadlocks

  • Generally speaking there are three ways of handling deadlocks:
    1. Deadlock prevention or avoidance - Do not allow the system to get into a deadlocked state.
    2. Deadlock detection and recovery - Abort a process or preempt some resources when deadlocks are detected.
    3. Ignore the problem altogether - If deadlocks only occur once a year or so, it may be better to simply let them happen and reboot as necessary than to incur the constant overhead and system performance penalties associated with deadlock prevention or detection. This is the approach that both Windows and UNIX take.
  • In order to avoid deadlocks, the system must have additional information about all processes. In particular, the system must know what resources a process will or may request in the future. ( Ranging from a simple worst-case maximum to a complete resource request and release plan for each process, depending on the particular algorithm. )
  • Deadlock detection is fairly straightforward, but deadlock recovery requires either aborting processes or preempting resources, neither of which is an attractive alternative.
  • If deadlocks are neither prevented nor detected, then when a deadlock occurs the system will gradually slow down, as more and more processes become stuck waiting for resources currently held by the deadlock and by other waiting processes. Unfortunately this slowdown can be indistinguishable from a general system slowdown when a real-time process has heavy computing needs.

7.4 Deadlock Prevention

  • Deadlocks can be prevented by preventing at least one of the four required conditions:

7.4.1 Mutual Exclusion

  • Shared resources such as read-only files do not lead to deadlocks.
  • Unfortunately some resources, such as printers and tape drives, require exclusive access by a single process.

7.4.2 Hold and Wait

  • To prevent this condition processes must be prevented from holding one or more resources while simultaneously waiting for one or more others. There are several possibilities for this:
    • Require that all processes request all resources at one time. This can be wasteful of system resources if a process needs one resource early in its execution and doesn't need some other resource until much later.
    • Require that processes holding resources must release them before requesting new resources, and then re-acquire the released resources along with the new ones in a single new request. This can be a problem if a process has partially completed an operation using a resource and then fails to get it re-allocated after releasing it.
    • Either of the methods described above can lead to starvation if a process requires one or more popular resources.
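The "request everything at once" policy above can be approximated in Python with non-blocking try-acquires ( an illustrative sketch, not from the text ): a process never blocks while holding a resource, because if any lock in the set is unavailable it releases everything it has taken and retries from scratch.

```python
import threading
import time

def acquire_all(locks):
    """Acquire every lock in one all-or-nothing step.
    The caller never waits while holding a resource (no hold-and-wait)."""
    while True:
        taken = []
        for lock in locks:
            if lock.acquire(blocking=False):   # never block while holding
                taken.append(lock)
            else:
                for held in taken:             # missed one: give it all back
                    held.release()
                taken = None
                break
        if taken is not None:
            return taken                       # success: all locks held
        time.sleep(0.01)                       # back off before retrying

printer = threading.Lock()
tape = threading.Lock()

held = acquire_all([printer, tape])
print(printer.locked(), tape.locked())         # True True
for lock in held:
    lock.release()
```

The retry loop illustrates the starvation caveat above: a process competing for popular resources may give back its partial set and retry indefinitely.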

7.4.3 No Preemption

  • Preemption of process resource allocations can prevent this condition, when preemption is possible.
    • One approach is that if a process is forced to wait when requesting a new resource, then all other resources previously held by this process are implicitly released, ( preempted ), forcing this process to re-acquire the old resources along with the new resources in a single request, similar to the previous discussion.
    • Another approach is that when a resource is requested and not available, then the system looks to see what other processes currently have those resources and are themselves blocked waiting for some other resource. If such a process is found, then some of their resources may get preempted and added to the list of resources for which the process is waiting.
    • Either of these approaches may be applicable for resources whose states are easily saved and restored, such as registers and memory, but are generally not applicable to other devices such as printers and tape drives.

7.4.4 Circular Wait

  • One way to avoid circular wait is to number all resources, and to require that processes request resources only in strictly increasing ( or decreasing ) order.
  • In other words, in order to request resource Rj, a process must first release all Ri such that i >= j.
  • One big challenge in this scheme is determining the relative ordering of the different resources.
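The ordering rule can be enforced mechanically by sorting each request by a fixed rank before acquiring, as in this hypothetical sketch ( the resource names and rank numbers are invented for illustration ):

```python
import threading

# Assign every resource category a fixed rank; all processes must
# acquire in strictly increasing rank order, which breaks circular wait.
ranks = {"scanner": 1, "tape": 2, "printer": 3}
locks = {name: threading.Lock() for name in ranks}

def acquire_in_order(names):
    """Acquire the named resources in rank order, regardless of the
    order in which the caller listed them."""
    ordered = sorted(names, key=lambda n: ranks[n])
    for name in ordered:
        locks[name].acquire()
    return ordered

held = acquire_in_order(["printer", "scanner"])
print(held)   # ['scanner', 'printer'] -- rank order, not request order
for name in held:
    locks[name].release()
```

Because every process climbs the same ladder of ranks, no cycle of waits can form: a process waiting for a higher-ranked resource can never be blocked by one waiting for a lower-ranked resource it holds.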

7.5 Deadlock Avoidance

  • The general idea behind deadlock avoidance is to never allow the system to enter a state in which deadlock becomes possible, by examining each resource request and granting it only if the resulting state is still safe.
  • This requires more information about each process, AND tends to lead to low device utilization. ( I.e. it is a conservative approach. )
  • In some algorithms the scheduler only needs to know the maximum number of each resource that a process might potentially use. In more complex algorithms the scheduler can also take advantage of the schedule of exactly what resources may be needed in what order.
  • When a scheduler sees that starting a process or granting resource requests may lead to future deadlocks, then that process is just not started or the request is not granted.
  • A resource allocation state is defined by the number of available and allocated resources, and the maximum requirements of all processes in the system.

7.5.1 Safe State

  • A state is safe if the system can allocate all resources requested by all processes ( up to their stated maximums ) without entering a deadlock state.
  • More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2, ..., PN } such that all of the resource requests for Pi can be granted using the resources currently allocated to Pi and all processes Pj where j < i. ( I.e. if all the processes prior to Pi finish and free up their resources, then Pi will be able to finish also, using the resources that they have freed up. )
  • If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to deadlock. ( All safe states are deadlock free, but not all unsafe states lead to deadlocks. )

Figure 7.6 - Safe, unsafe, and deadlocked state spaces.

  • For example, consider a system with 12 tape drives, allocated as follows. Is this a safe state? What is the safe sequence?
        Process     Maximum Needs     Current Allocation
        P0          10                5
        P1          4                 2
        P2          9                 2
  • What happens to the above table if process P2 requests and is granted one more tape drive?
  • Key to the safe state approach is that when a request is made for resources, the request is granted only if the resulting allocation state is a safe one.

7.5.2 Resource-Allocation Graph Algorithm

  • If resource categories have only single instances of their resources, then deadlock states can be detected by cycles in the resource-allocation graphs.
  • In this case, unsafe states can be recognized and avoided by augmenting the resource-allocation graph with claim edges, noted by dashed lines, which point from a process to a resource that it may request in the future.
  • In order for this technique to work, all claim edges must be added to the graph for any particular process before that process is allowed to request any resources. ( Alternatively, processes may only make requests for resources for which they have already established claim edges, and claim edges cannot be added to any process that is currently holding resources. )
  • When a process makes a request, the claim edge Pi->Rj is converted to a request edge. Similarly when a resource is released, the assignment reverts back to a claim edge.
  • This approach works by denying requests that would produce cycles in the resource-allocation graph, taking claim edges into effect.
  • Consider for example what happens when process P2 requests resource R2:

Figure 7.7 - Resource allocation graph for deadlock avoidance

  • The resulting resource-allocation graph would have a cycle in it, and so the request cannot be granted.

Figure 7.8 - An unsafe state in a resource allocation graph

7.5.3 Banker's Algorithm

  • For resource categories that contain more than one instance the resource-allocation graph method does not work, and more complex ( and less efficient ) methods must be chosen.
  • The Banker's Algorithm gets its name because it is a method that bankers could use to assure that when they lend out resources they will still be able to satisfy all their clients. ( A banker won't loan out a little money to start building a house unless they are assured that they will later be able to loan out the rest of the money to finish the house. )
  • When a process starts up, it must state in advance the maximum allocation of resources it may request, up to the amount available on the system.
  • When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, then the process must wait until the request can be granted safely.
  • The banker's algorithm relies on several key data structures: ( where n is the number of processes and m is the number of resource categories. )
    • Available[ m ] indicates how many resources are currently available of each type.
    • Max[ n ][ m ] indicates the maximum demand of each process of each resource.
    • Allocation[ n ][ m ] indicates the number of each resource category allocated to each process.
    • Need[ n ][ m ] indicates the remaining resources needed of each type for each process. ( Note that Need[ i ][ j ] = Max[ i ][ j ] - Allocation[ i ][ j ] for all i, j. )
  • For simplification of discussions, we make the following notations / observations:
    • One row of the Need vector, Need[ i ], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.
    • A vector X is considered to be <= a vector Y if X[ i ] <= Y[ i ] for all i.

Safety Algorithm

  • In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe.
  • This algorithm determines if the current state of a system is safe, according to the following steps:
    1. Let Work and Finish be vectors of length m and n respectively.
      • Work is a working copy of the available resources, which will be modified during the analysis.
      • Finish is a vector of booleans indicating whether a particular process can finish. ( or has finished so far in the analysis. )
      • Initialize Work to Available, and Finish to false for all elements.
    2. Find an i such that both (A) Finish[ i ] == false, and (B) Need[ i ] <= Work. This process has not finished, but could with the given available working set. If no such i exists, go to step 4.
    3. Set Work = Work + Allocation[ i ], and set Finish[ i ] to true. This corresponds to process i finishing up and releasing its resources back into the work pool. Then loop back to step 2.
    4. If Finish[ i ] == true for all i, then the state is a safe state, because a safe sequence has been found.
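The four steps above translate directly into code. This sketch returns a safe sequence rather than just a boolean, and exercises it on the 12-tape-drive example, assuming the usual textbook figures ( maxima of 10, 4, and 9 drives; current allocations of 5, 2, and 2; 3 drives free ):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence of process indices,
    or None if the state is unsafe. available is a vector of length m;
    allocation and need are n x m matrices (lists of lists)."""
    n, m = len(allocation), len(available)
    work = list(available)            # Step 1: working copy of Available
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):            # Step 2: find a runnable process
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):    # Step 3: it finishes, freeing resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None               # Step 4 fails: no safe sequence exists
    return sequence                   # Step 4: every process can finish

# Tape-drive example: Need = Maximum - Allocation, i.e. 5, 2, 7.
allocation = [[5], [2], [2]]
need       = [[5], [2], [7]]
print(is_safe([3], allocation, need))   # [1, 0, 2]: <P1, P0, P2> is safe

# If P2 were granted one more drive, only 2 would remain free and no
# process but P1 could ever finish: the state becomes unsafe.
print(is_safe([2], [[5], [2], [3]], [[5], [2], [6]]))   # None
```

The second call answers the question about granting P2 one more drive: the request would leave the system in an unsafe state, so under deadlock avoidance it must be refused even though a free drive exists.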
  • ( JTB's Modification:
    1. In step 1, instead of making Finish an array of booleans initialized to false, make it an array of ints initialized to 0. Also initialize an int s = 0 as a step counter.
    2. In step 2, look for Finish[ i ] == 0.
    3. In step 3, set Finish[ i ] to ++s. Here s counts the number of finished processes.
    4. For step 4, the test can be either Finish[ i ] > 0 for all i, or s >= n. The benefit of this method is that if a safe state exists, then Finish[ ] indicates one safe sequence ( of possibly many ). )

Resource-Request Algorithm ( The Banker's Algorithm )

  • Now that we have a tool for determining if a particular state is safe or not, we are now ready to look at the Banker's algorithm itself.
  • This algorithm determines if a new request is safe, and grants it only if it is safe to do so.
  • When a request is made ( that does not exceed currently available resources ), pretend it has been granted, and then see if the resulting state is a safe one. If so, grant the request, and if not, deny the request, as follows:
    1. Let Request[ n ][ m ] indicate the number of resources of each type currently requested by processes. If Request[ i ] > Need[ i ] for any process i, raise an error condition.
    2. If Request[ i ] > Available for any process i, then that process must wait for resources to become available. Otherwise the process can continue to step 3.
    3. Check to see if the request can be granted safely, by pretending it has been granted and then seeing if the resulting state is safe. If so, grant the request, and if not, then the process must wait until its request can be granted safely. The procedure for granting a request ( or pretending to for testing purposes ) is:
      • Available = Available - Request
      • Allocation = Allocation + Request
      • Need = Need - Request

An Illustrative Example

  • Consider the following situation, with five processes and three resource types A, B, and C ( 10, 5, and 7 total instances respectively ):

        Process     Allocation     Max        Available
                    A  B  C        A  B  C    A  B  C
        P0          0  1  0        7  5  3    3  3  2
        P1          2  0  0        3  2  2
        P2          3  0  2        9  0  2
        P3          2  1  1        2  2  2
        P4          0  0  2        4  3  3

  • And now consider what happens if process P1 requests 1 instance of A and 2 instances of C. ( Request[ 1 ] = ( 1, 0, 2 ) )

  • What about requests of ( 3, 3, 0 ) by P4? or ( 0, 2, 0 ) by P0? Can these be safely granted? Why or why not?
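Steps 1 through 3 of the resource-request algorithm can be sketched as follows, together with the safety check they rely on. The matrices below assume the usual textbook figures for this example ( five processes; resource types A, B, C; 3, 3, and 2 instances initially free ):

```python
def is_safe(available, allocation, need):
    """Return True if a safe sequence exists for the given state."""
    work, finish = list(available), [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, row in enumerate(need):
            if not finish[i] and all(x <= w for x, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = progressed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    """Banker's resource-request algorithm for process i.
    Grants the request (mutating the state) only if doing so is safe."""
    if any(r > x for r, x in zip(request, need[i])):
        raise ValueError("process exceeded its declared maximum")   # step 1
    if any(r > a for r, a in zip(request, available)):
        return False                          # step 2: must wait for resources
    # Step 3: pretend the request has been granted, then test safety.
    new_avail = [a - r for a, r in zip(available, request)]
    new_alloc = [x + r for x, r in zip(allocation[i], request)]
    new_need  = [x - r for x, r in zip(need[i], request)]
    trial_alloc = allocation[:i] + [new_alloc] + allocation[i + 1:]
    trial_need  = need[:i] + [new_need] + need[i + 1:]
    if not is_safe(new_avail, trial_alloc, trial_need):
        return False                          # unsafe: must wait
    available[:], allocation[i], need[i] = new_avail, new_alloc, new_need
    return True

allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
available  = [3, 3, 2]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]

print(request_resources(1, [1,0,2], available, allocation, need))  # True
print(request_resources(4, [3,3,0], available, allocation, need))  # False
print(request_resources(0, [0,2,0], available, allocation, need))  # False
```

The three calls answer the questions above: P1's request leaves the system safe and is granted; P4's request exceeds what is currently available, so P4 must wait; and although enough resources exist for P0's request, granting it would leave the system in an unsafe state, so it too is refused.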

7.6 Deadlock Detection

  • If deadlocks are not avoided, then another approach is to detect when they have occurred and recover somehow.
  • In addition to the performance hit of constantly checking for deadlocks, a policy / algorithm must be in place for recovering from deadlocks, and there is potential for lost work when processes must be aborted or have their resources preempted.

7.6.1 Single Instance of Each Resource Type

  • If each resource category has a single instance, then we can use a variation of the resource-allocation graph known as a wait-for graph.
  • A wait-for graph can be constructed from a resource-allocation graph by eliminating the resources and collapsing the associated edges, as shown in the figure below.
  • An arc from Pi to Pj in a wait-for graph indicates that process Pi is waiting for a resource that process Pj is currently holding.

Figure 7.9 - (a) Resource allocation graph. (b) Corresponding wait-for graph

  • As before, cycles in the wait-for graph indicate deadlocks.
  • This algorithm must maintain the wait-for graph, and periodically search it for cycles.
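The collapse from a resource-allocation graph to a wait-for graph can be sketched as follows ( illustrative code, not from the text; the convention that process names start with "P" and resource names with "R" is an assumption ):

```python
def wait_for_graph(rag):
    """Collapse a single-instance resource-allocation graph {node: succs}
    into a wait-for graph over processes only: each chain
    Pi -> Rj (request) -> Pk (assignment) becomes the edge Pi -> Pk."""
    wfg = {}
    for node, succs in rag.items():
        if node.startswith("P"):
            wfg.setdefault(node, [])
            for res in succs:                    # Pi is waiting for res ...
                for holder in rag.get(res, []):  # ... which is held by holder
                    wfg[node].append(holder)
    return wfg

# P1 requests R1 held by P2; P2 requests R2 held by P1.
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(wait_for_graph(rag))   # {'P1': ['P2'], 'P2': ['P1']}
```

A standard depth-first cycle search on the resulting process-only graph then detects deadlock, exactly as for the single-instance resource-allocation graph.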

7.6.2 Several Instances of a Resource Type

  • The detection algorithm outlined here is essentially the same as the Banker's algorithm, with two subtle differences:
    • In step 1, the Banker's Algorithm sets Finish[ i ] to false for all i. The algorithm presented here sets Finish[ i ] to false only if Allocation[ i ] is not zero. If the currently allocated resources for this process are zero, the algorithm sets Finish[ i ] to true. This is essentially assuming that IF all of the other processes can finish, then this process can finish also. Furthermore, this algorithm is specifically looking for which processes are involved in a deadlock situation, and a process that does not have any resources allocated cannot be involved in a deadlock, and so can be removed from any further consideration.
    • Steps 2 and 3 are unchanged.
    • In step 4, the basic Banker's Algorithm says that if Finish[ i ] == true for all i, that there is no deadlock. This algorithm is more specific, by stating that if Finish[ i ] == false for any process Pi, then that process is specifically involved in the deadlock which has been detected.
  • ( Note: An alternative method was presented above, in which Finish held integers instead of booleans. This vector would be initialized to all zeros, and then filled with increasing integers as processes are detected which can finish. If any processes are left at zero when the algorithm completes, then there is a deadlock, and if not, then the integers in Finish describe a safe sequence. To modify this algorithm to match this section of the text, processes with allocation = zero could be filled in with N, N - 1, N - 2, etc. in step 1, and any processes left with Finish = 0 in step 4 are the deadlocked processes. )
  • Consider, for example, the following state, with five processes and three resource types A, B, and C ( every instance currently allocated, so Available is zero ), and determine if it is currently deadlocked:

        Process     Allocation     Request    Available
                    A  B  C        A  B  C    A  B  C
        P0          0  1  0        0  0  0    0  0  0
        P1          2  0  0        2  0  2
        P2          3  0  3        0  0  0
        P3          2  1  1        1  0  0
        P4          0  0  2        0  0  2

  • Now suppose that process P2 makes a request for an additional instance of type C, yielding the state shown below. Is the system now deadlocked?
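The modified detection algorithm above can be sketched and run on this example, assuming the usual textbook figures ( five processes; resource types A, B, and C; all instances allocated, so the available vector is zero ):

```python
def find_deadlocked(available, allocation, request):
    """Deadlock detection for multiple instances: return the indices of
    the processes involved in a deadlock ([] if there is none)."""
    work = list(available)
    # Step 1: a process holding no resources cannot be part of a deadlock,
    # so it is marked finished immediately.
    finish = [all(a == 0 for a in row) for row in allocation]
    progressed = True
    while progressed:                 # Steps 2-3: simulate completions
        progressed = False
        for i, row in enumerate(request):
            if not finish[i] and all(r <= w for r, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = progressed = True
    # Step 4: anyone still unfinished is deadlocked.
    return [i for i, f in enumerate(finish) if not f]

allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
print(find_deadlocked([0,0,0], allocation, request))    # []: no deadlock

request[2] = [0,0,1]   # P2 now requests one more instance of C
print(find_deadlocked([0,0,0], allocation, request))    # [1, 2, 3, 4]
```

In the first state P0 and P2 can finish immediately, and the resources they free let the rest complete. Once P2 also waits for an instance of C, only P0 can finish, and P1 through P4 are deadlocked.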

7.6.3 Detection-Algorithm Usage

  • When should the deadlock detection be done? Frequently, or infrequently?
  • The answer may depend on how frequently deadlocks are expected to occur, as well as the possible consequences of not catching them immediately. ( If deadlocks are not removed immediately when they occur, then more and more processes can "back up" behind the deadlock, making the eventual task of unblocking the system more difficult and possibly damaging to more processes. )
  • There are two obvious approaches, each with trade-offs:
    1. Do deadlock detection after every resource allocation which cannot be immediately granted. This has the advantage of detecting the deadlock right away, while the minimum number of processes are involved in the deadlock. ( One might consider that the process whose request triggered the deadlock condition is the "cause" of the deadlock, but realistically all of the processes in the cycle are equally responsible for the resulting deadlock. ) The down side of this approach is the extensive overhead and performance hit caused by checking for deadlocks so frequently.
    2. Do deadlock detection only when there is some clue that a deadlock may have occurred, such as when CPU utilization reduces to 40% or some other magic number. The advantage is that deadlock detection is done much less frequently, but the down side is that it becomes impossible to detect the processes involved in the original deadlock, and so deadlock recovery can be more complicated and damaging to more processes.
    3. ( As I write this, a third alternative comes to mind: Keep a historical log of resource allocations, since the last known time of no deadlocks. Do deadlock checks periodically ( once an hour, or when CPU usage is low? ), and then use the historical log to trace through and determine when the deadlock occurred and what processes caused the initial deadlock. Unfortunately I'm not certain that breaking the original deadlock would then free up the resulting log jam. )

7.7 Recovery From Deadlock

  • There are three basic approaches to recovery from deadlock:
    1. Inform the system operator, and allow them to intervene manually.
    2. Terminate one or more processes involved in the deadlock.
    3. Preempt resources.

7.7.1 Process Termination

  • Two basic approaches, both of which recover resources allocated to terminated processes:
    • Terminate all processes involved in the deadlock. This definitely solves the deadlock, but at the expense of terminating more processes than would be absolutely necessary.
    • Terminate processes one by one until the deadlock is broken. This is more conservative, but requires doing deadlock detection after each step.
  • In the latter case there are many factors that can go into deciding which processes to terminate next:
    1. Process priorities.
    2. How long the process has been running, and how close it is to finishing.
    3. How many and what types of resources the process is holding. ( Are they easy to preempt and restore? )
    4. How many more resources the process needs to complete.
    5. How many processes will need to be terminated.
    6. Whether the process is interactive or batch.
    7. ( Whether or not the process has made non-restorable changes to any resource. )

7.7.2 Resource Preemption

  • When preempting resources to relieve deadlock, there are three important issues to be addressed:
    1. Selecting a victim - Deciding which resources to preempt from which processes involves many of the same decision criteria outlined above.
    2. Rollback - Ideally one would like to roll back a preempted process to a safe state prior to the point at which that resource was originally allocated to the process. Unfortunately it can be difficult or impossible to determine what such a safe state is, and so the only safe rollback is to roll back all the way back to the beginning. ( I.e. abort the process and make it start over. )
    3. Starvation - How do you guarantee that a process won't starve because its resources are constantly being preempted? One option would be to use a priority system, and increase the priority of a process every time its resources get preempted. Eventually it should get a high enough priority that it won't get preempted any more.

7.8 Summary


In concurrent computing, a deadlock is a state in which each member of a group is waiting for some other member to take action, such as sending a message or more commonly releasing a lock.[1] Deadlock is a common problem in multiprocessing systems, parallel computing, and distributed systems, where software and hardware locks are used to handle shared resources and implement process synchronization.[2]

In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process is unable to change its state indefinitely because the resources requested by it are being used by another waiting process, then the system is said to be in a deadlock.[3]

In a communications system, deadlocks occur mainly due to lost or corrupt signals rather than resource contention.[4]

Necessary conditions

A deadlock situation on a resource can arise if and only if all of the following conditions hold simultaneously in a system:[5]

  1. Mutual exclusion: The resources involved must be unshareable; otherwise, the processes would not be prevented from using the resource when necessary. Only one process can use the resource at any given instant of time.[6]
  2. Hold and wait or resource holding: a process is currently holding at least one resource and requesting additional resources which are being held by other processes.
  3. No preemption: a resource can be released only voluntarily by the process holding it.
  4. Circular wait: each process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, …, PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for a resource held by P1.[3][7]

These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr.[7]

Deadlock handling

Most current operating systems cannot prevent deadlocks.[8] When a deadlock occurs, different operating systems respond to them in different non-standard manners. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one.[9] Major approaches are as follows.

Ignoring deadlock

In this approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm.[9][10] This approach was initially used by MINIX and UNIX.[7] This is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable.


Detecting deadlock

Under deadlock detection, deadlocks are allowed to occur. Then the state of the system is examined to detect that a deadlock has occurred, and subsequently it is corrected. An algorithm is employed that tracks resource allocation and process states; it rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible since the resources that each process has locked and/or currently requested are known to the resource scheduler of the operating system.[10]

After a deadlock is detected, it can be corrected by using one of the following methods:

  1. Process termination: one or more processes involved in the deadlock may be aborted. One could choose to abort all competing processes involved in the deadlock. This ensures that deadlock is resolved with certainty and speed. But the expense is high as partial computations will be lost. Or, one could choose to abort one process at a time until the deadlock is resolved. This approach has high overhead because after each abort an algorithm must determine whether the system is still in deadlock. Several factors must be considered while choosing a candidate for termination, such as priority and age of the process.
  2. Resource preemption: resources allocated to various processes may be successively preempted and allocated to other processes until the deadlock is broken.[11]


Preventing deadlock

Deadlock prevention works by preventing one of the four Coffman conditions from occurring.

  • Removing the mutual exclusion condition means that no process will have exclusive access to a resource. This proves impossible for resources that cannot be spooled. But even with spooled resources, deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
  • The hold and wait or resource holding conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). This advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to request resources only when they hold none: they must first release all their currently held resources before requesting everything they will need from scratch. This too is often impractical, because resources may be allocated and remain unused for long periods. Also, a process requiring a popular resource may have to wait indefinitely, as such a resource may always be allocated to some process, resulting in resource starvation.[12] (These algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
  • The no preemption condition may also be difficult or impossible to avoid, as a process has to be able to hold a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control. If a process that holds some resources requests another resource that cannot be immediately allocated to it, the condition may be removed by requiring the process to release all the resources it currently holds.
  • The final condition is the circular wait condition. Approaches that avoid circular waits include disabling interrupts during critical sections and using a hierarchy to determine a partial ordering of resources. If no obvious hierarchy exists, even the memory address of resources has been used to determine ordering, and resources are requested in the increasing order of the enumeration.[3] Dijkstra's solution can also be used.


Livelock

A livelock is similar to a deadlock, except that the states of the processes involved constantly change with regard to one another, with none of them progressing.

The term was defined formally at some time during the 1970s. An early sighting in the published literature is in Babich's 1979 article on program correctness.[13] Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing.[14]

Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen arbitrarily or by priority) takes action.[15]
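The single-actor rule can be sketched simply: have every detector apply the same deterministic choice of victim, so that only one process rolls back and the others keep their progress. A toy Python sketch (choosing the lowest process id is an arbitrary stand-in for a priority rule):

```python
def choose_victim(waiting_pids):
    """Deterministically select one process to roll back when a
    deadlock among `waiting_pids` is detected. Because every detector
    applies the same rule, exactly one process acts; repeated,
    conflicting recovery attempts (livelock) are avoided."""
    victim = min(waiting_pids)           # stand-in for a priority rule
    survivors = [p for p in waiting_pids if p != victim]
    return victim, survivors
```

For example, `choose_victim([7, 3, 9])` selects process 3 as the victim and leaves 7 and 9 untouched, no matter which detector runs first.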

Distributed deadlock

Distributed deadlocks can occur in distributed systems when distributed transactions or concurrency control is being used. Distributed deadlocks can be detected either by constructing a global wait-for graph from local wait-for graphs at a deadlock detector or by a distributed algorithm like edge chasing.
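Once local wait-for graphs have been merged into a global one, detection reduces to finding a cycle. A minimal sketch in Python, assuming the global graph has already been assembled as a mapping from each blocked process to the process it waits for:

```python
def find_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: process_it_waits_for}. Returns the processes on a
    cycle (a deadlock), or [] if every wait chain terminates."""
    for start in wait_for:
        seen = []
        node = start
        while node in wait_for and node not in seen:
            seen.append(node)
            node = wait_for[node]
        if node in seen:                 # walked back onto our own path
            return seen[seen.index(node):]
    return []
```

For instance, `find_deadlock({"P1": "P2", "P2": "P1"})` reports the cycle `["P1", "P2"]`, while a chain that ends at a running process yields `[]`.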

Phantom deadlocks are deadlocks that are falsely detected in a distributed system because of internal message delays, but that do not actually exist. For example, if a process releases resource R1 and then requests R2, and the release message is lost or delayed, a coordinator (deadlock detector) could falsely conclude that a deadlock exists (since a request for R2 while still holding R1 would cause one).

References


  1. Coulouris, George (2012). Distributed Systems Concepts and Design. Pearson. p. 716. ISBN 978-0-273-76059-7.
  2. Padua, David (2011). Encyclopedia of Parallel Computing. Springer. p. 524. ISBN 9780387097657. Retrieved 28 January 2012.
  3. Silberschatz, Abraham (2006). Operating System Principles (7th ed.). Wiley-India. p. 237. ISBN 9788126509621. Retrieved 29 January 2012.
  4. Schneider, G. Michael (2009). Invitation to Computer Science. Cengage Learning. p. 271. ISBN 0324788592. Retrieved 28 January 2012.
  5. Silberschatz, Abraham (2006). Operating System Principles (7th ed.). Wiley-India. p. 239. ISBN 9788126509621. Retrieved 29 January 2012.
  6.
  7. Shibu, K. (2009). Intro To Embedded Systems (1st ed.). Tata McGraw-Hill Education. p. 446. ISBN 9780070145894. Retrieved 28 January 2012.
  8. Silberschatz, Abraham (2006). Operating System Principles (7th ed.). Wiley-India. p. 237. ISBN 9788126509621. Retrieved 29 January 2012.
  9. Stuart, Brian L. (2008). Principles of Operating Systems (1st ed.). Cengage Learning. p. 446. Retrieved 28 January 2012.
  10. Tanenbaum, Andrew S. (1995). Distributed Operating Systems (1st ed.). Pearson Education. p. 117. Retrieved 28 January 2012.
  11.
  12. Silberschatz, Abraham (2006). Operating System Principles (7th ed.). Wiley-India. p. 244. Retrieved 29 January 2012.
  13. Babich, A. F. (1979). "Proving Total Correctness of Parallel Programs". IEEE Transactions on Software Engineering. SE–5 (6): 558–574. doi:10.1109/tse.1979.230192.
  14. Anderson, James H.; Kim, Yong-Jik (2001). "Shared-memory mutual exclusion: Major research trends since 1986".
  15. Zöbel, Dieter (October 1983). "The Deadlock Problem: A Classifying Bibliography". ACM SIGOPS Operating Systems Review. 17 (4): 6–15. doi:10.1145/850752.850753. ISSN 0163-5980.

Further reading

  • Kaveh, Nima; Emmerich, Wolfgang. "Deadlock Detection in Distributed Object Systems" (PDF). London: University College London.
  • Bensalem, Saddek; Fernandez, Jean-Claude; Havelund, Klaus; Mounier, Laurent (2006). "Confirmation of deadlock potentials detected by runtime analysis". Proceedings of the 2006 workshop on Parallel and distributed systems: Testing and debugging. ACM: 41–50. doi:10.1145/1147403.1147412. ISBN 1595934146. 
  • Coffman, Edward G., Jr.; Elphick, Michael J.; Shoshani, Arie (1971). "System Deadlocks" (PDF). ACM Computing Surveys. 3 (2): 67–78. doi:10.1145/356586.356588.
  • Mogul, Jeffrey C.; Ramakrishnan, K. K. (1997). "Eliminating receive livelock in an interrupt-driven kernel". ACM Transactions on Computer Systems. 15 (3): 217–252. doi:10.1145/263326.263335. ISSN 0734-2071. 
  • Havender, James W. (1968). "Avoiding deadlock in multitasking systems". IBM Systems Journal. 7 (2): 74. doi:10.1147/sj.72.0074. 
  • Holliday, JoAnne L.; El Abbadi, Amr. "Distributed Deadlock Detection". Encyclopedia of Distributed Computing. Kluwer Academic Publishers. 
  • Knapp, Edgar (1987). "Deadlock detection in distributed databases". ACM Computing Surveys. 19 (4): 303–328. doi:10.1145/45075.46163. ISSN 0360-0300. 
  • Ling, Yibei; Chen, Shigang; Chiang, Jason (2006). "On Optimal Deadlock Detection Scheduling". IEEE Transactions on Computers. 55 (9): 1178–1187. doi:10.1109/tc.2006.151. 

