1. What is a deadlock in an operating system?
A deadlock occurs when two or more processes are unable to proceed because each is waiting for a resource held by another process in the set. None can continue. This is a fundamental concurrency issue.
OS · Question Set
Deadlocks & Concurrency interview questions for placements and exams.
Questions: 11 · Subject: OS · Difficulty: Mixed
Go through each question and its explanation. Use this set as a focused practice pack for OS.
The Banker’s algorithm is used for deadlock avoidance: before granting a request, it simulates the allocation and runs a safety check to see whether the system would remain in a safe state. The same safe/unsafe-state analysis also underpins deadlock detection.
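The safety check described above can be sketched in Python. This is a minimal illustration, not the full Banker’s algorithm; the data layout (lists of per-resource counts) is an assumption for the example.

```python
# A minimal sketch of the Banker's safety check.
def is_safe(available, allocation, need):
    """Return True if some order lets every process finish (safe state)."""
    work = available[:]                 # resources currently free
    finish = [False] * len(allocation)  # which processes could complete
    progressed = True
    while progressed:
        progressed = False
        for i, row in enumerate(need):
            # Process i can finish if its remaining need fits in work.
            if not finish[i] and all(n <= w for n, w in zip(row, work)):
                # Simulate i finishing and releasing everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
    return all(finish)

# Classic textbook-style snapshot: 5 processes, 3 resource types.
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need  = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(is_safe([3,3,2], alloc, need))   # True: a safe sequence exists
```

If `available` were too small for any process’s remaining need, the loop would make no progress and the function would report an unsafe state.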
A correct solution to the critical-section problem must satisfy mutual exclusion (only one process in its critical section at a time), progress (if no process is in its critical section, the choice of which waiting process enters next cannot be postponed indefinitely), and bounded waiting (after a process requests entry, there is a bound on how many times other processes may enter before it does). Indefinite waiting (starvation) is not acceptable. These are standard concurrency properties.
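All three properties can be seen in Peterson’s classic two-process algorithm. The sketch below relies on CPython’s interpreter serialising bytecode; production code should use real locks, and the shared `count` variable is purely illustrative.

```python
import threading

# Hedged sketch of Peterson's two-process solution.
flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # which process defers when both want in
count = 0               # shared state touched only inside the critical section

def process(i, iterations):
    global turn, count
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True            # announce intent
        turn = other              # politely yield priority (progress)
        while flag[other] and turn == other:
            pass                  # busy-wait; the other enters at most once first
        count += 1                # critical section (mutual exclusion holds)
        flag[i] = False           # exit protocol

N = 200
threads = [threading.Thread(target=process, args=(i, N)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
# count == 2 * N: no lost updates
```

Setting `turn = other` before waiting is what guarantees bounded waiting: a process defers at most one entry by its peer.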
A mutex (mutual exclusion) is a locking mechanism that allows only one thread or process to access a critical section at a time. A semaphore is a signalling mechanism that can allow multiple threads (if count >1) or one thread (binary semaphore) to access a shared resource. A semaphore also supports operations like wait (P) and signal (V). Using semaphores you can implement various synchronization patterns including producer-consumer, but you must take care of issues like deadlock or priority inversion. Understanding this distinction is common in interviews.
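The wait (P) and signal (V) operations mentioned above can be sketched with Python’s `threading.Semaphore` in the producer-consumer pattern; the buffer size and item counts are illustrative.

```python
import threading
from collections import deque

# Producer-consumer sketch: semaphores count slots, a mutex guards the buffer.
BUFFER_SIZE = 4
buffer = deque()
mutex = threading.Lock()                  # mutual exclusion on the buffer
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots

def producer(items):
    for item in items:
        empty.acquire()        # wait (P) for a free slot
        with mutex:
            buffer.append(item)
        full.release()         # signal (V) a filled slot

def consumer(n, out):
    for _ in range(n):
        full.acquire()         # wait (P) for a filled slot
        with mutex:
            out.append(buffer.popleft())
        empty.release()        # signal (V) a free slot

items = list(range(10))
results = []
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(10, results))
p.start(); c.start(); p.join(); c.join()
# results == items: nothing lost or duplicated despite the small buffer
```

Note the acquire order: taking `mutex` before `empty`/`full` instead would risk exactly the kind of deadlock the answer warns about.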
Aging is a technique used in scheduling and concurrency to gradually increase the priority of a process that has been waiting for a long time so as to prevent starvation. This ensures fairness. Interviewers expect you to know both starvation risk and aging mitigation.
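Aging can be sketched as a tiny scheduling helper; the field names and boost value below are illustrative assumptions, not a real scheduler API.

```python
# Minimal aging sketch: lower number = higher priority.
def pick_next(ready, boost=1):
    """Pick the highest-priority process and age everyone left waiting."""
    chosen = min(ready, key=lambda p: p["priority"])
    for p in ready:
        if p is not chosen:
            p["priority"] -= boost   # long waiters slowly gain priority
    return chosen

# A greedy process at priority 0 versus a waiter at priority 5:
ready = [{"name": "hog", "priority": 0}, {"name": "starved", "priority": 5}]
winners = [pick_next(ready)["name"] for _ in range(7)]
print(winners[-1])   # "starved": aging eventually schedules the waiter
```

Without the aging step, `min` would return the greedy process forever and the waiter would starve.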
The reader-writer problem is a classic concurrency scenario: multiple threads or processes may read a shared resource simultaneously, but writes must be exclusive. The OS or synchronization library must ensure that no reader runs while a writer is active, and no writer runs while readers are active, or consistency may be violated. For read-heavy workloads you might favour many concurrent readers and delay writers (reader preference), at the risk of writer starvation; for write-heavy workloads you might prioritise writers instead. Solutions include read/write locks, semaphores, or monitors. In interviews, showing awareness of the fairness-versus-performance trade-off is valuable.
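A reader-preference read/write lock can be sketched with two plain locks; this is the textbook "first reader locks out writers" construction, not a production implementation.

```python
import threading

class RWLock:
    """Reader-preference read/write lock (illustrative sketch)."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()    # guards the reader count
        self._wlock = threading.Lock()    # held by the writer, or by readers as a group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._wlock.acquire()     # first reader blocks writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._wlock.release()     # last reader admits writers

    def acquire_write(self):
        self._wlock.acquire()             # exclusive: no readers, no other writer

    def release_write(self):
        self._wlock.release()
```

Because readers keep `_wlock` held as long as any of them is active, a steady stream of readers can starve writers, which is exactly the trade-off discussed above.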
In a resource allocation graph where each resource has a single instance, a directed cycle implies that a set of processes are waiting in a circular chain for resources held by each other — this indicates a deadlock condition. Recognising graph-based detection is a common interview point.
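Graph-based detection reduces to finding a directed cycle, which a depth-first search does directly. The sketch below assumes a wait-for graph as a plain dict; with single-instance resources, a back edge means deadlock.

```python
# Deadlock detection as cycle detection in a wait-for graph.
def has_deadlock(wait_for):
    """wait_for maps each process to the processes it is waiting on."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on DFS stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:           # back edge: cycle
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in list(wait_for))

print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True: circular wait
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False: P2 can finish
```

With multi-instance resources, a cycle is only a necessary condition, which is why the full detection algorithm resembles the Banker’s safety check instead.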
There are four classic conditions that must simultaneously hold for a deadlock: mutual exclusion (resources cannot be shared and only one process may use a resource at a time), hold and wait (a process holds at least one resource and is waiting to acquire additional resources held by others), no preemption (resources cannot be forcibly taken from a process; they must be released voluntarily), and circular wait (a closed chain of processes exists where each process holds a resource needed by another process in the chain). If you break any one of these, you can prevent deadlock.
An OS can prevent deadlock by ensuring at least one of the four necessary conditions never holds. Two common techniques are: 1) Resource ordering – assign a global ordering to resources and require all processes to request resources in that order, thereby avoiding circular wait. 2) Preemption – allow the OS to preempt a resource from a process (breaking the no-preemption condition). For example, if a process holds some resources and requests others and cannot get them, the OS may force it to release its held resources. The trade-offs: ordering can be rigid and complex for many resource types; preemption can lead to wasted work or inconsistent state if resources are forcibly removed. In interviews, stating trade-offs shows deeper understanding.
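Resource ordering can be sketched in a few lines: if every thread acquires its locks in one agreed global order, a circular wait can never form. The `id()`-based ordering key and the worker setup below are illustrative assumptions.

```python
import threading

# Deadlock prevention by resource ordering.
lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(*locks):
    ordered = sorted(locks, key=id)   # the same global order for every thread
    for lk in ordered:
        lk.acquire()
    return ordered

def release_all(ordered):
    for lk in reversed(ordered):
        lk.release()

def worker(results, tag, first, second):
    # The argument order differs between the two threads -- the classic
    # deadlock setup -- but sorting neutralises it.
    held = acquire_in_order(first, second)
    results.append(tag)               # critical section using both locks
    release_all(held)

results = []
t1 = threading.Thread(target=worker, args=(results, "t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=(results, "t2", lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
# Both workers finish; without the sorting step, opposite acquisition
# orders could leave each thread holding one lock and waiting forever.
```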
A livelock is a situation where processes or threads continuously change their state in response to one another but none makes progress. Unlike deadlock (complete standstill because of waiting), livelock is active but unproductive. For example, two processes repeatedly yield to each other’s resource requests but neither gets the resource. In interviews mentioning contrast with deadlock, detection difficulty and prevention strategies adds depth.
A mutex lock provides mutual exclusion: only one thread can hold the lock and enter the critical section at a time, which prevents data races on shared data. By contrast, a barrier makes all threads wait until every participant arrives, a spin-lock busy-waits instead of blocking, and a semaphore may admit more than one holder unless it is binary; understanding these subtle differences is important for interviews.
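The "exactly one holder" property is easy to demonstrate with `threading.Lock`; the thread and iteration counts below are illustrative.

```python
import threading

# Minimal sketch: a mutex admits exactly one holder at a time, so
# concurrent increments of shared state are never lost.
counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread inside at a time
            counter += 1

workers = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in workers: t.start()
for t in workers: t.join()
# counter == 40_000: no updates lost under mutual exclusion
```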