1. Which of these is a type of operating system?
Operating systems can be classified by how they manage tasks: batch, time-sharing, real-time, etc. A batch operating system schedules jobs without manual intervention.
Wipro · OS
Practice OS questions specifically asked in Wipro interviews – ideal for online test preparation, technical rounds and final HR discussions.
Go through each question and its explanation. Use this page for targeted revision just before your Wipro OS round.
Typical process states are running (using the CPU), ready (waiting for the CPU), and waiting/blocked (waiting for I/O or an event). "Interrupted" is not normally listed as a standard state in the lifecycle model.
In a time-sharing system the CPU switches among many users’ interactive tasks rapidly, giving the illusion that each user has their own machine. The OS allocates short time slices to tasks and context switches quickly. It improves responsiveness for many users.
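Time slicing can be illustrated with a toy round-robin simulation. This is only a sketch: the process names, burst lengths and quantum below are arbitrary choices, not any real scheduler's values.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling; return the (pid, slice) timeline."""
    ready = deque(bursts.items())      # ready queue of (pid, remaining time)
    timeline = []
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)  # each process gets at most one quantum
        timeline.append((pid, run))
        if remaining - run > 0:
            ready.append((pid, remaining - run))  # back of the ready queue
    return timeline

print(round_robin({"A": 5, "B": 3, "C": 1}, 2))
# [('A', 2), ('B', 2), ('C', 1), ('A', 2), ('B', 1), ('A', 1)]
```

Each process runs for at most one quantum before being rotated to the back of the queue, which is exactly the rapid switching that gives each user the illusion of a dedicated machine.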
The bootstrap program is the initial code run when a machine starts (after BIOS or firmware). It loads the operating system kernel into memory and starts execution of the OS.
Standard process states are Running (executing on CPU), Ready (waiting for CPU), and Waiting/Blocked (waiting for I/O or event). "Suspended" is used in some systems as an additional state when the process is swapped out, but it is not part of the classic simple lifecycle model.
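The classic three-state lifecycle can be sketched as a small state machine. The `State` enum and transition table below are illustrative assumptions, not any real kernel's data structures.

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"

# Legal transitions in the classic three-state lifecycle.
TRANSITIONS = {
    State.READY: {State.RUNNING},                 # dispatched by the scheduler
    State.RUNNING: {State.READY, State.WAITING},  # preempted, or blocks on I/O
    State.WAITING: {State.READY},                 # I/O or event completes
}

def move(current: State, target: State) -> State:
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

s = State.READY
s = move(s, State.RUNNING)   # scheduler dispatches the process
s = move(s, State.WAITING)   # process requests I/O and blocks
s = move(s, State.READY)     # I/O completes; back to the ready queue
```

Note that a waiting process cannot jump straight back onto the CPU; it must first return to the ready queue, which is why `WAITING -> RUNNING` is absent from the table.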
The dispatcher is the component of the operating system that gives control of the CPU to the process selected by the scheduler. It performs the context switch, switching from kernel mode to user mode, and jumps to the correct place in the user program to restart it. Its responsibilities include switching context, jumping to user code, and switching to user mode. The time the dispatcher takes (dispatch latency) affects how quickly a ready process begins execution, and therefore influences system responsiveness.
External fragmentation happens when free memory is split into small non-contiguous blocks, so a large request cannot be satisfied even though the total free memory would suffice. Internal fragmentation is wasted space inside an allocated block. Understanding fragmentation is key to memory management design.
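External fragmentation can be seen concretely in a minimal sketch. The hole sizes and the `first_fit` helper below are illustrative assumptions, not a real allocator.

```python
# Free holes (sizes in KB) left behind after earlier allocations and frees.
holes = [30, 25, 45]          # total free memory = 100 KB

def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

total_free = sum(holes)        # 100 KB free in total...
idx = first_fit(holes, 60)     # ...yet no single hole can hold 60 KB
print(total_free, idx)         # 100 None -> external fragmentation
```

The sum of free memory (100 KB) exceeds the 60 KB request, but because no single contiguous hole is large enough the allocation fails; that gap between total and usable free memory is exactly external fragmentation.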
Segmentation divides a process’s memory into logical segments such as code, data, stack or other modules. Each segment may be of variable size and can grow or shrink independently. The key difference from paging is that segments are logical units meaningful to the program, while pages are fixed size and meaningless to the program’s logic. Segmentation can suffer from external fragmentation because segments are variable-sized, and it requires more complex management; paging avoids external fragmentation but may suffer from internal fragmentation.
The Least Recently Used (LRU) algorithm replaces the page that has been idle longest, based on past usage. It approximates the optimal algorithm and helps reduce page faults. FIFO is simpler but may cause more faults; optimal is purely theoretical because it requires knowledge of future references; MRU is usually worse in general. Interviewers expect you to know the trade-offs in overhead and complexity.
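A minimal LRU simulation makes the fault-counting concrete. The reference string and frame count below are made-up example values.

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU for a reference string."""
    mem = OrderedDict()        # resident pages, least recently used first
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)          # hit: refresh the page's recency
        else:
            faults += 1                    # miss: page must be brought in
            if len(mem) == frames:
                mem.popitem(last=False)    # evict the least recently used page
            mem[page] = True
    return faults

print(lru_faults([1, 2, 3, 1, 4, 2], 3))   # 5 faults
```

The `OrderedDict` keeps pages sorted by recency of use, so the eviction victim is always at the front; a real kernel approximates this ordering with cheaper mechanisms such as reference bits.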
In linked allocation, each file is a linked list of disk blocks and the directory holds a pointer to the first block. This method avoids external fragmentation, but random access is slow because you must traverse the links. It is one of the basic allocation strategies asked in interviews.
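A toy sketch of linked allocation follows; the block numbers, file name and block contents are invented for illustration.

```python
# Disk simulated as a block table: block number -> (data, next block).
disk = {
    7:  ("he", 12),
    12: ("ll", 3),
    3:  ("o!", None),              # None marks the end of the file
}
directory = {"greeting.txt": 7}    # the directory stores only the first block

def read_file(name):
    """Read a whole file by chasing next-pointers from block to block."""
    block, parts = directory[name], []
    while block is not None:
        data, block = disk[block]
        parts.append(data)
    return "".join(parts)

print(read_file("greeting.txt"))   # hello!
```

Reaching byte N requires walking the chain from the first block, which is why random access is slow; schemes like FAT speed this up by pulling all the next-pointers into one table cached in memory.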
In Linux system directories, /etc is the standard directory where system configuration files are kept. This question tests familiarity with file system structure, which often shows up in interviews.
In Unix-style systems, to remove files from a directory the user must have write permission (to modify the directory contents) AND execute permission (to traverse the directory). A directory without the execute bit set cannot be traversed, so its entries cannot be reached at all. Knowledge of file and directory permissions is a common interview topic.
Spooling stands for Simultaneous Peripheral Operations On-Line. It uses an intermediate buffer or queue so that data destined to a slower device is stored temporarily and the system can move on. This improves efficiency for devices like printers.
A device driver is software (often part of the OS kernel or a loadable module) that abstracts hardware specifics and provides a uniform interface for higher layers of the OS. It handles initialization, data transfer, interrupts and cleanup for a device. This is often asked in interviews to test understanding of I/O subsystems.
The Banker’s algorithm is used for deadlock avoidance: before granting a request it simulates the allocation and checks whether the system would remain in a safe state. The same safe/unsafe-state reasoning, applied to the current allocation, underpins deadlock detection algorithms as well.
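The safety check at the heart of the Banker's algorithm can be sketched as below. The resource vectors are made-up example values, and this shows only the safety test, not the full request-granting protocol.

```python
def is_safe(available, allocation, need):
    """Banker's safety check: can every process finish in some order?"""
    work = available[:]                 # resources currently free
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases
                # everything it currently holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

# One resource type, 3 units free; P1 and P2 can finish first,
# after which P0's need of 5 can also be met -> safe state.
print(is_safe([3], [[5], [2], [2]], [[5], [2], [3]]))   # True
```

If no ordering lets every process finish, some `finished[i]` stays `False` and the state is unsafe; an avoidance scheme would then refuse the request that led to it.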
A mutex (mutual exclusion) is a locking mechanism that allows only one thread or process to access a critical section at a time. A semaphore is a signalling mechanism that can allow multiple threads (if count >1) or one thread (binary semaphore) to access a shared resource. A semaphore also supports operations like wait (P) and signal (V). Using semaphores you can implement various synchronization patterns including producer-consumer, but you must take care of issues like deadlock or priority inversion. Understanding this distinction is common in interviews.
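The producer-consumer pattern mentioned above can be sketched with Python's threading primitives. The buffer size and item count are arbitrary illustration choices; the two counting semaphores track free and filled slots while a mutex guards the buffer itself.

```python
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)   # counts free slots in the buffer
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # mutual exclusion on the buffer
results = []

def producer():
    for item in range(8):
        empty.acquire()                 # wait (P): claim a free slot
        with mutex:
            buffer.append(item)
        full.release()                  # signal (V): one more filled slot

def consumer():
    for _ in range(8):
        full.acquire()                  # wait (P): wait for a filled slot
        with mutex:
            results.append(buffer.popleft())
        empty.release()                 # signal (V): slot is free again

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(results)   # [0, 1, 2, 3, 4, 5, 6, 7]
```

The semaphores do the signalling (producer blocks when the buffer is full, consumer blocks when it is empty) while the mutex does the mutual exclusion, which is exactly the distinction the answer above draws.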
In a resource allocation graph where each resource has a single instance, a directed cycle implies that a set of processes are waiting in a circular chain for resources held by each other — this indicates a deadlock condition. Recognising graph-based detection is a common interview point.
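With single-instance resources, the deadlock check reduces to directed cycle detection on the graph. Below is a minimal DFS sketch; the graph shape and node names are hypothetical.

```python
def has_cycle(graph):
    """DFS cycle detection on a directed graph given as adjacency lists."""
    WHITE, GREY, BLACK = 0, 1, 2       # unvisited / on current path / done
    colour = {node: WHITE for node in graph}

    def visit(node):
        colour[node] = GREY
        for nxt in graph.get(node, []):
            if colour[nxt] == GREY:     # back edge to the current path -> cycle
                return True
            if colour[nxt] == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in graph)

# P1 waits for R1 held by P2; P2 waits for R2 held by P1 -> circular wait.
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(deadlocked))   # True
```

With multiple instances per resource a cycle is only a necessary condition, not a sufficient one, which is a follow-up interviewers often probe.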
In an operating system the CPU can operate in at least two modes: user mode and kernel mode. In user mode applications execute with limited privileges and cannot execute critical instructions or access sensitive hardware directly. In kernel mode the OS core and trusted code run with full privileges and can manage hardware, memory, interrupt handling and system resources. This separation is important for security because it prevents user‐level code (which may be malicious or buggy) from harming the system, accessing other processes’ memory or performing privileged operations. By restricting risky operations to kernel mode the OS reduces its attack surface.
A rootkit is a stealthy type of malicious software that embeds itself deep inside the operating system — often at kernel level — to gain elevated privileges, hide itself, intercept system calls or manipulate kernel data structures. Defending against rootkits requires OS features like integrity checking of kernel code and data, secure boot to verify initial code, memory protection (e.g., W^X), mandatory access control, module signing and monitoring suspicious behaviour. Also, isolating kernel modules and reducing trusted code (trusted computing base) limits the risk. Explaining how OS architecture helps mitigate rootkits shows strong interview understanding.
In virtualisation architectures a Type 1 (bare-metal) hypervisor runs directly on the host machine’s hardware, offering better performance and efficiency. A Type 2 hypervisor runs on a host operating system as an application layer, and then hosts guest operating systems. Knowing this distinction is a common interview point.
Live migration enables moving a virtual machine that is currently running from one physical host to another with minimal or no downtime. It is used for load balancing, maintenance, high availability. Interview questions often test how and why this works.
OS-level virtualisation is the technique where multiple isolated user-space instances (containers or zones) run on a single operating system kernel, sharing that kernel and underlying hardware. For example, technologies like Solaris Zones or OpenVZ implement OS-level virtualisation. Advantages include lightweight operation, near-native performance, fast startup, efficient resource use. Limitations include less isolation compared to full virtual machines (since kernel is shared), and inability to run different OS kernels or completely different operating systems. Demonstrating both sides (advantages and limitations) is key for interview depth.
In practice live migration is used to move running workloads off a host for maintenance without stopping services. While hardware compatibility matters, the key interview point is that migration enables high availability and service continuity. Explaining host compatibility, shared storage, network dependencies adds depth.
A device driver is a software component (often part of the OS kernel or a loadable module) that provides an interface between the operating system’s generic I/O framework and the specific hardware device. It handles initialization of the device, data transfer, interrupt handling, power management and clean-up. The OS kernel calls driver APIs to perform I/O, route interrupts to the driver, and manage resources like DMA channels or device memory. For interviews it is useful to highlight how drivers provide abstraction, manage concurrency and ensure safe access to hardware.
When multiple processes need I/O or device access simultaneously the OS must coordinate resource sharing. It does so by maintaining queues (ready queue, I/O queue), using device-queues, scheduling I/O requests, applying locking or semaphores to protect device registers, managing priorities, and ensuring fairness and avoidance of starvation. For example, a print spooler may queue all print requests and let the driver service them one by one. The OS might also use priority or time-slicing for devices. In interviews mention trade-offs: fairness vs throughput, device contention, prevention of starvation, and impact on CPU scheduling.
In OS scheduling and design, trade-offs often include choosing between faster response time (good for interactive tasks) and higher throughput (good for batch tasks). A policy that favours one may hurt the other. Recognising these trade-offs, and how real-world systems balance them, is a strong sign in interviews.
A context switch is the process of saving the state of the currently running process (such as register values, stack pointer, program counter) into its PCB and then loading the state of another process so it can run on the CPU. This lets the OS switch the CPU from one process to another. Minimising context switch time is important because each switch incurs overhead—saving and loading registers, updating memory maps, flushing translation lookaside buffers (TLB), and switching caches. High context‐switch overhead reduces CPU efficiency and can degrade overall system performance.
Programmed I/O is the simplest I/O method: the CPU repeatedly polls the device status and then transfers the data itself, keeping the CPU busy with I/O. Interrupt-driven I/O improves efficiency: the device signals the CPU when it is ready, and the CPU handles the transfer only then. Direct Memory Access (DMA) is the most efficient: the device controller or DMA hardware moves data between memory and the device without occupying the CPU, freeing it for other tasks. The trade-off is added hardware complexity and the need to synchronize memory and bus access.
Operating systems protect I/O devices and device data by controlling access via device files (in Unix) or driver interfaces, ensuring user processes cannot execute privileged I/O instructions directly. This is an important security and resource management concept.
A mutex lock is designed to provide mutual exclusion by allowing only one thread to hold the lock and enter a critical section at a time, thus preventing data races on shared data. Barriers wait for all threads, spin-locks busy-wait and semaphores may allow more than one holder unless used as binary; so understanding the subtle differences is important for interviews.
The Principle of Least Privilege states that processes, users or system components should operate using the minimum privileges necessary. In OS design it means separating functionality into modules, isolating drivers, reducing kernel-mode code, limiting permissions of user processes, and enforcing strong access controls (e.g., MAC). This reduces the risk that a compromised component causes widespread system damage. Implementing it might involve microkernel architecture, sandboxing, containerization or capability-based security models. Showing this design mindset adds depth in interviews.
A real-time operating system is designed to guarantee that tasks meet timing constraints (deadlines), not just logical correctness. In a hard RTOS missing a deadline is a failure. It typically offers deterministic scheduling, minimal latency, priority inversion protection, and often supports preemption of nearly any code. The trade-offs are that RTOS may sacrifice throughput or flexibility for predictability, may use static memory and simpler APIs, and often run fewer services. For interview answers mention examples (embedded systems, autopilot, medical devices) and contrast with general-purpose OS which target max utilisation, flexibility and feature-richness.