1. Which of these is a type of operating system?
Operating systems can be classified by how they manage tasks: batch, time-sharing, real-time, etc. A batch operating system schedules jobs without manual intervention.
Accenture · OS
Practice OS questions specifically asked in Accenture interviews – ideal for online test preparation, technical rounds and final HR discussions.
Go through each question and its explanation. Use this page for targeted revision just before your Accenture OS round.
For complete preparation, combine this company + subject page with full company-wise practice and subject-wise practice. You can also explore other companies and topics from the links below.
Typical process states are running (using the CPU), ready (waiting for the CPU), and waiting or blocked (waiting for I/O or an event). "Interrupted" is not normally listed as a standard state in the lifecycle model.
In a time-sharing system the CPU switches among many users’ interactive tasks rapidly, giving the illusion that each user has their own machine. The OS allocates short time slices to tasks and context switches quickly. It improves responsiveness for many users.
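The time-slicing idea above can be sketched as a round-robin scheduler. This is a toy simulation, not any real kernel's scheduler; the task names and burst times are made up for illustration.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin time slicing; returns task completion order.

    burst_times: dict mapping task name -> remaining CPU time units.
    """
    queue = deque(burst_times.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)                      # task completes in this slice
        else:
            queue.append((name, remaining - quantum))  # preempt and requeue
    return finished

# Three interactive tasks sharing the CPU with a 2-unit quantum
print(round_robin({"A": 5, "B": 2, "C": 4}, quantum=2))  # ['B', 'C', 'A']
```

Short tasks such as B finish quickly even though a longer task arrived first, which is exactly the responsiveness benefit time-sharing aims for.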
The bootstrap program is the initial code run when a machine starts (after BIOS or firmware). It loads the operating system kernel into memory and starts execution of the OS.
Standard process states are Running (executing on CPU), Ready (waiting for CPU), and Waiting/Blocked (waiting for I/O or event). "Suspended" is used in some systems as an additional state when the process is swapped out, but it is not part of the classic simple lifecycle model.
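The classic three-state lifecycle can be written down as a transition table. This is a minimal sketch of the model described above, not any particular kernel's implementation; the event names are assumptions chosen for readability.

```python
# Legal transitions in the classic three-state process lifecycle.
TRANSITIONS = {
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",    # time slice expired, back to ready queue
    ("running", "io_wait"): "waiting",  # blocked on I/O or an event
    ("waiting", "io_done"): "ready",    # event completed; must wait for CPU again
}

def next_state(state, event):
    """Return the next state, or raise if the transition is not in the model."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")

print(next_state("running", "io_wait"))  # waiting
print(next_state("waiting", "io_done"))  # ready
```

Note there is no direct waiting-to-running edge: a process whose I/O completes goes back to ready and competes for the CPU again.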
In contiguous memory allocation a process is allocated one single contiguous block of memory addresses. This makes address translation very simple and fast because the OS just adds an offset to a base address. The major advantage is low overhead in translation and good spatial locality. However the drawbacks include difficulty in fitting processes into memory when free blocks are fragmented (external fragmentation), and limited flexibility—process size must be known ahead and cannot easily grow dynamically. For interview success you can mention techniques like compaction which the OS may use to reduce external fragmentation.
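The base-plus-offset translation described above is simple enough to show directly. A hypothetical sketch, assuming a base register of 4000 and a limit of 1000 words:

```python
def translate(logical_addr, base, limit):
    """Contiguous allocation: physical address = base + offset, bounds-checked."""
    if not 0 <= logical_addr < limit:
        raise MemoryError("address outside the process's allocated block")
    return base + logical_addr

print(translate(100, base=4000, limit=1000))  # 4100
```

The whole translation is one comparison and one addition, which is why contiguous allocation has such low overhead compared with paging or segmentation.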
In linked allocation, each file is a linked list of disk blocks and the directory holds a pointer to the first block. This method avoids external fragmentation but random access is slow because you must traverse links. It’s one of the basic allocation strategies asked in interviews.
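The slow random access can be seen in a toy model. The disk here is a hypothetical dict mapping block number to (data, next-block pointer); the file layout is invented for illustration.

```python
def read_block(directory, disk, filename, n):
    """Linked allocation: reach block n by following pointers from the head.

    disk maps block number -> (data, next_block); next_block is None at EOF.
    """
    block = directory[filename]   # the directory stores only the first block
    for _ in range(n):            # O(n) pointer hops: why random access is slow
        _, block = disk[block]
        if block is None:
            raise IndexError("block index past end of file")
    return disk[block][0]

# A hypothetical 3-block file scattered across the disk
disk = {7: ("aa", 2), 2: ("bb", 9), 9: ("cc", None)}
print(read_block({"f.txt": 7}, disk, "f.txt", 2))  # cc
```

Reading block n always costs n hops from the head, whereas indexed allocation (e.g. inodes) reaches any block through one table lookup.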
Free-space management is how a file system tracks which blocks are free for allocation. Two common techniques are a free-list (linked list of free blocks) and a bitmap (array of bits where each bit indicates free or used). The free-list is simple and efficient for sequential allocation but may fragment and traverse slowly for large free lists. A bitmap allows quick access and compact representation, and supports efficient block allocation decisions, but requires scanning and may incur overhead for very large disks. Highlighting trade-offs is important in interview responses.
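The bitmap technique can be sketched in a few lines. This is a toy in-memory model (real file systems keep the bitmap on disk and scan it word-at-a-time), shown only to make the trade-off concrete.

```python
class Bitmap:
    """Free-space bitmap: bit i set means block i is in use (toy sketch)."""

    def __init__(self, nblocks):
        self.bits = [0] * nblocks

    def allocate(self):
        for i, used in enumerate(self.bits):  # linear scan for a free block
            if not used:
                self.bits[i] = 1
                return i
        raise OSError("disk full")

    def free(self, i):
        self.bits[i] = 0                      # O(1) to release a block

bm = Bitmap(4)
print(bm.allocate(), bm.allocate())  # 0 1
bm.free(0)
print(bm.allocate())                 # 0  (the freed block is reused first)
```

Allocation costs a scan, but freeing is a single bit flip and the whole map for a disk of N blocks fits in N bits, which is the compactness advantage mentioned above.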
A device driver is software (often part of the OS kernel or a loaded module) that abstracts hardware specifics and provides a uniform interface for higher layers of the OS. It handles initialization, data transfer, interrupts and cleanup for a device. This is often asked in interviews to test understanding of I/O subsystems.
The Banker’s algorithm is used for deadlock avoidance: before granting a resource request, it simulates the allocation and runs a safety check to see whether the system would remain in a safe state. It is an avoidance technique, though the same safe/unsafe-state reasoning underpins deadlock detection algorithms as well.
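The safety check at the heart of the algorithm can be sketched directly. The matrices below are a made-up textbook-style example with three processes and two resource types, not data from any real system.

```python
def is_safe(available, allocation, need):
    """Banker's safety check: can every process finish in some order?"""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)  # unfinished processes mean an unsafe state

print(is_safe([3, 3],
              allocation=[[0, 1], [2, 0], [3, 0]],
              need=[[5, 3], [1, 2], [2, 0]]))  # True (safe order: P1, P2, P0)
```

In full deadlock avoidance, the OS runs this check on the hypothetical state after each request and grants the request only if the result is safe.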
A mutex (mutual exclusion) is a locking mechanism that allows only one thread or process to access a critical section at a time. A semaphore is a signalling mechanism that can allow multiple threads (if count >1) or one thread (binary semaphore) to access a shared resource. A semaphore also supports operations like wait (P) and signal (V). Using semaphores you can implement various synchronization patterns including producer-consumer, but you must take care of issues like deadlock or priority inversion. Understanding this distinction is common in interviews.
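Both primitives appear together in the classic producer-consumer pattern mentioned above. A minimal sketch using Python's `threading` module, with a hypothetical buffer capacity of 3:

```python
import threading

buf, results = [], []
empty = threading.Semaphore(3)  # counting semaphore: free slots in the buffer
full = threading.Semaphore(0)   # counting semaphore: filled slots
mutex = threading.Lock()        # mutual exclusion around the shared buffer

def producer():
    for item in range(5):
        empty.acquire()          # wait (P): block until a slot is free
        with mutex:
            buf.append(item)
        full.release()           # signal (V): one more filled slot

def consumer():
    for _ in range(5):
        full.acquire()           # wait (P): block until an item exists
        with mutex:
            results.append(buf.pop(0))
        empty.release()          # signal (V): slot is free again

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 1, 2, 3, 4]
```

Note the division of labour: the semaphores count resources (slots and items) and handle signalling, while the mutex only guards the short critical section that touches the buffer.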
The Principle of Least Privilege means that every process, user or system component should be granted only the privileges needed to complete its work and no more. Applying this principle leads to fewer unwanted side‐effects, less malware risk and better containment of faults. Interviewers often ask this to check security awareness in OS design.
Live migration enables moving a virtual machine that is currently running from one physical host to another with minimal or no downtime. It is used for load balancing, maintenance, high availability. Interview questions often test how and why this works.
Nested virtualisation occurs when a virtual machine itself hosts another virtualisation layer (a VM inside a VM). This can be useful for development, testing or cloud provider architectures. Challenges include increased overhead, the need for hardware and hypervisor support (CPU extensions like VT-x/AMD-V must be forwarded to the guest), more layers of address translation (guest to host), greater resource contention and reduced performance. Discussing real-world scenarios or trade-offs strengthens your answer in interviews.
Load balancing in multi-processor or multi-server OS architectures aims to spread tasks so that no single node becomes a bottleneck, resources are utilised evenly, system throughput is maximised and performance stays predictable. This is particularly relevant to modern OS in data-centres and cloud settings.
For an operating system to support hot-plug devices (such as USB drives or external peripherals) it must allow device drivers (often kernel modules) to load and unload dynamically at runtime. This ensures that the system recognizes new devices and removes them safely without rebooting. This topic demonstrates system internals knowledge in interviews.
In a NUMA architecture each processor has local memory which it can access more quickly; accessing another processor’s memory or a shared region may incur higher latency. Modern OS kernels must be NUMA-aware when scheduling tasks or managing memory to optimise performance on large servers. Recognising this shows advanced OS knowledge.
A context switch is the process of saving the state of the currently running process (such as register values, stack pointer, program counter) into its PCB and then loading the state of another process so it can run on the CPU. This lets the OS switch the CPU from one process to another. Minimising context switch time is important because each switch incurs overhead—saving and loading registers, updating memory maps, flushing translation lookaside buffers (TLB), and switching caches. High context‐switch overhead reduces CPU efficiency and can degrade overall system performance.
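The save-then-load sequence can be sketched with a toy PCB. The CPU here is a hypothetical dict of registers and a program counter; a real switch happens in assembly and also touches memory maps and the TLB as noted above.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block: just the CPU state we save and restore."""
    pid: int
    pc: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, current, nxt):
    # 1. Save the running process's CPU state into its PCB.
    current.pc, current.registers = cpu["pc"], dict(cpu["regs"])
    # 2. Load the next process's saved state onto the CPU.
    cpu["pc"], cpu["regs"] = nxt.pc, dict(nxt.registers)

cpu = {"pc": 120, "regs": {"r0": 7}}
p1, p2 = PCB(pid=1), PCB(pid=2, pc=300, registers={"r0": 42})
context_switch(cpu, p1, p2)
print(cpu["pc"], p1.pc)  # 300 120
```

Everything inside `context_switch` is pure overhead (no user work is done), which is why keeping it fast matters for overall throughput.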
Thrashing occurs when the system spends the majority of its time swapping pages in and out of memory instead of executing actual processes. It happens when too many processes are active and there are insufficient frames to hold their working sets. As a result CPU utilisation drops. Prevention techniques include reducing the degree of multiprogramming, using the working-set model to determine how many frames a process needs, allocating enough frames, or using local rather than global page replacement. Recognising thrashing and advocating prevention shows maturity in interview responses.
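The working-set model mentioned above is simple to state in code: the working set at time t is the set of distinct pages referenced in the last Δ references. The reference string and window size below are invented for illustration.

```python
def working_set(reference_string, t, window):
    """Working-set model: distinct pages touched in the last `window` references."""
    start = max(0, t - window + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 4, 2, 4]
print(working_set(refs, t=6, window=4))  # {2, 3, 4}
```

If the sum of working-set sizes across all active processes exceeds the available frames, the OS should suspend a process rather than admit more, which is the anti-thrashing policy the model supports.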
Operating systems protect I/O devices and device data by controlling access via device files (in Unix) or driver interfaces, ensuring user processes cannot issue privileged I/O instructions directly. This is an important security and resource management concept.
A livelock is a situation where processes or threads continuously change their state in response to one another but none makes progress. Unlike deadlock (complete standstill because of waiting), livelock is active but unproductive. For example, two processes repeatedly yield to each other’s resource requests but neither gets the resource. In interviews mentioning contrast with deadlock, detection difficulty and prevention strategies adds depth.
The Principle of Least Privilege states that processes, users or system components should operate using the minimum privileges necessary. In OS design it means separating functionality into modules, isolating drivers, reducing kernel-mode code, limiting permissions of user processes, and enforcing strong access controls (e.g., MAC). This reduces the risk that a compromised component causes widespread system damage. Implementing it might involve microkernel architecture, sandboxing, containerization or capability-based security models. Showing this design mindset adds depth in interviews.
When migrating VMs at scale, challenges include compatibility of CPU features between source and target hosts, needing shared storage or having to transfer large amounts of memory state, network connectivity and IP continuity, performance hit and downtime, dependency on hypervisor versions, resource contention during migration and ensuring minimal service disruption. Mitigation includes live migration planning, using abstraction layers, homogeneous hardware clusters or CPU feature masking, shared storage or replication, and scheduling migration during low-utilisation windows. Mentioning real-world constraints and steps to address them enhances your answer.
Buffering is the technique of using a temporary memory area (buffer) to hold data while it is transferred between two devices or between device and memory. It helps decouple producer and consumer speeds (for example when a fast CPU writes to a slower peripheral). Buffering improves throughput, smooths bursts of data and reduces I/O invocation overhead. On the flip side, buffering consumes memory, can introduce latency (data waits in buffer), may cause inconsistency if not flushed correctly, and adds complexity (flush logic, overflow, memory management). Interviewers like it when you discuss both the benefit and trade-offs.
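Both the benefit (fewer device invocations) and the main pitfall (data stranded in the buffer until a flush) show up in a toy write buffer. The device here is just a list standing in for a slow peripheral; the capacity of 4 is an arbitrary choice.

```python
class BufferedWriter:
    """Toy write buffer: coalesces small writes, flushes when full."""

    def __init__(self, device, capacity=4):
        self.device, self.capacity, self.buf = device, capacity, []

    def write(self, byte):
        self.buf.append(byte)
        if len(self.buf) >= self.capacity:
            self.flush()              # one device call per `capacity` writes

    def flush(self):
        if self.buf:                  # forgetting to flush loses buffered data
            self.device.append(list(self.buf))
            self.buf.clear()

device = []                           # stands in for a slow peripheral
w = BufferedWriter(device)
for b in range(6):
    w.write(b)
print(device)   # [[0, 1, 2, 3]]  -- the last two bytes are still buffered
w.flush()
print(device)   # [[0, 1, 2, 3], [4, 5]]
```

Six writes turned into two device operations, but note that the data only became consistent after the explicit flush, which is the inconsistency risk the answer above mentions.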
In a multi-tenant or shared environment the OS must allocate devices fairly and securely among different virtual machines or processes. For example a hypervisor or OS may use IOMMU to partition DMA access, assign virtual functions (SR-IOV) for network cards, isolate GPUs via hardware virtualization, and use quotas or scheduling for access. Challenges include performance isolation (one tenant hogging device), security isolation (preventing DMA attacks or shared memory leakage), scheduling of I/O bandwidth, interrupt sharing, driver compatibility, and load balancing. In interview responses mention these challenges and how OS design or hypervisor can mitigate them — e.g., using hardware partitioning, IO-virtualization, and monitoring of usage.
A real-time operating system is designed to guarantee that tasks meet timing constraints (deadlines), not just logical correctness. In a hard RTOS missing a deadline is a failure. It typically offers deterministic scheduling, minimal latency, priority inversion protection, and often supports preemption of nearly any code. The trade-offs are that RTOS may sacrifice throughput or flexibility for predictability, may use static memory and simpler APIs, and often run fewer services. For interview answers mention examples (embedded systems, autopilot, medical devices) and contrast with general-purpose OS which target max utilisation, flexibility and feature-richness.