1. What best describes Infrastructure as Code?
Difficulty: Easy | Type: MCQ | Topic: IaC Basics
- Manually configuring infrastructure through dashboards
- Automating infrastructure provisioning using code or scripts
- Outsourcing infrastructure setup to vendors
- Running applications without servers
Infrastructure as Code, or IaC, is a DevOps practice that automates infrastructure provisioning through machine-readable configuration files or code. Instead of manual setup, engineers define and manage environments programmatically, ensuring consistency, scalability, and reproducibility across all deployments.
Correct Answer: Automating infrastructure provisioning using code or scripts
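The idea can be sketched in a few lines of Terraform HCL; the AMI ID and names below are placeholders, not a working configuration:

```hcl
# Hypothetical example: a server declared as code rather than
# clicked together in a console dashboard.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"
}
```

Because the definition lives in a file, it can be reviewed, versioned, and applied identically to every environment.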
2. Which statement correctly differentiates declarative and imperative approaches in IaC?
Difficulty: Medium | Type: MCQ | Topic: IaC Basics
- Declarative defines what final state should be, imperative defines how to reach it
- Declarative defines how to reach the state, imperative defines the desired state
- Both are identical in concept
- Imperative is only used in scripting languages
Declarative IaC tools describe the desired end state of infrastructure, letting the system handle the process to reach it (for example Terraform or CloudFormation). Imperative tools describe step-by-step procedures to create resources (for example Ansible or shell scripts). Understanding this difference helps choose tools based on control versus simplicity.
Correct Answer: Declarative defines what final state should be, imperative defines how to reach it
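The contrast can be made concrete with a small sketch; the bucket name is a placeholder:

```hcl
# Declarative: state WHAT should exist; Terraform works out how to get there.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket" # placeholder name
}

# An imperative equivalent would spell out HOW, step by step, e.g.:
#   aws s3api create-bucket --bucket example-logs-bucket
# plus manual checks for whether the bucket already exists, error
# handling, retries, and so on.
```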
3. Why is idempotency important in Infrastructure as Code?
Difficulty: Medium | Type: MCQ | Topic: IaC Basics
- It allows scripts to run faster
- It ensures that running code multiple times produces the same infrastructure state
- It automatically handles database migrations
- It allows only one execution of the code
Idempotency ensures that executing IaC repeatedly results in the same final infrastructure without unwanted changes. This makes automation safe and predictable. Tools like Terraform and Ansible enforce idempotency to prevent resource duplication or accidental modification of existing configurations.
Correct Answer: It ensures that running code multiple times produces the same infrastructure state
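An illustrative Ansible task shows the principle: `state: present` describes a target state, so a second run makes no further changes:

```yaml
# Hypothetical idempotent task: if nginx is already installed,
# re-running this reports "ok" instead of reinstalling anything.
- name: Ensure nginx is installed
  ansible.builtin.package:
    name: nginx
    state: present
```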
4. What does immutability mean in Infrastructure as Code?
Difficulty: Medium | Type: MCQ | Topic: IaC Basics
- Infrastructure resources are updated in place when needed
- Existing infrastructure is replaced rather than modified
- Infrastructure code can be changed by anyone
- Infrastructure is always mutable and flexible
Immutability means infrastructure components are not modified after creation. Instead, a new resource is built and swapped in. This reduces configuration drift and ensures reliable rollback, improving stability and predictability across deployments. Immutable infrastructure simplifies debugging and improves auditability.
Correct Answer: Existing infrastructure is replaced rather than modified
5. Why is Git commonly used with Infrastructure as Code?
Difficulty: Easy | Type: MCQ | Topic: Version Control
- It provides visual dashboards for infrastructure
- It allows storing, reviewing, and tracking infrastructure configurations as code
- It is mandatory for Terraform to run
- It automatically applies infrastructure changes
Version control systems like Git are essential in IaC because they allow collaboration, history tracking, code review, and rollback of configuration changes. This makes infrastructure changes auditable and aligns infrastructure management with software development practices.
Correct Answer: It allows storing, reviewing, and tracking infrastructure configurations as code
6. What is configuration drift in the context of IaC?
Difficulty: Medium | Type: MCQ | Topic: Drift Mgmt
- When servers are decommissioned automatically
- When infrastructure diverges from the declared configuration over time
- When multiple versions of Terraform are used
- When Git branches are out of sync
Configuration drift occurs when the actual infrastructure state changes without being reflected in the IaC definitions. It often happens due to manual changes or failed updates. Detecting and correcting drift ensures reliability and consistency across environments.
Correct Answer: When infrastructure diverges from the declared configuration over time
7. What is GitOps in relation to Infrastructure as Code?
Difficulty: Medium | Type: MCQ | Topic: GitOps
- Running Git inside infrastructure servers
- Using Git as the single source of truth for declarative infrastructure and deployments
- Deploying applications manually through Git
- Writing IaC only in YAML format
GitOps extends IaC principles by managing infrastructure and application deployments through Git repositories. Every change to infrastructure is done through pull requests and automatically applied by CI/CD pipelines, providing traceability, security, and consistent automation.
Correct Answer: Using Git as the single source of truth for declarative infrastructure and deployments
8. Which of the following is NOT an Infrastructure as Code tool?
Difficulty: Easy | Type: MCQ | Topic: IaC Tools
- Terraform
- Ansible
- AWS CloudFormation
- Jenkins
Jenkins is a continuous integration and delivery tool, not an IaC tool. Terraform, Ansible, and CloudFormation are examples of Infrastructure as Code tools used to define and manage infrastructure resources programmatically.
Correct Answer: Jenkins
9. Explain the main benefits of using Infrastructure as Code in a DevOps environment.
Difficulty: Medium | Type: Subjective | Topic: IaC Basics
Infrastructure as Code brings automation, consistency, scalability, and traceability to infrastructure management. By treating infrastructure definitions as code, teams can version control configurations, automate provisioning, reduce human error, and ensure identical environments across development, staging, and production. It also enables faster disaster recovery, reduces manual intervention, and integrates seamlessly with CI/CD workflows, supporting true DevOps culture.
10. What are the key principles that guide Infrastructure as Code design?
Difficulty: Medium | Type: Subjective | Topic: IaC Basics
The key principles of Infrastructure as Code include declarative definition, idempotency, immutability, modularity, and version control. Declarative definitions describe the desired end state of infrastructure. Idempotency ensures consistency across repeated executions. Immutability reduces configuration drift by replacing rather than mutating infrastructure. Modularity promotes reuse and maintainability. Finally, version control ensures traceability and collaboration across teams.
11. Why is state management critical in IaC tools like Terraform, and what problems can occur if it is mishandled?
Difficulty: Hard | Type: Subjective | Topic: Terraform State
State management records the current infrastructure configuration so that IaC tools know what exists, what to create, and what to destroy. In Terraform, the state file tracks resource metadata and dependencies. If state is mishandled or corrupted, it can cause drift between actual and expected infrastructure, lead to accidental resource deletions, or break dependency chains. Secure and consistent management of state—such as using remote backends with locking—is essential for collaboration and reliability.
12. How can teams detect and fix configuration drift in Infrastructure as Code environments?
Difficulty: Medium | Type: Subjective | Topic: Drift Mgmt
Teams detect drift by comparing the declared configuration files against the live infrastructure state. Tools like Terraform and CloudFormation have built-in commands to show drift or plan differences. Fixing drift involves reapplying the IaC code, enforcing immutable builds, or eliminating unauthorized manual changes. Implementing continuous compliance checks and version-controlled workflows helps maintain consistency and prevent future drift.
13. Describe a real-world scenario where Infrastructure as Code improved operational efficiency or reliability.
Difficulty: Hard | Type: Subjective | Topic: Real World
A typical scenario is an organization that used Terraform and Ansible to automate multi-environment cloud provisioning. Before IaC, each environment was manually configured, causing delays and inconsistencies. After implementing IaC, the same configurations were version-controlled, peer-reviewed, and applied automatically through pipelines. This reduced provisioning time from days to minutes, eliminated drift, improved compliance, and enabled quick rollback or scaling during incidents. Such outcomes demonstrate the tangible value of IaC in enterprise DevOps workflows.
14. What is Terraform primarily used for in DevOps?
Difficulty: Easy | Type: MCQ | Topic: IaC Basics
- Managing application code versions
- Automating infrastructure provisioning and management
- Monitoring server health
- Handling continuous integration builds
Terraform is an open-source tool developed by HashiCorp used for defining, provisioning, and managing cloud infrastructure using code.
It allows teams to define infrastructure resources such as networks, virtual machines, and databases in configuration files, which can then be versioned, reviewed, and applied repeatedly in a predictable way.
Correct Answer: Automating infrastructure provisioning and management
15. In Terraform, what is a provider?
Difficulty: Medium | Type: MCQ | Topic: Terraform Core
- A configuration variable
- A plugin that allows Terraform to interact with a specific platform or API
- A security feature for encrypting files
- A YAML template
Providers are the foundation of Terraform’s plugin architecture.
They define resources and data sources for specific cloud services or APIs such as AWS, Azure, GCP, or Kubernetes. Terraform uses providers to create, read, update, and delete infrastructure resources using the APIs of these platforms.
Correct Answer: A plugin that allows Terraform to interact with a specific platform or API
16. In Terraform, what does a 'resource' block represent?
Difficulty: Easy | Type: MCQ | Topic: Terraform Core
- A reusable Terraform module
- A single piece of infrastructure managed by Terraform
- A comment block in code
- An environment variable
A resource block defines a specific piece of infrastructure, such as a virtual machine, a subnet, or a storage bucket.
Each resource block tells Terraform what infrastructure to create and how to configure it. Resources are the basic building blocks of a Terraform configuration.
Correct Answer: A single piece of infrastructure managed by Terraform
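A provider block and a resource block typically appear together; this sketch uses placeholder names:

```hcl
# Provider block: tells Terraform which platform or API to talk to.
provider "aws" {
  region = "us-east-1"
}

# Resource block: one piece of infrastructure managed by Terraform.
resource "aws_s3_bucket" "assets" {
  bucket = "example-assets-bucket" # placeholder name
}
```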
17. What is the purpose of Terraform state files?
Difficulty: Medium | Type: MCQ | Topic: Terraform State
- They store user passwords for remote systems
- They keep track of infrastructure deployed and its current status
- They serve as backup copies of Terraform code
- They contain application logs
The Terraform state file acts as a source of truth for the deployed infrastructure.
It maps resources defined in configuration files to real-world infrastructure resources. The state file enables Terraform to detect changes, plan updates, and perform drift detection accurately. It should be secured and often stored remotely in S3 or similar backends.
Correct Answer: They keep track of infrastructure deployed and its current status
18. What does the command 'terraform init' do?
Difficulty: Easy | Type: MCQ | Topic: Terraform Flow
- Applies all configurations to the target infrastructure
- Initializes a working directory, downloads plugins, and sets up backend
- Deletes all infrastructure resources
- Validates the configuration files
The 'terraform init' command initializes the Terraform working directory.
It downloads the required provider plugins, sets up backend configuration if defined, and prepares the environment for running Terraform commands. It must be executed before any other Terraform operation like plan or apply.
Correct Answer: Initializes a working directory, downloads plugins, and sets up backend
19. What does the command 'terraform plan' display?
Difficulty: Medium | Type: MCQ | Topic: Terraform Flow
- The list of applied resources
- The difference between desired and current infrastructure state
- The debug logs of Terraform run
- The list of all providers installed
The 'terraform plan' command shows what Terraform will do before making any changes.
It compares the current state file with the configuration files and generates an execution plan. This preview ensures that teams can review and verify infrastructure changes before applying them.
Correct Answer: The difference between desired and current infrastructure state
20. What happens when you run 'terraform apply'?
Difficulty: Medium | Type: MCQ | Topic: Terraform Flow
- It destroys the infrastructure
- It shows the planned actions only
- It executes the plan and provisions or updates resources
- It uploads the state file to Git
The 'terraform apply' command executes the actions outlined in the plan.
It creates, updates, or deletes infrastructure resources to match the desired configuration. The command also updates the state file to reflect the latest deployed infrastructure state after a successful run.
Correct Answer: It executes the plan and provisions or updates resources
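The three commands discussed above form the core workflow; a typical session looks roughly like this (output omitted):

```shell
terraform init    # download providers, configure the backend
terraform plan    # preview: what would change, and why?
terraform apply   # execute the plan and update the state file
```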
21. What is the use of variables in Terraform configuration?
Difficulty: Medium | Type: MCQ | Topic: Terraform Core
- To hardcode sensitive data in files
- To make configurations dynamic and reusable
- To comment code sections
- To store local Terraform commands
Variables in Terraform allow configuration to be parameterized and reused.
They make code more flexible by externalizing values that can change across environments, such as instance types or region names. Variables can be defined in separate files, environment variables, or input prompts.
Correct Answer: To make configurations dynamic and reusable
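A small sketch of a variable and its reference; the default and AMI ID are placeholders:

```hcl
# Hypothetical variable: externalizes a value that differs per environment.
variable "instance_type" {
  type        = string
  default     = "t3.micro"
  description = "EC2 instance type for the web tier"
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder
  instance_type = var.instance_type       # referenced, not hardcoded
}
```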
22. Explain the role of the 'terraform.tfstate' file and why it must be handled carefully.
Difficulty: Medium | Type: Subjective | Topic: Terraform State
The 'terraform.tfstate' file stores the current state of infrastructure managed by Terraform.
It maps Terraform resources to their real cloud counterparts and tracks metadata such as resource IDs. Mishandling this file can cause Terraform to lose track of existing resources, potentially leading to accidental deletion or duplication.
For collaboration, it should be stored remotely using secure backends like AWS S3 or Terraform Cloud with proper locking and encryption enabled.
23. What is a remote backend in Terraform and what are its advantages?
Difficulty: Medium | Type: Subjective | Topic: Terraform Backend
A remote backend in Terraform is a configuration that stores the state file on an external service such as AWS S3, Azure Storage, or Terraform Cloud.
It enables team collaboration by providing shared access to the same state file, supports state locking to avoid race conditions, and improves security through encryption. Using a remote backend is essential in production environments to ensure consistency and data safety across multiple users.
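A remote backend is configured inside the `terraform` block; the bucket and table names here are placeholders:

```hcl
# Sketch of an S3 remote backend with DynamoDB-based state locking.
terraform {
  backend "s3" {
    bucket         = "example-tf-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks" # enables state locking
    encrypt        = true
  }
}
```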
24. What are Terraform modules and how do they promote reusability?
Difficulty: Hard | Type: Subjective | Topic: Terraform Modules
Modules in Terraform are reusable units of infrastructure code that group related resources together.
They enable teams to define standard patterns such as VPCs, EC2 instances, or security groups once and reuse them across multiple projects. This promotes consistency, simplifies maintenance, and improves collaboration. Modules can be local or sourced from public registries such as the Terraform Registry.
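Calling a module looks like this; the source path and inputs are illustrative:

```hcl
# Reuse a VPC pattern defined once, either locally or from a registry.
module "vpc" {
  source     = "./modules/vpc" # or a registry source such as "terraform-aws-modules/vpc/aws"
  cidr_block = "10.0.0.0/16"
  name       = "prod-vpc"
}
```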
25. Describe how Terraform handles resource lifecycle management and dependency ordering.
Difficulty: Hard | Type: Subjective | Topic: Terraform Core
Terraform automatically manages the lifecycle of infrastructure resources based on dependencies it infers from the configuration.
When one resource references another, Terraform creates a dependency graph to ensure proper creation and destruction order. The lifecycle meta-argument allows customization of behavior, such as creating before destroying or ignoring certain changes. This precise orchestration makes Terraform safe and reliable for managing complex infrastructure.
26. Explain a typical Terraform workflow in a team environment from code writing to deployment.
Difficulty: Hard | Type: Subjective | Topic: IaC Workflow
A typical Terraform workflow begins with writing configuration files in HCL to define resources.
The team initializes the working directory using 'terraform init', creates an execution plan using 'terraform plan', and reviews the changes through code review. Once approved, 'terraform apply' provisions the resources, and the state file is updated in a remote backend. The workflow often integrates with CI/CD pipelines for automated validation and deployment, ensuring traceability, collaboration, and compliance with DevOps practices.
27. Why are Terraform modules used in large-scale infrastructure projects?
Difficulty: Medium | Type: MCQ | Topic: Terraform Modules
- To add comments in code
- To group reusable configurations and maintain consistency
- To improve execution speed only
- To store credentials
Modules in Terraform help encapsulate reusable infrastructure components like networks, security groups, or compute instances.
They simplify large configurations, enforce best practices, and ensure consistency across multiple environments. Teams can publish modules internally or use verified public modules from the Terraform Registry.
Correct Answer: To group reusable configurations and maintain consistency
28. What is the primary advantage of using remote backends in Terraform?
Difficulty: Medium | Type: MCQ | Topic: Terraform Backend
- They enable multiple users to share and lock state files safely
- They make Terraform run faster
- They allow offline execution of Terraform commands
- They automatically delete old resources
Remote backends allow teams to store state files in a shared location like AWS S3 or Terraform Cloud.
They enable collaboration by preventing concurrent updates through state locking and ensure secure, versioned, and centralized management of Terraform state. This avoids conflicts and inconsistencies during team operations.
Correct Answer: They enable multiple users to share and lock state files safely
29. What is the role of Terraform workspaces?
Difficulty: Medium | Type: MCQ | Topic: Workspaces
- They provide separate state files for different environments like dev, staging, and prod
- They are used to store logs of Terraform runs
- They encrypt Terraform variables
- They define provider credentials
Terraform workspaces allow managing multiple environments from the same configuration code by maintaining separate state files.
This approach simplifies environment isolation without duplicating code, making it easier to manage lifecycle changes and reduce errors in environment promotion pipelines.
Correct Answer: They provide separate state files for different environments like dev, staging, and prod
30. What are Terraform provisioners used for?
Difficulty: Hard | Type: MCQ | Topic: Terraform Core
- To execute scripts or actions on resources during creation or destruction
- To manage Terraform provider updates
- To generate documentation automatically
- To encrypt the state file
Provisioners in Terraform allow executing scripts or actions on resources during creation or destruction (destroy-time provisioners run before the resource is removed).
For example, you can use the remote-exec or local-exec provisioner to install software or configure settings post-deployment. However, best practice discourages heavy use of provisioners because they introduce procedural logic into declarative IaC workflows.
Correct Answer: To execute scripts or actions on resources during creation or destruction
31. Which of the following lifecycle arguments can prevent accidental deletion of resources?
Difficulty: Hard | Type: MCQ | Topic: Terraform Core
- ignore_changes
- create_before_destroy
- prevent_destroy
- force_replace
The 'prevent_destroy' lifecycle argument in Terraform ensures that critical resources cannot be deleted accidentally.
When this argument is set to true, any attempt to delete the resource will fail unless explicitly overridden. It is commonly used for protecting production databases or key infrastructure components.
Correct Answer: prevent_destroy
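The guard is set inside a `lifecycle` block; the database resource here is illustrative:

```hcl
# Protect a critical resource: any plan that would delete it fails.
resource "aws_db_instance" "main" {
  # ... engine, instance class, credentials ...

  lifecycle {
    prevent_destroy = true
  }
}
```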
32. What does the 'depends_on' argument do in Terraform?
Difficulty: Medium | Type: MCQ | Topic: Terraform Core
- Defines variable values
- Specifies explicit dependencies between resources
- Locks the state file
- Creates new workspaces
The 'depends_on' meta-argument is used when Terraform cannot automatically infer dependencies from code.
It explicitly defines the order in which resources should be created or destroyed. This ensures Terraform builds infrastructure in a safe and predictable manner, avoiding resource conflicts.
Correct Answer: Specifies explicit dependencies between resources
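A sketch of explicit ordering, with placeholder resources:

```hcl
# No attribute reference links these resources, so Terraform cannot
# infer the ordering on its own; depends_on makes it explicit.
resource "aws_iam_role_policy" "app" {
  # ... policy definition ...
}

resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # placeholder
  instance_type = "t3.micro"

  depends_on = [aws_iam_role_policy.app] # create the policy first
}
```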
33. What does the 'terraform import' command do?
Difficulty: Medium | Type: MCQ | Topic: Terraform Import
- Deletes resources from the cloud
- Adds existing infrastructure into Terraform state management
- Generates new provider plugins
- Creates default workspaces automatically
The 'terraform import' command allows you to bring existing manually created resources under Terraform management.
It imports the resource into the state file without recreating it, enabling teams to transition legacy infrastructure into codified, managed environments safely.
Correct Answer: Adds existing infrastructure into Terraform state management
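The classic CLI form is `terraform import aws_s3_bucket.legacy example-legacy-bucket`; Terraform 1.5 and later also support a declarative `import` block. All names below are placeholders:

```hcl
# Declarative import (Terraform 1.5+): bring an existing bucket under management.
import {
  to = aws_s3_bucket.legacy
  id = "example-legacy-bucket" # the existing bucket's ID in AWS
}

resource "aws_s3_bucket" "legacy" {
  bucket = "example-legacy-bucket"
}
```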
34. How does Terraform detect drift between configuration and actual infrastructure?
Difficulty: Medium | Type: MCQ | Topic: Drift Mgmt
- By comparing the state file with the configuration and cloud provider API data
- By checking the Git commit history
- By using machine learning
- By relying on manual logs
Terraform detects drift by comparing the state file (what Terraform believes exists) with the live infrastructure fetched through provider APIs.
If any changes are detected, Terraform displays them in the plan output, allowing teams to review and reconcile differences before applying updates.
Correct Answer: By comparing the state file with the configuration and cloud provider API data
35. Explain Terraform state locking and why it is important in team environments.
Difficulty: Medium | Type: Subjective | Topic: State Locking
Terraform state locking prevents multiple users or processes from modifying the same state file simultaneously.
When a user runs a command that modifies infrastructure, Terraform locks the state file to avoid concurrent writes. Without locking, two users could apply conflicting changes, leading to corrupted state and unpredictable results. Remote backends like AWS S3 with DynamoDB or Terraform Cloud provide automatic state locking for safety in multi-user teams.
36. How should sensitive data such as passwords or keys be handled securely in Terraform?
Difficulty: Medium | Type: Subjective | Topic: Secrets Mgmt
Sensitive data should never be hardcoded directly into Terraform configuration files.
Instead, teams should use environment variables, encrypted variable files, or secret management services like AWS Secrets Manager or HashiCorp Vault. Terraform also supports the 'sensitive' attribute to mask values in logs and outputs. These practices help ensure compliance and prevent accidental data leaks.
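The `sensitive` attribute mentioned above looks like this; the variable name is illustrative:

```hcl
# Mark the value as sensitive so Terraform masks it in plan/apply output.
# Never give it a default; supply it via TF_VAR_db_password or a secrets manager.
variable "db_password" {
  type      = string
  sensitive = true
}
```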
37. What advantages does Terraform Cloud provide for enterprise IaC workflows?
Difficulty: Medium | Type: Subjective | Topic: Terraform Cloud
Terraform Cloud offers a managed environment for running Terraform securely at scale.
It provides features such as remote state storage, secure variable management, policy enforcement, collaboration controls, and integration with CI/CD pipelines. It simplifies team workflows by offering role-based access and centralized visibility into infrastructure changes across environments.
38. Describe how 'Policy as Code' integrates with Terraform in enterprise governance.
Difficulty: Hard | Type: Subjective | Topic: Policy Code
Policy as Code tools such as Sentinel or Open Policy Agent integrate with Terraform to enforce compliance and security policies programmatically.
They allow defining rules like approved instance types, region restrictions, or tag requirements that must pass before infrastructure is applied. This ensures that all deployments align with corporate governance, security, and cost management standards without manual reviews.
39. Explain how you would design a multi-environment setup in Terraform for dev, staging, and production.
Difficulty: Hard | Type: Subjective | Topic: Workspaces
A multi-environment Terraform setup typically uses workspaces or separate state files for each environment.
Configurations are modularized with environment-specific variable files that define distinct parameters such as instance size, region, or scaling limits. The same base modules are reused across environments, ensuring consistency. Remote backends are configured for each environment to isolate state and reduce risk, while CI/CD pipelines automate promotion from dev to production with approval gates.
40. What is Ansible primarily used for in DevOps?
Difficulty: Easy | Type: MCQ | Topic: IaC Basics
- Monitoring server performance
- Automating configuration management, provisioning, and deployment
- Developing web applications
- Handling version control
Ansible is an open-source automation tool used for configuration management, software provisioning, and application deployment.
It allows teams to define desired infrastructure and system states through YAML-based playbooks, ensuring consistent, repeatable, and error-free operations across multiple servers.
Correct Answer: Automating configuration management, provisioning, and deployment
41. Why is Ansible called an 'agentless' automation tool?
Difficulty: Medium | Type: MCQ | Topic: Ansible Basics
- It uses no configuration files
- It does not require agents to be installed on target systems
- It runs only on Windows machines
- It does not need network access
Ansible is agentless because it connects to target systems using standard protocols like SSH or WinRM instead of installing and maintaining separate agents.
This simplifies setup, reduces maintenance overhead, and improves security by minimizing running processes on managed nodes.
Correct Answer: It does not require agents to be installed on target systems
42. In Ansible, what is a playbook?
Difficulty: Medium | Type: MCQ | Topic: Ansible Playbooks
- A collection of YAML files defining tasks to be executed on managed hosts
- A shell script used to install Ansible
- A command-line debugging tool
- A storage location for log files
Playbooks are YAML-formatted files that describe the automation tasks Ansible performs on hosts.
They contain plays, which map groups of hosts to roles, and tasks, which define the steps required to reach the desired system state. Playbooks are human-readable and form the foundation of Ansible automation.
Correct Answer: A collection of YAML files defining tasks to be executed on managed hosts
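A minimal illustrative playbook with one play and two tasks; the host group name is a placeholder:

```yaml
# One play mapping the 'webservers' group to a desired state.
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```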
43. What is the purpose of the Ansible inventory file?
Difficulty: Medium | Type: MCQ | Topic: Ansible Inventory
- It stores user credentials
- It lists the managed hosts and groups on which Ansible will run tasks
- It defines playbook syntax
- It stores Ansible logs
The inventory file is a key component in Ansible that defines which hosts or groups of hosts the automation will target.
It can be static (written in INI or YAML) or dynamic (generated from cloud APIs). Inventory files make it easy to manage servers at scale and apply playbooks selectively.
Correct Answer: It lists the managed hosts and groups on which Ansible will run tasks
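A static INI inventory might look like this; all hostnames are placeholders:

```ini
; Hosts grouped by role; a parent group aggregates child groups.
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com

[production:children]
webservers
dbservers
```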
44. What does idempotency mean in Ansible?
Difficulty: Easy | Type: MCQ | Topic: IaC Basics
- Tasks always run multiple times regardless of state
- Running a task multiple times has no additional effect if the system is already in the desired state
- Tasks execute randomly to balance load
- Ansible stops execution after one failure
Idempotency ensures that running the same Ansible task multiple times does not change the system after it reaches the target state.
This guarantees predictability and safety, as repeated executions of playbooks will not break or reconfigure systems unnecessarily.
Correct Answer: Running a task multiple times has no additional effect if the system is already in the desired state
45. What is the purpose of roles in Ansible?
Difficulty: Medium | Type: MCQ | Topic: Ansible Roles
- To organize related tasks, variables, and handlers into reusable units
- To store secret data securely
- To manage playbook execution speed
- To configure cloud providers
Roles help structure Ansible projects by organizing related automation content.
Each role contains tasks, variables, templates, files, and handlers. Roles promote modularity and reusability, allowing teams to share and maintain clean, maintainable code bases across multiple playbooks or environments.
Correct Answer: To organize related tasks, variables, and handlers into reusable units
46. In Ansible, what are handlers used for?
Difficulty: Medium | Type: MCQ | Topic: Ansible Roles
- To execute actions only when notified by other tasks
- To manage playbook dependencies
- To store environment variables
- To generate dynamic inventories
Handlers are special tasks in Ansible that run only when notified by other tasks.
They are typically used to restart or reload services after a configuration change. This mechanism ensures that updates happen only when necessary, improving efficiency and reliability of deployments.
Correct Answer: To execute actions only when notified by other tasks
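The notify/handler pattern in a sketch; the template file is hypothetical:

```yaml
# A task notifies a handler; the handler runs once, at the end of the
# play, and only if some notifying task reported a change.
- name: Configure nginx
  hosts: webservers
  become: true
  tasks:
    - name: Deploy nginx config
      ansible.builtin.template:
        src: nginx.conf.j2 # hypothetical template file
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```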
47. What is Ansible Galaxy used for?
Difficulty: Medium | Type: MCQ | Topic: Ansible Roles
- To store playbook logs
- To share, download, and reuse community roles and collections
- To execute playbooks in the cloud
- To manage user authentication
Ansible Galaxy is a public repository and command-line tool for discovering and sharing roles and collections.
It allows teams to reuse trusted community content, saving time and ensuring consistent automation standards. Developers can publish or download reusable automation modules through the Galaxy hub.
Correct Answer: To share, download, and reuse community roles and collections
48. Explain the role of templates and Jinja2 in Ansible.
Difficulty: Medium | Type: Subjective | Topic: Ansible Templates
Templates in Ansible are files that use the Jinja2 templating engine to dynamically generate configuration files based on variables and logic.
They allow inserting variables, loops, and conditional statements into files before deployment. For example, you can use templates to create Nginx configuration files dynamically for different environments. This makes playbooks flexible and environment-aware while maintaining automation consistency.
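An illustrative `nginx.conf.j2` fragment showing a variable with a default and a loop over an inventory group (the variable and group names are hypothetical):

```jinja
# Rendered per host/environment before deployment.
worker_processes {{ nginx_workers | default(2) }};

upstream app {
{% for host in groups['appservers'] %}
    server {{ host }}:8080;
{% endfor %}
}
```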
49. What is Ansible Vault, and why is it important?
Difficulty: Medium | Type: Subjective | Topic: Ansible Vault
Ansible Vault is a security feature that allows you to encrypt sensitive data such as passwords, keys, or API tokens.
It ensures that confidential information is protected even when stored in version control. Vault files can be decrypted only with a password or key at runtime. This helps organizations maintain security compliance while automating infrastructure.
50. How can you handle errors and failed tasks in Ansible playbooks?
Difficulty: Medium | Type: Subjective | Topic: Debugging
Ansible provides several mechanisms for error handling, including 'ignore_errors', 'failed_when', and 'block/rescue/always' blocks.
You can allow playbooks to continue execution even after a task fails or trigger specific recovery actions. For critical tasks, conditions can be set to define what constitutes failure. These options make Ansible playbooks more resilient and production-ready.
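The block/rescue/always structure in a sketch; the deploy and rollback scripts are hypothetical:

```yaml
# Try the upgrade, recover on failure, always clean up.
- name: Upgrade application
  block:
    - name: Deploy new release
      ansible.builtin.command: /opt/app/deploy.sh # hypothetical script
  rescue:
    - name: Roll back to previous release
      ansible.builtin.command: /opt/app/rollback.sh # hypothetical script
  always:
    - name: Remove temporary release files
      ansible.builtin.file:
        path: /tmp/app-release
        state: absent
```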
51. Describe how dynamic inventory works in Ansible and where it is used.
Difficulty: Hard | Type: Subjective | Topic: Ansible Inventory
Dynamic inventory allows Ansible to fetch host information directly from external sources like AWS, Azure, or Kubernetes rather than using static files.
It uses scripts or plugins that query APIs to retrieve up-to-date lists of instances and their metadata. This is particularly useful in cloud or container environments where servers frequently change, ensuring accurate targeting of hosts during automation runs.
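For example, an AWS dynamic inventory is a small plugin configuration rather than a host list (the tag names here are assumptions; it requires the amazon.aws collection and valid AWS credentials):

```yaml
# inventory/aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  tag:Environment: production
  instance-state-name: running
keyed_groups:
  # Builds groups such as tag_Role_web from instance tags
  - key: tags.Role
    prefix: tag_Role
hostnames:
  - private-ip-address
```

Every run queries the EC2 API, so newly launched or terminated instances are reflected automatically.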
52. Explain a real-world use case where Ansible improved DevOps automation efficiency.
Difficulty: HardType: SubjectiveTopic: Real World
A common real-world example is automating configuration management for hundreds of web servers using Ansible.
Before automation, teams manually applied patches and configuration updates, which was error-prone and time-consuming. Using Ansible playbooks, updates were applied consistently across environments within minutes. Combined with Ansible Tower, this enabled centralized control, role-based access, and automated rollback, drastically improving reliability and deployment speed in production systems.
53. What is AWS CloudFormation primarily used for?
Difficulty: EasyType: MCQTopic: IaC Basics
- Automating server performance tuning
- Defining and provisioning AWS infrastructure as code
- Monitoring AWS services in real time
- Managing AWS billing and costs
AWS CloudFormation is a service that enables users to define and provision AWS resources in a declarative way using templates.
It automates resource creation, updates, and deletion, ensuring consistency and repeatability across environments. Templates can be written in JSON or YAML and represent the desired infrastructure state.
Correct Answer: Defining and provisioning AWS infrastructure as code
54. In AWS CloudFormation, what is a 'template'?
Difficulty: MediumType: MCQTopic: CloudFormation
- A monitoring dashboard for AWS services
- A declarative file that defines AWS resources and configurations
- A log file containing deployment history
- A JSON file used only for EC2 provisioning
A CloudFormation template is a declarative JSON or YAML file that defines the structure and configuration of AWS resources.
It describes parameters, resources, mappings, conditions, and outputs to create complete infrastructure stacks in a consistent, automated manner.
Correct Answer: A declarative file that defines AWS resources and configurations
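A minimal illustrative template showing the Parameters, Resources, and Outputs sections (the parameter and bucket names are hypothetical):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative template

Parameters:
  BucketNamePrefix:
    Type: String
    Default: demo

Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${BucketNamePrefix}-${AWS::AccountId}"

Outputs:
  BucketArn:
    Description: ARN of the created bucket
    Value: !GetAtt AppBucket.Arn
```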
55. What is an AWS CloudFormation stack?
Difficulty: MediumType: MCQTopic: CloudFormation
- A group of related AWS resources managed together
- A version of the CloudFormation CLI
- A single EC2 instance
- A configuration file used in Terraform
A CloudFormation stack is a collection of AWS resources that are created, updated, and deleted as a single unit.
Stacks simplify lifecycle management by linking related resources through one template. Updating or deleting a stack ensures consistent changes across the defined environment.
Correct Answer: A group of related AWS resources managed together
56. What is the purpose of 'parameters' in CloudFormation templates?
Difficulty: MediumType: MCQTopic: CloudFormation
- To store logs from CloudWatch
- To pass dynamic input values into templates for flexibility
- To define AWS service pricing
- To configure IAM permissions automatically
Parameters in CloudFormation templates make templates dynamic and reusable.
They allow users to provide input values during stack creation, such as instance types, region names, or key pairs. This promotes reusability and reduces duplication of templates across environments.
Correct Answer: To pass dynamic input values into templates for flexibility
57. What is the role of the 'Outputs' section in CloudFormation?
Difficulty: MediumType: MCQTopic: CloudFormation
- To generate monitoring alerts
- To display useful information such as resource IDs or URLs after stack creation
- To list all CloudFormation templates used
- To store backup copies of templates
The Outputs section in CloudFormation defines key information that users might need after stack deployment.
For example, it can output a load balancer DNS name or an S3 bucket URL. Outputs help integrate stacks or provide references for dependent infrastructure components.
Correct Answer: To display useful information such as resource IDs or URLs after stack creation
58. What does a CloudFormation StackSet allow you to do?
Difficulty: HardType: MCQTopic: CloudFormation
- Create a single stack on one AWS account
- Deploy the same stack across multiple AWS accounts and regions
- Monitor logs for CloudFormation operations
- Create dynamic pricing models
CloudFormation StackSets enable centralized deployment of stacks across multiple accounts and regions.
They allow consistent infrastructure configuration at scale and simplify enterprise-wide governance. StackSets are especially valuable for multi-account or multi-region AWS architectures.
Correct Answer: Deploy the same stack across multiple AWS accounts and regions
59. How does Pulumi differ from Terraform or CloudFormation?
Difficulty: MediumType: MCQTopic: Pulumi
- It uses general-purpose programming languages for defining infrastructure
- It only supports AWS Cloud
- It cannot be version-controlled
- It has no support for modules
Pulumi is a modern IaC tool that lets developers define infrastructure using general-purpose languages such as TypeScript, Python, or Go.
Unlike Terraform or CloudFormation, which use dedicated declarative configuration languages, Pulumi lets engineers express the desired state with familiar imperative constructs such as loops, functions, and classes, making it attractive for developer-centric workflows.
Correct Answer: It uses general-purpose programming languages for defining infrastructure
60. Which statement correctly describes Chef and Puppet?
Difficulty: MediumType: MCQTopic: CM Tools
- They are monitoring tools for cloud resources
- They are configuration management tools using declarative or imperative models
- They are only used for container orchestration
- They are Git-based version control systems
Chef and Puppet are configuration management tools that automate the setup, deployment, and management of servers.
They define system states either declaratively (Puppet) or through recipes (Chef), ensuring consistent environments and reducing manual configuration drift.
Correct Answer: They are configuration management tools using declarative or imperative models
61. Compare Terraform and CloudFormation in terms of platform support, language, and flexibility.
Difficulty: MediumType: SubjectiveTopic: IaC Tools
Terraform and CloudFormation are both powerful Infrastructure as Code tools, but they differ in scope and flexibility.
Terraform is cloud-agnostic and supports multiple providers such as AWS, Azure, and GCP using HashiCorp Configuration Language (HCL). CloudFormation, on the other hand, is AWS-specific and uses YAML or JSON templates. Terraform offers greater modularity, reusability, and portability across platforms, while CloudFormation provides deeper native integration with AWS services.
62. What is drift detection in CloudFormation and why is it important?
Difficulty: MediumType: SubjectiveTopic: Drift Mgmt
Drift detection in CloudFormation identifies differences between the actual state of resources and their declared configuration in the template.
It ensures infrastructure remains consistent with the defined specifications. Drift can occur due to manual changes or external updates. Detecting and correcting drift maintains compliance, reliability, and predictable behavior of automated deployments.
63. How do Chef and Puppet differ in their approach to configuration management?
Difficulty: MediumType: SubjectiveTopic: CM Tools
Chef uses a procedural approach through 'recipes' and 'cookbooks' written in Ruby, describing how systems should be configured step by step.
Puppet follows a declarative model, where the desired system state is defined, and the engine determines how to achieve it. Puppet's model-driven design is often easier to maintain at scale, while Chef offers more control and flexibility for complex workflows.
64. Explain the benefits and challenges of using Pulumi for Infrastructure as Code.
Difficulty: HardType: SubjectiveTopic: Pulumi
Pulumi allows engineers to define infrastructure using real programming languages like TypeScript or Python.
Benefits include strong integration with existing development workflows, use of familiar programming constructs, and enhanced testability. However, challenges include a steeper learning curve for operations teams and less simplicity compared to purely declarative tools like Terraform. Pulumi is well-suited for DevOps teams with strong software engineering backgrounds.
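A small sketch in Python shows how ordinary language constructs drive resource creation; it runs inside a Pulumi project via `pulumi up`, not as a standalone script, and the resource names are illustrative:

```python
import pulumi
import pulumi_aws as aws

# A plain Python loop creates one bucket per environment
environments = ["dev", "staging"]
buckets = []
for env in environments:
    bucket = aws.s3.Bucket(f"app-logs-{env}",
                           tags={"Environment": env})
    buckets.append(bucket)

# Export resource attributes for other stacks or tooling
pulumi.export("bucket_names", [b.id for b in buckets])
```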
65. In large enterprises, how can multiple IaC tools like Terraform, CloudFormation, and Ansible coexist effectively?
Difficulty: HardType: SubjectiveTopic: IaC Tools
Enterprises often combine IaC tools based on their strengths and ecosystem compatibility.
Terraform can be used for provisioning multi-cloud resources, CloudFormation for AWS-specific environments, and Ansible for configuration management after resource creation. By defining clear boundaries between provisioning, configuration, and deployment, teams maintain flexibility while ensuring unified governance and automation consistency.
66. How does Infrastructure as Code integrate with CI/CD pipelines?
Difficulty: EasyType: MCQTopic: IaC Basics
- By automating manual approvals only
- By enabling automated testing and deployment of infrastructure along with application code
- By replacing version control systems
- By managing only runtime logging
Infrastructure as Code integrates into CI/CD pipelines by automating the provisioning, validation, and deployment of infrastructure components.
This ensures that both application and infrastructure changes are version-controlled, tested, and deployed together, achieving full end-to-end automation and environment consistency.
Correct Answer: By enabling automated testing and deployment of infrastructure along with application code
67. What is a common use of Jenkins in an IaC workflow?
Difficulty: MediumType: MCQTopic: CI/CD
- Building Docker images only
- Automating Terraform and Ansible workflows through pipeline stages
- Storing encrypted secrets
- Managing cloud billing
Jenkins is widely used to automate Infrastructure as Code workflows.
Pipelines can execute Terraform commands such as init, plan, and apply, or run Ansible playbooks for configuration management. Jenkins integrates with version control and approval gates, enabling automated, reliable, and auditable infrastructure delivery.
Correct Answer: Automating Terraform and Ansible workflows through pipeline stages
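A declarative Jenkinsfile for such a workflow might be sketched as follows (the stage layout is an assumption, not a canonical pipeline):

```groovy
pipeline {
    agent any
    stages {
        stage('Init')    { steps { sh 'terraform init -input=false' } }
        stage('Plan')    { steps { sh 'terraform plan -out=tfplan -input=false' } }
        stage('Approve') { steps { input message: 'Apply this plan?' } }  // manual gate
        stage('Apply')   { steps { sh 'terraform apply -input=false tfplan' } }
    }
}
```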
68. What is the role of GitHub Actions in IaC pipelines?
Difficulty: MediumType: MCQTopic: CI/CD
- It allows developers to automate IaC validation and deployment directly from Git repositories
- It replaces Terraform CLI entirely
- It manages AWS accounts
- It is used for code linting only
GitHub Actions enable automation of Infrastructure as Code tasks directly in the repository.
Developers can trigger workflows on pull requests or commits to validate Terraform plans, deploy CloudFormation stacks, or run Ansible playbooks. This provides a lightweight CI/CD solution integrated with version control.
Correct Answer: It allows developers to automate IaC validation and deployment directly from Git repositories
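An illustrative workflow (the paths and secret names are assumptions) that validates and plans Terraform changes on every pull request:

```yaml
# .github/workflows/terraform.yml
name: terraform-plan
on:
  pull_request:
    paths: ["infra/**"]

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Validate and plan
        working-directory: infra
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          terraform init -input=false
          terraform validate
          terraform plan -input=false
```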
69. How does GitLab CI/CD improve IaC automation?
Difficulty: MediumType: MCQTopic: CI/CD
- By introducing manual configuration steps
- By automating Terraform and Ansible execution through pipeline stages with approvals and runners
- By hosting Ansible playbooks as static files
- By requiring local execution only
GitLab CI/CD automates IaC workflows using YAML-defined pipelines that execute Terraform or Ansible commands.
It supports pipeline stages for validation, planning, approval, and apply, enabling policy control and audit trails. GitLab Runners execute jobs in isolated environments, maintaining security and reliability.
Correct Answer: By automating Terraform and Ansible execution through pipeline stages with approvals and runners
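A sketch of such a pipeline (the image tag and stage layout are assumptions):

```yaml
# .gitlab-ci.yml
stages: [validate, plan, apply]

image:
  name: hashicorp/terraform:1.7
  entrypoint: [""]

validate:
  stage: validate
  script:
    - terraform init -input=false
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=tfplan -input=false
  artifacts:
    paths: [tfplan]

apply:
  stage: apply
  script:
    - terraform apply -input=false tfplan
  when: manual   # approval gate before changes are applied
```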
70. Why is Terraform plan validation important in CI/CD pipelines?
Difficulty: MediumType: MCQTopic: CI/CD
- It allows faster pipeline execution
- It helps detect configuration errors before applying changes
- It removes the need for state management
- It automatically approves all changes
Terraform plan validation in CI/CD pipelines ensures infrastructure changes are reviewed before they are applied.
The plan step compares current and desired states, helping teams catch misconfigurations early. This improves quality control and reduces production risks during automated deployments.
Correct Answer: It helps detect configuration errors before applying changes
71. How should sensitive secrets be handled in CI/CD pipelines for IaC?
Difficulty: MediumType: MCQTopic: Secrets Mgmt
- By hardcoding them into Terraform variables
- By storing them securely in vaults or encrypted CI/CD variables
- By committing them to Git repositories
- By sending them over email
Secrets such as API keys and credentials should never be stored in plaintext or version control.
Instead, they should be managed using secure secret vaults like HashiCorp Vault, AWS Secrets Manager, or encrypted CI/CD variables. This ensures secure automation while maintaining compliance with organizational policies.
Correct Answer: By storing them securely in vaults or encrypted CI/CD variables
72. What is 'policy as code' in CI/CD IaC pipelines?
Difficulty: HardType: MCQTopic: Policy Code
- Manually reviewing infrastructure changes
- Automating security and compliance checks through policies written as code
- Documenting CI/CD procedures
- Adding static IPs to resources
Policy as Code enforces security, compliance, and operational rules within CI/CD pipelines automatically.
Tools like Open Policy Agent or HashiCorp Sentinel evaluate Terraform plans against pre-defined rules before deployment. This ensures that only compliant and secure configurations are applied to production.
Correct Answer: Automating security and compliance checks through policies written as code
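For illustration, a Rego policy that rejects public S3 bucket ACLs in a Terraform plan (the JSON produced by `terraform show -json`) might look like this; the package name is an assumption:

```rego
package terraform.s3

import rego.v1

deny contains msg if {
    rc := input.resource_changes[_]
    rc.type == "aws_s3_bucket_acl"
    rc.change.after.acl == "public-read"
    msg := sprintf("bucket ACL must not be public: %s", [rc.address])
}
```

The pipeline evaluates the plan against this rule and fails the build if `deny` produces any messages.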
73. Which step in a CI/CD pipeline typically applies infrastructure changes?
Difficulty: MediumType: MCQTopic: CI/CD
- Plan stage
- Apply or deploy stage
- Build stage
- Validate stage
In CI/CD pipelines, the apply or deploy stage is responsible for executing Terraform apply or equivalent commands.
It performs actual infrastructure changes after successful validation and approvals, ensuring predictable and automated deployment workflows.
Correct Answer: Apply or deploy stage
74. Describe a typical CI/CD pipeline flow for deploying infrastructure using Terraform.
Difficulty: MediumType: SubjectiveTopic: CI/CD
A typical CI/CD pipeline for Infrastructure as Code follows several automated stages.
The first stage checks out code from version control and runs syntax validation. The next stage executes 'terraform plan' to generate an execution plan, followed by automated testing or peer review. Once approved, the 'apply' stage provisions infrastructure. Post-deployment, the pipeline may run drift detection, policy checks, and notifications. This structure ensures repeatable, auditable, and secure infrastructure delivery.
75. How can rollback strategies be implemented for IaC pipelines?
Difficulty: MediumType: SubjectiveTopic: CI/CD
Rollback in Infrastructure as Code pipelines can be managed by storing previous configurations in version control and using automated redeployments.
If an update fails, the pipeline reverts to the last known good configuration by applying a previous commit or state version. Additionally, versioned Terraform remote state and CloudFormation change sets or automatic stack rollback help restore infrastructure safely with minimal downtime.
76. What types of testing are commonly used in CI/CD pipelines for Infrastructure as Code?
Difficulty: MediumType: SubjectiveTopic: CI/CD
Testing Infrastructure as Code ensures reliability and consistency before deployment.
Common tests include syntax validation, policy enforcement, unit tests for modules, integration testing using test environments, and post-deployment verification using monitoring tools. Automated testing in pipelines reduces human error and ensures compliance with desired configurations.
77. Explain how IaC can be integrated with CI/CD to achieve continuous delivery of both infrastructure and application code.
Difficulty: HardType: SubjectiveTopic: CI/CD
Integrating IaC with CI/CD allows infrastructure and application code to evolve together in a synchronized, automated workflow.
Infrastructure changes are defined in code repositories and undergo the same lifecycle as application updates—build, test, plan, and deploy. Pipelines automatically provision infrastructure before deploying application builds. This ensures consistency, reduces drift, and enables blue-green or canary deployments with minimal downtime.
78. Why is compliance automation crucial in CI/CD pipelines using IaC?
Difficulty: HardType: SubjectiveTopic: Compliance
Compliance automation ensures that infrastructure deployments follow organizational, security, and regulatory policies without manual intervention.
By embedding automated checks in pipelines—such as enforcing encryption standards, tagging policies, or network restrictions—organizations can prevent misconfigurations before production deployment. This leads to faster, safer, and audit-ready infrastructure delivery aligned with DevSecOps principles.
79. Why is security a critical concern in Infrastructure as Code workflows?
Difficulty: EasyType: MCQTopic: IaC Basics
- Because IaC can increase manual errors
- Because IaC automates infrastructure creation and can misconfigure sensitive resources if not controlled
- Because IaC does not support encryption
- Because IaC tools always store secrets in plain text
IaC automates the deployment of large infrastructure environments, which includes security-sensitive resources such as networks, IAM roles, and databases.
Without proper governance, automated configurations can accidentally expose data, weaken access controls, or create unmonitored resources. Therefore, integrating security into every stage of the IaC lifecycle is essential.
Correct Answer: Because IaC automates infrastructure creation and can misconfigure sensitive resources if not controlled
80. What is the best way to handle sensitive data such as passwords in IaC?
Difficulty: MediumType: MCQTopic: Secrets Mgmt
- Hardcode credentials directly in the IaC files
- Store secrets in secure vaults or encrypted variables
- Use unencrypted environment variables
- Send secrets by email to the deployment team
Sensitive data must never be stored in plaintext or version control.
Instead, organizations use secret management tools such as HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault. These systems store credentials securely and integrate with IaC pipelines to inject secrets dynamically at runtime, maintaining security compliance.
Correct Answer: Store secrets in secure vaults or encrypted variables
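As a sketch (the secret name is hypothetical and must already exist), Terraform can read a credential from AWS Secrets Manager at apply time instead of hardcoding it:

```hcl
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/app/db-password"
}

resource "aws_db_instance" "app" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"
  # Injected at runtime; never written into the configuration files
  password          = data.aws_secretsmanager_secret_version.db.secret_string
}
```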
81. How can Identity and Access Management (IAM) be enforced through IaC?
Difficulty: MediumType: MCQTopic: IaC Security
- By defining IAM roles, policies, and permissions within code templates
- By creating roles manually after deployment
- By storing passwords in Terraform variables
- By removing access restrictions from IaC files
IAM configurations such as users, roles, and policies can be codified directly in IaC definitions.
By version-controlling IAM definitions, teams can standardize access permissions, track changes, and ensure least privilege principles are automatically enforced during deployment.
Correct Answer: By defining IAM roles, policies, and permissions within code templates
82. What is the purpose of Policy as Code in IaC security?
Difficulty: MediumType: MCQTopic: Policy Code
- To automate infrastructure testing
- To codify security and compliance rules that must be checked before deployment
- To generate documentation
- To manage virtual machines
Policy as Code ensures that security and compliance requirements are automated and consistent.
Tools like HashiCorp Sentinel or Open Policy Agent evaluate infrastructure plans against organizational rules, such as enforcing encryption, disallowing public buckets, or requiring tags. This helps prevent misconfigurations before reaching production.
Correct Answer: To codify security and compliance rules that must be checked before deployment
83. What is Open Policy Agent (OPA) used for in IaC governance?
Difficulty: MediumType: MCQTopic: Policy Code
- To monitor cloud costs
- To enforce custom compliance rules as code within CI/CD and IaC workflows
- To manage Terraform states
- To store access logs
Open Policy Agent is an open-source framework for writing and enforcing policies across systems.
OPA integrates with IaC tools like Terraform and Kubernetes to evaluate configurations against predefined compliance and security policies. It allows teams to automate enforcement of security standards during build or deployment stages.
Correct Answer: To enforce custom compliance rules as code within CI/CD and IaC workflows
84. Why is configuration drift a security risk in IaC-managed systems?
Difficulty: MediumType: MCQTopic: Drift Mgmt
- It allows faster deployments
- It creates inconsistencies between declared and actual states, possibly leaving insecure resources active
- It improves availability automatically
- It helps reduce audit complexity
Configuration drift occurs when manual or external changes alter the infrastructure outside of IaC control.
This can create security vulnerabilities if outdated or unauthorized configurations remain active. Regular drift detection ensures infrastructure remains compliant with declared, secure baselines.
Correct Answer: It creates inconsistencies between declared and actual states, possibly leaving insecure resources active
85. Which best practice enhances data protection in IaC deployments?
Difficulty: MediumType: MCQTopic: IaC Security
- Storing files without encryption
- Enabling encryption for storage, state files, and network traffic
- Allowing public access to S3 buckets
- Disabling TLS certificates
Encryption ensures that data at rest and in transit remains secure.
In IaC workflows, encrypting Terraform state files, cloud storage, and communication channels prevents unauthorized access to configuration data or credentials. This is essential for meeting enterprise compliance standards.
Correct Answer: Enabling encryption for storage, state files, and network traffic
86. Explain how governance frameworks are implemented within IaC pipelines.
Difficulty: MediumType: SubjectiveTopic: Compliance
Governance frameworks in IaC pipelines ensure that every infrastructure change complies with organizational and regulatory policies.
This is achieved through automated validations such as Policy as Code, access control checks, naming standards, and tagging enforcement. CI/CD pipelines include pre-deployment checks to block non-compliant changes. Centralized logging and audit trails ensure accountability and traceability of all infrastructure modifications.
87. Why are audit logs important in IaC governance, and how can they be managed?
Difficulty: MediumType: SubjectiveTopic: Compliance
Audit logs record all actions taken by IaC tools and users, providing visibility into changes and access patterns.
They are vital for detecting unauthorized activities, ensuring accountability, and maintaining compliance. Centralized log storage and integrations with systems like AWS CloudTrail or Azure Monitor help organizations correlate changes with users, commits, and deployments for end-to-end traceability.
88. What are common compliance standards that influence IaC security policies?
Difficulty: MediumType: SubjectiveTopic: Compliance
Infrastructure as Code practices are often governed by compliance frameworks such as ISO 27001, SOC 2, GDPR, and PCI DSS.
These frameworks dictate how data should be handled, who can access systems, and how configurations must be secured. IaC automation helps enforce these policies by embedding compliance checks directly into the deployment pipeline.
89. Describe how DevSecOps practices enhance IaC security and governance.
Difficulty: HardType: SubjectiveTopic: DevSecOps
DevSecOps integrates security at every phase of the DevOps lifecycle, including IaC automation.
By embedding scanning tools, policy enforcement, and secret management into pipelines, DevSecOps ensures that infrastructure is continuously validated for compliance and vulnerabilities. Teams shift security left, allowing faster remediation and proactive governance instead of relying on post-deployment audits.
90. How should incident response be integrated into IaC-managed environments?
Difficulty: HardType: SubjectiveTopic: Incident Response
Incident response in IaC environments must be automated and codified for speed and consistency.
Infrastructure definitions include response automation such as quarantining compromised instances, rotating credentials, and restoring secure configurations from version control. Integrating monitoring alerts with IaC pipelines allows immediate corrective actions and rollback of affected environments, reducing recovery time and limiting exposure.
91. Explain the role of centralized policy frameworks in large-scale IaC governance.
Difficulty: HardType: SubjectiveTopic: Policy Code
Centralized policy frameworks unify governance across multiple IaC tools, teams, and environments.
They define global security baselines, tagging rules, resource quotas, and compliance validations applied automatically across all pipelines. This reduces fragmentation, enforces consistency, and simplifies audits while empowering teams to innovate safely within approved boundaries.
92. What is a key benefit of using IaC in multi-cloud environments?
Difficulty: MediumType: MCQTopic: Multi Cloud
- Manual control of all infrastructure configurations
- Consistency and portability of deployments across different cloud providers
- Vendor lock-in with a single cloud provider
- Reduced automation capability
IaC enables organizations to define infrastructure configurations once and deploy them across multiple cloud environments consistently.
This reduces vendor lock-in and improves flexibility by standardizing provisioning processes across AWS, Azure, and GCP, ensuring portability and reproducibility.
Correct Answer: Consistency and portability of deployments across different cloud providers
93. How does Terraform interact with AWS resources?
Difficulty: MediumType: MCQTopic: Terraform AWS
- By directly modifying the AWS console
- Through AWS SDK integration without authentication
- By using the AWS provider plugin and API calls
- By running AWS CLI commands manually
Terraform interacts with AWS using the AWS provider, which communicates via AWS APIs.
The provider plugin translates Terraform configuration into API requests that create, update, or delete resources in AWS, ensuring accurate and secure provisioning through programmatic interfaces.
Correct Answer: By using the AWS provider plugin and API calls
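A minimal configuration illustrating the provider-plugin flow (the region and AMI ID are placeholders):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
  tags          = { Name = "web-server" }
}
```

On `terraform apply`, the AWS provider plugin translates this block into EC2 API calls using the credentials configured in the environment.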
94. What are Azure Resource Manager (ARM) templates used for?
Difficulty: MediumType: MCQTopic: Azure ARM
- To deploy applications in Kubernetes
- To define and deploy Azure resources using JSON-based IaC templates
- To configure AWS instances
- To manage Git repositories
Azure Resource Manager (ARM) templates are JSON files that declaratively define Azure resources and configurations.
They enable consistent, repeatable deployments of cloud infrastructure and integrate with Azure DevOps pipelines for automation and governance.
Correct Answer: To define and deploy Azure resources using JSON-based IaC templates
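A stripped-down illustrative template (the storage account resource and its API version are examples, not a recommendation):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2023-01-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```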
95. Which IaC tool is native to Google Cloud Platform for defining infrastructure?
Difficulty: MediumType: MCQTopic: GCP Deployment Manager
- Deployment Manager
- CloudFormation
- Ansible Tower
- Chef
Google Cloud Deployment Manager is the native IaC tool for Google Cloud.
It uses YAML configuration files, optionally combined with Jinja2 or Python templates, to define, configure, and deploy infrastructure resources declaratively across GCP projects and regions.
Correct Answer: Deployment Manager
96. In Kubernetes, what is the purpose of YAML configuration files?
Difficulty: EasyType: MCQTopic: K8s Manifests
- To manually edit Kubernetes logs
- To declaratively define cluster resources such as pods, services, and deployments
- To store passwords
- To execute shell commands
Kubernetes uses YAML files to define the desired state of cluster resources.
These configurations describe what resources should exist, their replicas, labels, and specifications. Kubernetes controllers then reconcile the current state to match the desired state automatically.
Correct Answer: To declaratively define cluster resources such as pods, services, and deployments
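For example, a minimal Deployment manifest declaring three replicas of an Nginx pod:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels: { app: web }
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If a pod is deleted or a node fails, the Deployment controller recreates pods until the running state matches the declared three replicas.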
97. What is the main function of Helm in Kubernetes IaC management?
Difficulty: MediumType: MCQTopic: Helm Charts
- It provides monitoring dashboards
- It manages Kubernetes manifests using versioned templates called charts
- It stores container logs
- It replaces the Kubernetes API server
Helm is a package manager for Kubernetes that simplifies deployment and management of applications.
Helm uses versioned templates called charts to define complex Kubernetes applications as reusable, parameterized packages, improving maintainability and deployment speed.
Correct Answer: It manages Kubernetes manifests using versioned templates called charts
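As a sketch (the chart name and values are hypothetical), a chart separates default values from templates, and an install command overrides them per environment:

```yaml
# values.yaml — chart defaults, overridable per environment
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"

# templates/deployment.yaml excerpt — Helm fills in values at install time:
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# Install or upgrade with an environment-specific override file:
#   helm upgrade --install web ./web-chart -f values-prod.yaml
```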
98. What does Kustomize allow you to do in Kubernetes IaC?
Difficulty: MediumType: MCQTopic: K8s Manifests
- To monitor pods in real time
- To customize existing Kubernetes YAML files without forking or duplicating them
- To compile Docker images
- To replace Helm charts completely
Kustomize allows developers to reuse and modify existing Kubernetes YAML files without creating separate copies.
By layering configurations, it supports environment-specific customization (for example, dev, staging, production) while maintaining a single source of truth for base templates.
Correct Answer: To customize existing Kubernetes YAML files without forking or duplicating them
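An illustrative layout (the names are assumptions): a shared base plus a production overlay that patches only the replica count:

```yaml
# base/kustomization.yaml — shared manifests
resources:
  - deployment.yaml
  - service.yaml

---
# overlays/prod/kustomization.yaml — reuses the base, patching only what differs
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
    target:
      kind: Deployment
      name: web
```

Running `kubectl apply -k overlays/prod` builds the base with the patch applied, leaving the base files untouched.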
99. Explain how Infrastructure as Code supports container-based deployments.
Difficulty: MediumType: SubjectiveTopic: Container Infra
Infrastructure as Code provides the automation backbone for provisioning, scaling, and managing containerized environments.
It automates setup of container orchestration tools like Kubernetes, Docker Swarm, and ECS, as well as network and storage infrastructure. IaC ensures that container clusters are consistent, version-controlled, and repeatable, making deployments fast and reliable across environments.
100. How can Terraform or Ansible manage Kubernetes clusters such as EKS, GKE, and AKS?
Difficulty: MediumType: SubjectiveTopic: Managed K8s
Terraform and Ansible can provision and configure managed Kubernetes clusters such as EKS (AWS), GKE (Google Cloud), and AKS (Azure).
Terraform handles cluster creation and networking through cloud provider APIs, while Ansible configures workloads, namespaces, and security policies post-deployment. This combination ensures full lifecycle automation from cluster creation to application delivery.
101. What challenges arise when managing IaC across multiple clouds, and how can they be mitigated?
Difficulty: MediumType: SubjectiveTopic: Multi Cloud
Managing IaC across multiple clouds introduces challenges like tool fragmentation, inconsistent APIs, and differing resource naming conventions.
To mitigate these, organizations adopt cloud-agnostic tools such as Terraform, use modular templates, maintain consistent naming and tagging standards, and enforce governance policies through central CI/CD pipelines. This provides uniform control and reduces operational complexity.
102. Describe how GitOps applies to cloud-native IaC and container management.
Difficulty: HardType: SubjectiveTopic: GitOps
GitOps extends IaC principles by managing infrastructure and application deployments through version-controlled Git repositories.
In cloud-native environments, GitOps automates Kubernetes cluster management using tools like Argo CD or Flux, which continuously reconcile Git configurations with the running state. This ensures auditability, reproducibility, and automated rollbacks of both infrastructure and workloads.
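The reconciliation loop described above is typically driven by a declarative manifest. A hedged sketch of an Argo CD `Application` follows; the repository URL, path, and namespaces are illustrative placeholders.

```yaml
# Sketch of an Argo CD Application that continuously reconciles a Git path
# into a cluster. Repo URL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-frontend
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/k8s-manifests.git
    targetRevision: main
    path: apps/web-frontend
  destination:
    server: https://kubernetes.default.svc
    namespace: web
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual cluster changes to match Git
```

With `selfHeal` enabled, any out-of-band change to the cluster is automatically reverted, which is what makes Git the single source of truth.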
103. How does IaC facilitate hybrid cloud architecture management?
Difficulty: HardType: SubjectiveTopic: Hybrid Cloud
IaC provides a unified, code-driven approach to manage infrastructure spanning on-premises and cloud environments.
By using tools like Terraform or Ansible, organizations can automate provisioning of both local servers and cloud resources with consistent policies and configurations. This ensures smooth integration, simplified scaling, and better compliance across hybrid infrastructures.
104. Explain how IaC contributes to resilience and disaster recovery in containerized cloud environments.
Difficulty: HardType: SubjectiveTopic: Resilience
IaC enhances resilience by codifying recovery procedures and infrastructure states.
In containerized environments, it allows automatic redeployment of clusters and workloads in new regions or zones during failures. Combined with backup automation, replicated state management, and immutable configurations, IaC enables rapid disaster recovery and minimal downtime across multi-cloud or hybrid setups.
105. Which command helps debug Terraform execution details?
Difficulty: MediumType: MCQTopic: Debugging
- terraform validate
- terraform plan
- TF_LOG=DEBUG terraform apply
- terraform fmt
Terraform has no '-debug' flag; detailed execution logging is enabled by setting the TF_LOG environment variable (TRACE, DEBUG, INFO, WARN, or ERROR) before running a command such as 'terraform apply'.
These logs trace provider interactions, API requests, and dependency graph evaluation, which helps diagnose issues with resource dependencies or unexpected provisioning behavior. TF_LOG_PATH can redirect the output to a file.
Correct Answer: TF_LOG=DEBUG terraform apply
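A typical debugging session looks like the following sketch; the log file path is an arbitrary choice.

```shell
# Enable verbose Terraform logging for the current shell session.
# TF_LOG levels: TRACE (most verbose), DEBUG, INFO, WARN, ERROR.
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-debug.log   # optional: capture logs to a file

terraform plan        # provider calls and graph evaluation are now logged

unset TF_LOG TF_LOG_PATH                   # turn verbose logging back off
```

Because TF_LOG applies to any Terraform command, the same variables work for 'init', 'plan', and 'apply' alike.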
106. What does the 'terraform validate' command do?
Difficulty: EasyType: MCQTopic: Debugging
- Applies changes to infrastructure
- Checks configuration syntax and logical structure before execution
- Compares live infrastructure state
- Creates a state backup
The 'terraform validate' command checks whether configuration files are syntactically correct and internally consistent.
It helps catch simple mistakes before running a plan or apply, preventing runtime errors and failed deployments.
Correct Answer: Checks configuration syntax and logical structure before execution
107. What should you do when the Terraform state file becomes inconsistent with real infrastructure?
Difficulty: MediumType: MCQTopic: Drift Mgmt
- Ignore the issue
- Use 'terraform refresh' or recreate the state using import commands
- Delete the state file and reapply blindly
- Manually edit the state file without backup
When the Terraform state diverges from actual infrastructure, running 'terraform refresh' (or, in Terraform 0.15.4 and later, the preferred 'terraform apply -refresh-only') updates the state to match real-world conditions.
If resources were created outside Terraform, they can be adopted into the state with 'terraform import'. This keeps Terraform's view of the environment accurate and consistent.
Correct Answer: Use 'terraform refresh' or recreate the state using import commands
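The two recovery paths can be sketched as follows; the resource address and instance ID are placeholders for illustration only.

```shell
# Reconcile state with reality (Terraform >= 0.15.4 syntax).
terraform apply -refresh-only          # update state to match live infrastructure

# Adopt a resource that was created manually outside Terraform.
# The resource address and instance ID below are placeholders.
terraform import aws_instance.web i-0abc123def4567890

# Verify the state now agrees with the configuration.
terraform plan
```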
108. Which Ansible feature helps print variable values and debug tasks during playbook runs?
Difficulty: MediumType: MCQTopic: Debugging
- debug module
- copy module
- include module
- template module
The Ansible 'debug' module prints variables or custom messages during playbook execution.
It helps troubleshoot variable values, task outputs, and logical conditions, making it easier to diagnose playbook behavior and fix configuration issues.
Correct Answer: debug module
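A minimal playbook sketch using the debug module is shown below; the host group, command, and variable names are illustrative.

```yaml
# Sketch: printing registered values during a playbook run with the debug module.
# Host group, command, and variable names are illustrative.
- hosts: webservers
  tasks:
    - name: Capture service status
      ansible.builtin.command: systemctl is-active nginx
      register: nginx_status
      changed_when: false          # read-only check, never reports "changed"

    - name: Show the registered result
      ansible.builtin.debug:
        var: nginx_status.stdout

    - name: Print a formatted message
      ansible.builtin.debug:
        msg: "nginx on {{ inventory_hostname }} is {{ nginx_status.stdout }}"
```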
109. Which block in Ansible is used for handling task failures?
Difficulty: MediumType: MCQTopic: Debugging
- block/rescue/always
- try/except
- on_error
- catch/finally
Ansible uses 'block', 'rescue', and 'always' blocks to handle errors gracefully.
Tasks in the block are executed normally; if a failure occurs, the rescue section runs recovery steps. The always section executes regardless of success or failure, ensuring predictable cleanup operations.
Correct Answer: block/rescue/always
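The structure reads like try/except/finally in other languages. A hedged sketch follows; the deploy and rollback scripts are hypothetical paths.

```yaml
# Sketch of Ansible error handling; script paths are hypothetical.
- hosts: appservers
  tasks:
    - block:
        - name: Deploy the new release
          ansible.builtin.command: /opt/app/deploy.sh
      rescue:
        - name: Roll back on failure        # runs only if the block fails
          ansible.builtin.command: /opt/app/rollback.sh
      always:
        - name: Clean up temporary files    # runs on success or failure
          ansible.builtin.file:
            path: /tmp/deploy-workdir
            state: absent
```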
110. Why is centralized logging important in IaC operations?
Difficulty: MediumType: MCQTopic: Logging
- To reduce network costs
- To maintain traceability, auditing, and easier debugging of automation runs
- To store logs for local debugging only
- To disable pipeline tracking
Centralized logging aggregates outputs from Terraform, Ansible, and CI/CD pipelines.
This helps identify the cause of failures quickly, provides audit trails for governance, and ensures consistent monitoring of automation workflows across environments.
Correct Answer: To maintain traceability, auditing, and easier debugging of automation runs
111. Why should IaC code always be stored in version control systems like Git?
Difficulty: EasyType: MCQTopic: Version Control
- To execute faster pipelines
- To enable tracking, rollback, and collaborative change management
- To remove the need for backups
- To increase infrastructure cost
Storing Infrastructure as Code in version control enables collaboration, code review, and history tracking.
It ensures every infrastructure change is auditable and reversible, aligning infrastructure management with software development best practices.
Correct Answer: To enable tracking, rollback, and collaborative change management
112. Why is modular design recommended for IaC projects?
Difficulty: MediumType: SubjectiveTopic: Best Practices
Modular design breaks large configurations into smaller, reusable components.
This promotes maintainability, reduces duplication, and improves readability. In Terraform, modules encapsulate resource logic; in Ansible, roles serve a similar purpose. Modularization enables consistent patterns, easier debugging, and faster onboarding for new team members.
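In Terraform, the reuse described above looks like calling one module with different inputs. A minimal sketch, assuming a local module at `./modules/network` with hypothetical `cidr_block` and `env` variables:

```hcl
# Sketch: one reusable module, instantiated per environment.
# The module path and variable names are illustrative assumptions.
module "network_prod" {
  source     = "./modules/network"
  cidr_block = "10.0.0.0/16"
  env        = "prod"
}

module "network_staging" {
  source     = "./modules/network"
  cidr_block = "10.1.0.0/16"
  env        = "staging"
}
```

Each instance gets its own state entries, so environments stay isolated while sharing one tested implementation.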
113. Describe effective rollback strategies for IaC environments.
Difficulty: MediumType: SubjectiveTopic: CI/CD
Rollback strategies in IaC rely on version control, immutable infrastructure, and automated redeployments.
If a deployment fails, previous configurations can be restored by reapplying earlier commits or state snapshots. Tools like Terraform Cloud and CloudFormation change sets simplify controlled rollbacks, minimizing downtime and risk during failure recovery.
114. What testing practices help ensure IaC quality and reliability?
Difficulty: MediumType: SubjectiveTopic: CI/CD
IaC testing includes syntax validation, policy enforcement, and automated integration testing.
Tools like Terratest and Molecule simulate infrastructure deployments, validating expected outputs before production. Continuous testing ensures early detection of errors, compliance with policies, and confidence in every code change.
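Before heavyweight integration tests like Terratest run, most pipelines gate merges on fast static checks. A sketch of such a CI stage for Terraform (the plan file name is arbitrary):

```shell
# Sketch of static IaC checks that might run in CI before merge.
terraform fmt -check -recursive   # fail if formatting is inconsistent
terraform init -backend=false     # install providers without touching remote state
terraform validate                # syntax and internal consistency
terraform plan -out=tfplan        # produce a reviewable change preview
```

Integration tests then apply the plan in a sandbox account and assert on outputs, catching errors that static validation cannot.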
115. How should teams collaborate effectively when managing shared IaC repositories?
Difficulty: MediumType: SubjectiveTopic: Collaboration
Teams collaborate on IaC by using branching strategies, code reviews, and automated validation pipelines.
Each change should pass peer review and CI validation before merging. Clear documentation, naming conventions, and tagging policies promote consistency and reduce merge conflicts. This approach aligns infrastructure updates with software engineering discipline.
116. Explain techniques to prevent configuration drift in IaC environments.
Difficulty: HardType: SubjectiveTopic: Drift Mgmt
Configuration drift is prevented by enforcing IaC as the single source of truth.
All infrastructure changes must go through code pipelines rather than manual edits. Automated drift detection tools, continuous compliance scans, and immutable infrastructure principles ensure environments remain synchronized with declared configurations.
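One lightweight drift-detection technique uses the exit codes of 'terraform plan' in a scheduled pipeline; a sketch:

```shell
# Sketch: detect drift in CI using plan's exit codes.
# -detailed-exitcode returns 0 (no changes), 1 (error), 2 (changes detected).
terraform plan -refresh-only -detailed-exitcode
case $? in
  0) echo "No drift detected" ;;
  2) echo "Drift detected: live infrastructure differs from code" ;;
  *) echo "Plan failed" ;;
esac
```

Exit code 2 can trigger an alert or an automated reconciliation run, keeping manual changes from silently accumulating.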
117. Why is idempotency essential for reliable IaC automation, and how can it be maintained?
Difficulty: HardType: SubjectiveTopic: IaC Basics
Idempotency ensures repeated executions of IaC scripts produce the same result without unwanted side effects.
It enables safe re-runs during automation failures and promotes consistency across environments. Idempotency is maintained through declarative IaC design, careful use of conditionals, and avoiding state-dependent logic within automation scripts.
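In practice, idempotency comes from describing desired state rather than actions. A minimal Ansible sketch: re-running these tasks reports "ok" instead of "changed" once the state is reached.

```yaml
# Sketch: declarative, state-based tasks that are safe to re-run.
- hosts: all
  become: true
  tasks:
    - name: Ensure nginx is installed       # no-op if already present
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure the service is running   # no-op if already started
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Contrast this with a raw shell command like `apt-get install nginx && systemctl start nginx`, which encodes steps rather than state and needs extra guards to be re-run safely.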
118. List key best practices to follow when implementing Infrastructure as Code at scale.
Difficulty: HardType: SubjectiveTopic: Best Practices
Key IaC best practices include version control, modular design, idempotency, policy enforcement, and secure secret management.
Organizations should implement automated testing, use remote state backends, and integrate with CI/CD pipelines. Regular audits, drift detection, and documentation ensure sustainable and secure infrastructure automation at enterprise scale.
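The remote state backend mentioned above can be sketched as a Terraform S3 backend with locking; the bucket, key, and DynamoDB table names are placeholders.

```hcl
# Sketch of a remote state backend with locking and encryption.
# Bucket, key, and table names are placeholders.
terraform {
  backend "s3" {
    bucket         = "example-tf-state"
    key            = "prod/network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"   # enables state locking
    encrypt        = true                 # encrypt state at rest
  }
}
```

Remote state with locking prevents two engineers (or two pipeline runs) from applying conflicting changes at the same time, which is essential at team scale.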