1. What does IAM stand for in cloud security?
IAM stands for Identity and Access Management. It controls who can access which resources in the cloud, enforcing authentication (who you are) and authorization (what you can do).
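The split between the two checks can be sketched in plain Python. The user table and policy format below are made up for illustration; real IAM uses signed requests and JSON policy documents.

```python
# Illustrative sketch of authentication vs. authorization.
# USERS and POLICIES are hypothetical stand-ins for an identity store
# and a set of granted permissions.

USERS = {"alice": "s3cret"}                              # authentication data
POLICIES = {"alice": {("s3", "read"), ("s3", "write")}}  # authorization data

def authenticate(user: str, password: str) -> bool:
    """Who are you? Verify the claimed identity."""
    return USERS.get(user) == password

def authorize(user: str, service: str, action: str) -> bool:
    """What can you do? Check the identity's granted permissions."""
    return (service, action) in POLICIES.get(user, set())

def handle_request(user, password, service, action):
    # A request succeeds only if both checks pass.
    return authenticate(user, password) and authorize(user, service, action)
```

Note that a valid identity with no matching permission is still rejected: passing authentication does not imply authorization.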
A Distributed Denial-of-Service (DDoS) attack overwhelms a server with excessive requests. Cloud providers offer built-in DDoS protection and rate limiting to defend against it.
A multi-cloud strategy uses services from more than one cloud vendor. It helps improve resilience, avoid vendor lock-in, and optimize performance by picking the best tools from each provider.
High availability comes from distributing workloads across multiple Availability Zones. If one zone fails, traffic automatically shifts to healthy instances through a load balancer.
Auto Scaling monitors metrics like CPU utilization and scales resources up or down. It ensures consistent performance while minimizing cost.
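The scale-up/scale-down decision can be sketched as a simple threshold rule. The thresholds and capacity bounds below are illustrative, not AWS defaults.

```python
# Minimal sketch of a CPU-based scaling decision, loosely modeled on
# threshold-triggered Auto Scaling. All numbers are illustrative.

def desired_capacity(current: int, cpu_percent: float,
                     high: float = 70.0, low: float = 30.0,
                     minimum: int = 1, maximum: int = 10) -> int:
    if cpu_percent > high:          # overloaded: add an instance
        return min(current + 1, maximum)
    if cpu_percent < low:           # underused: remove an instance
        return max(current - 1, minimum)
    return current                  # within the healthy band: no change
```

Clamping to `minimum` and `maximum` mirrors an Auto Scaling group's min/max size, which prevents runaway scaling in either direction.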
Combining instance types gives flexibility and savings. Critical services use on-demand or reserved capacity, while background tasks use cheaper spot instances.
Active-active setups run workloads simultaneously in multiple regions. They provide near-zero downtime during outages but cost more to operate.
Failure testing, often called chaos engineering, reveals weak points in infrastructure. By practicing recovery, teams ensure faster response times and more resilient systems.
Monitoring tracks predefined metrics, while observability allows deeper analysis from logs, traces, and metrics combined. Observability tools help understand unknown failures and complex microservice behaviors.
Apply least-privilege IAM roles, encrypt data, enable logging, and segment networks. Also ensure patching automation and enforce multi-factor authentication for critical access.
First, restore service using rollback or backup. Then conduct a post-mortem to identify the root cause, update runbooks, and improve monitoring to prevent recurrence.
AWS offers VPC Flow Logs and Azure offers Network Watcher for network traffic analysis, while tools like GCP's Cloud Armor focus on DDoS protection. Together, such services track traffic patterns and flag suspicious activity automatically.
IaC allows you to provision and manage cloud resources using declarative code. This ensures repeatable, version-controlled, and auditable infrastructure management.
Stateless apps make scaling easy because any instance can handle any request. Session data is stored in external caches or databases instead of local memory.
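A toy sketch of that pattern: two "instances" share session state through an external store (a dict standing in here for Redis or DynamoDB), so either one can serve any request.

```python
# Sketch: stateless instances share session state via an external store.
# SESSION_STORE is a hypothetical stand-in for Redis/DynamoDB.

SESSION_STORE = {}  # shared store; instances keep no local session state

def handle(instance_id: str, session_id: str, item: str) -> list:
    """Any instance can serve any request: state lives in the store."""
    cart = SESSION_STORE.setdefault(session_id, [])
    cart.append(item)
    return cart

# Requests for the same session can land on different instances.
handle("instance-a", "sess-1", "book")
result = handle("instance-b", "sess-1", "pen")
```

Because `instance_id` never touches the stored state, a load balancer is free to route each request anywhere.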
A Security Group acts as a virtual firewall for your instance. It controls inbound and outbound traffic based on defined port, protocol, and IP rules.
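Security Group semantics can be sketched as allow-only rule matching: traffic is permitted only if some rule matches, and everything else is implicitly denied. The rules below are illustrative.

```python
# Simplified sketch of Security Group inbound-rule matching.
# Security Groups are allow-only: no rule match means implicit deny.

import ipaddress

RULES = [
    # (protocol, port, allowed source CIDR) -- illustrative rules
    ("tcp", 443, "0.0.0.0/0"),       # HTTPS from anywhere
    ("tcp", 22, "203.0.113.0/24"),   # SSH only from an office range
]

def allowed(protocol: str, port: int, source_ip: str) -> bool:
    src = ipaddress.ip_address(source_ip)
    return any(
        protocol == p and port == prt and src in ipaddress.ip_network(cidr)
        for p, prt, cidr in RULES
    )
```

Real Security Groups are also stateful (return traffic for an allowed connection is permitted automatically), which this sketch omits.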
Automating backups ensures data protection even if an entire region fails. Cross-region replication adds extra durability and disaster recovery coverage.
Monitor CPU, memory, disk I/O, and network utilization to ensure healthy performance. Also track latency, error rates, and request counts to detect bottlenecks early.
Cost Explorer visualizes historical and forecasted spending across AWS accounts. It helps identify cost trends, unused resources, and opportunities to use savings plans or reserved instances.
Automation ensures repeatable, error-free configurations and faster deployments. It eliminates manual intervention for tasks like provisioning, scaling, and patching.
Automation tools eliminate repetitive manual steps and ensure consistency. This leads to faster provisioning, predictable outcomes, and reduced operational costs.
A hospital might keep patient data on-prem for compliance but use public cloud for analytics. This mix ensures both security and flexibility without moving sensitive data outside control.
Amazon CloudWatch monitors resource metrics like CPU, memory, and storage. You can set alarms and visualize trends for better operational insight.
The Shared Responsibility Model defines who manages which part of security. The provider secures the infrastructure (data centers, hardware), while users secure their data, identities, and configurations.
Serverless computing automatically handles scaling and billing per execution. This reduces costs for infrequent or unpredictable workloads.
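A back-of-envelope comparison shows why per-execution billing suits infrequent workloads. The prices below are placeholders for illustration, not quoted AWS rates.

```python
# Rough cost sketch: per-execution (serverless) billing vs. an
# always-on server. Rates are illustrative placeholders.

def serverless_monthly_cost(invocations: int, seconds_per_call: float,
                            gb_memory: float,
                            price_per_gb_second: float = 0.0000166667,
                            price_per_million_requests: float = 0.20) -> float:
    compute = invocations * seconds_per_call * gb_memory * price_per_gb_second
    requests = invocations / 1_000_000 * price_per_million_requests
    return compute + requests

# 100k short, small calls a month: pennies, vs. a server billed 24/7.
cost = serverless_monthly_cost(100_000, 0.2, 0.128)
```

For a steady high-traffic workload the comparison can flip, which is why per-execution billing is pitched at spiky or infrequent traffic.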
An AWS Region is a geographic area containing multiple isolated Availability Zones, each made up of one or more data centers. Regions are independent of one another to support redundancy and disaster recovery.
CloudWatch monitors performance metrics like CPU, memory, and disk utilization. CloudTrail records user activity and API calls. CloudWatch is about system health; CloudTrail is about accountability and security.
DynamoDB is a key-value and document-based NoSQL database. It provides fast and predictable performance with seamless scalability.
AWS Storage Gateway connects on-prem applications with cloud storage. It provides hybrid data access for backups, archives, or file sharing.
Alerts notify teams when metrics exceed defined limits. They help respond quickly to downtime or resource overuse.
Amazon RDS automates setup, patching, and backups for relational databases. It supports multiple engines like MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server.
Lifecycle policies automatically move or delete data based on age or access patterns. They help save costs and manage long-term data retention efficiently.
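The age-based tiering logic can be sketched as a simple rule ladder. The tier names mirror S3 storage classes, but the day thresholds are illustrative, not defaults.

```python
# Sketch of an age-based lifecycle policy: objects move to cheaper
# tiers as they age and are deleted after retention. Thresholds are
# illustrative.

def lifecycle_tier(age_days: int) -> str:
    if age_days >= 365:
        return "delete"             # past retention: expire the object
    if age_days >= 90:
        return "glacier"            # archive tier for cold data
    if age_days >= 30:
        return "infrequent-access"  # cheaper tier for rarely read data
    return "standard"               # hot tier for fresh data
```

Real lifecycle rules can also key on access patterns (as Intelligent-Tiering does) rather than age alone.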
Cloud providers offer encryption at rest using provider-managed or customer-managed keys. Data in transit is secured with TLS (the successor to SSL) to prevent interception and ensure privacy.
Managed databases handle routine operations like patching, scaling, backups, and failover. This saves time for DevOps teams and ensures consistent performance and security.
Cold storage is ideal for data that is rarely accessed but must be retained for compliance or analysis. It costs much less than hot storage but retrieval times are slower.
Implement unified IAM policies, encrypt data at rest and in transit, and use secure VPN or private links. Continuous monitoring and centralized logging keep visibility across all environments.
AWS CloudFormation allows you to define and provision infrastructure using JSON or YAML templates. It treats infrastructure as code, ensuring consistency across environments.
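A minimal template might look like the following sketch. `AWS::S3::Bucket` is a real resource type; the logical ID `ExampleBucket` is an illustrative name.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example - one versioned S3 bucket defined as code
Resources:
  ExampleBucket:             # logical ID used to reference the resource
    Type: AWS::S3::Bucket    # resource type CloudFormation provisions
    Properties:
      VersioningConfiguration:
        Status: Enabled
```

Deploying the same template into dev and prod accounts yields identical, auditable infrastructure, which is the point of treating it as code.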
AWS CodePipeline automates build, test, and deployment phases. It integrates with other AWS developer tools or third-party CI/CD systems.
Terraform is a multi-cloud open-source tool that uses HCL syntax. CloudFormation is AWS-native and uses JSON or YAML. Terraform is more flexible across providers, while CloudFormation offers deeper AWS integration.
A common example is using Jenkins to trigger Terraform scripts after a Git push. This pipeline automatically provisions cloud servers, deploys code, and notifies the team via Slack upon completion.
Vendor lock-in happens when applications rely heavily on one provider’s tools and APIs. Switching providers becomes expensive and complex due to compatibility issues.
Global load balancers manage incoming requests across multiple cloud regions or vendors. This improves fault tolerance and reduces latency for global applications.
Asynchronous replication copies data between environments without blocking operations. It ensures near-real-time synchronization between cloud and on-prem systems.
AWS Direct Connect and Azure ExpressRoute create dedicated private links. They offer better security and lower latency compared to public internet connections.
VM Import/Export allows transferring existing VMs to and from AWS. Similar tools exist in Azure (Migrate) and GCP (Migrate for Compute Engine).
Monitoring helps track system health, usage, and errors. It ensures services stay available and allows early detection of performance problems.
Amazon CloudWatch gathers logs, metrics, and alarms for AWS resources. It helps visualize performance and trigger alerts automatically.
AWS CloudTrail logs every API call, including who made it and from where. It’s essential for auditing, compliance, and incident investigations.
The AWS Billing Dashboard shows detailed spending reports and forecasted costs. It helps teams track budgets and detect abnormal spending early.
Budgets allow users to define monthly or project-based cost limits. Email or SNS alerts trigger when actual or forecasted costs exceed thresholds.
Centralized logging aggregates logs from multiple services into one location. It simplifies troubleshooting, security analysis, and compliance auditing by providing a single source of truth.
Object storage keeps data as objects with metadata and a unique ID. It is ideal for storing files, logs, media, and backups at scale.
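The object model can be sketched as a flat key-to-object map: each object bundles data, metadata, and a unique ID, and any "folder" in the key is just part of the name. The dict below is a hypothetical stand-in for a bucket.

```python
# Sketch of the object-storage model: a flat namespace mapping keys to
# objects that carry data, metadata, and a unique ID. No real hierarchy
# exists; "logs/2024/" is just a key prefix.

import uuid

BUCKET = {}  # key -> {"data": bytes, "metadata": dict, "id": str}

def put_object(key: str, data: bytes, **metadata) -> str:
    object_id = str(uuid.uuid4())  # unique ID per stored object
    BUCKET[key] = {"data": data, "metadata": metadata, "id": object_id}
    return object_id

def get_object(key: str) -> bytes:
    return BUCKET[key]["data"]

put_object("logs/2024/app.log", b"started", content_type="text/plain")
```

Listing by prefix (everything under `logs/2024/`) is how object stores simulate folders without maintaining a directory tree.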
Cloud providers handle the security *of* the cloud — like data centers and networks. Users manage security *in* the cloud — like access control and data protection.
Regions represent physical geographic areas like 'us-east-1'. Each region contains multiple availability zones for redundancy and high availability.
Elasticity means automatically adjusting computing resources based on demand. For example, a web app can scale up during high traffic and scale down when idle — saving cost while maintaining performance.
S3 has classes like Standard, Infrequent Access, Glacier, and Intelligent-Tiering. Standard is for frequent access, IA for backups, and Glacier for archiving long-term cold data at very low cost.
Amazon S3 (Simple Storage Service) is designed for storing and retrieving any amount of data. It’s highly durable, scalable, and accessible via the web.
A VPC is a logically isolated section of a cloud provider’s network. It lets you define subnets, routing tables, and firewalls to control communication between resources.
NACLs provide an additional security layer at the subnet level. They allow or deny traffic based on IP, protocol, and port, and apply before Security Groups.
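The layered evaluation order can be sketched as two filters in sequence: the subnet's NACL (which supports explicit deny rules) runs first, then the instance's Security Group (allow rules only). Port sets below are illustrative.

```python
# Sketch of layered inbound filtering: NACL at the subnet boundary
# first, then the Security Group at the instance. Rules are simplified
# to per-layer port sets.

NACL_DENY_PORTS = {23}            # NACLs can explicitly deny (e.g. telnet)
NACL_ALLOW_PORTS = {22, 80, 443}  # ...and explicitly allow
SG_ALLOW_PORTS = {443}            # Security Groups can only allow

def inbound_allowed(port: int) -> bool:
    # Layer 1: subnet-level NACL (evaluated first, supports deny rules)
    if port in NACL_DENY_PORTS or port not in NACL_ALLOW_PORTS:
        return False
    # Layer 2: instance-level Security Group (implicit deny if no match)
    return port in SG_ALLOW_PORTS
```

Traffic must pass both layers: port 22 clears the NACL here but is still dropped by the Security Group.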