Launched back in 2006, AWS has become the leading provider of on-demand cloud computing services. As of the last quarter of 2018, AWS held a staggering 32% of the cloud computing market.
Every aspiring developer looking to make it big in the cloud computing ecosphere must have a good understanding of AWS. If you’re eyeing the role of an AWS Developer, then the AWS interview questions we list here will help you prepare for that future.
The interview questions on AWS here are divided into basic and advanced questions, and should give you a strong, well-rounded understanding of the cloud computing provider.
Basic AWS Interview Questions
1. What is AWS?
Amazon Web Services (AWS) is a platform that offers safe cloud services, data storage facilities, computing platforms, content delivery, and various other associated services to the users.
2. What are the various types of AWS cloud products?
There are mainly three kinds of cloud service types that AWS products offer. These are:
- Computing: Auto-scaling, EC2, Lightsail, Elastic Beanstalk, and Lambda
- Storage: S3, Elastic File System, Elastic Block Storage, and Glacier
- Networking: VPC, Route53, and Amazon CloudFront
3. What is Auto-scaling?
Auto-scaling is a function that automatically provisions and launches new instances whenever demand requires it. It gives users the ability to increase or decrease resource capacity as demand changes.
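The scaling decision described above can be sketched as a target-tracking calculation. This is a simplified, illustrative model only, not the actual AWS implementation; the function name, defaults, and numbers are hypothetical:

```python
import math

def desired_capacity(current_instances, metric_value, target_value,
                     min_size=1, max_size=10):
    """Compute a new instance count so the per-instance metric
    (e.g. average CPU utilization) moves back toward the target."""
    if metric_value <= 0:
        return min_size
    raw = current_instances * (metric_value / target_value)
    return max(min_size, min(max_size, math.ceil(raw)))

# Load rises: 4 instances at 80% average CPU against a 50% target
print(desired_capacity(4, 80, 50))  # scales out to 7
# Load falls: 4 instances at 20% average CPU against a 50% target
print(desired_capacity(4, 20, 50))  # scales in to 2
```

In the real service, an Auto Scaling group evaluates CloudWatch metrics against a policy and clamps the result to the group's configured minimum and maximum sizes, much like the clamping above.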
4. Is there a difference between region and availability zone?
Yes. Regions are distinct geographical locations, such as US West (Northern California) or Asia Pacific (Mumbai). An Availability Zone is an isolated location within a region; each region contains multiple Availability Zones, and resources can be replicated across them when the need arises.
5. What do you understand by geo-targeting in CloudFront?
Geo-targeting in CloudFront supports the creation of customized content for a target audience based on the demands and needs of a specific geographical area. This helps businesses show personalized content to audiences in different geographic locations without changing the URL.
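As a sketch of how an origin server might act on this, CloudFront can forward the viewer's country to the origin in the `CloudFront-Viewer-Country` header. The handler function and content table below are hypothetical illustrations:

```python
# Illustrative content table keyed by ISO country code (hypothetical)
LOCALIZED_BANNERS = {
    "US": "Free shipping across the United States!",
    "IN": "Cash on delivery available in India!",
}
DEFAULT_BANNER = "Welcome to our store!"

def pick_banner(headers):
    """Choose localized content for the same URL based on the
    CloudFront-Viewer-Country header forwarded by CloudFront."""
    country = headers.get("CloudFront-Viewer-Country", "")
    return LOCALIZED_BANNERS.get(country, DEFAULT_BANNER)

print(pick_banner({"CloudFront-Viewer-Country": "IN"}))
print(pick_banner({}))  # no header: falls back to the default banner
```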
6. What are the steps involved in CloudFormation?
There are four steps involved in a CloudFormation solution. These are:
- Step 1: Creating a CloudFormation template in YAML or JSON format
- Step 2: Saving the code in an S3 bucket, which serves as the repository for the code
- Step 3: Using AWS CloudFormation to call the bucket and create a stack from the template
- Step 4: CloudFormation reads the file and understands the services that are called, their order, their relationships, and the associated provisioning
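Step 1 can be illustrated with a minimal template body, expressed here as a Python dict and serialized to JSON. The resource name and bucket name are hypothetical placeholders:

```python
import json

# A minimal CloudFormation template describing a single S3 bucket;
# "CodeBucket" and the bucket name are made-up examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal stack: one S3 bucket",
    "Resources": {
        "CodeBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-code-bucket"},
        }
    },
}

template_body = json.dumps(template, indent=2)
print(template_body)
```

With the real service, this body would then be handed to CloudFormation (for example via the console or an SDK's `create_stack` call) to carry out Steps 3 and 4.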
7. Which AWS tools help you recognize that you’re paying more than required?
There are four such tools available in AWS. These are:
- Checking the Top Services table in the Billing and Cost Management dashboard
- AWS Budgets
- Cost allocation tags
- Cost Explorer
8. What is S3 in AWS?
S3 stands for Simple Storage Service. It is used to store and retrieve any amount of data, at any time, from anywhere in the world using the web. The service follows a "Pay As You Go" payment model.
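The "Pay As You Go" model can be sketched as a back-of-the-envelope monthly bill: storage plus request charges. The prices below are illustrative placeholders, not current AWS rates:

```python
# Hypothetical unit prices, for illustration only (not real AWS rates)
PRICE_PER_GB_MONTH = 0.023   # storage, USD per GB-month
PRICE_PER_1K_PUT = 0.005     # USD per 1,000 PUT/POST requests
PRICE_PER_10K_GET = 0.004    # USD per 10,000 GET requests

def monthly_cost(gb_stored, put_requests, get_requests):
    """Estimate a month's S3-style bill from usage figures."""
    return round(
        gb_stored * PRICE_PER_GB_MONTH
        + (put_requests / 1_000) * PRICE_PER_1K_PUT
        + (get_requests / 10_000) * PRICE_PER_10K_GET,
        2,
    )

# 100 GB stored, 10k uploads, 1M downloads in a month
print(monthly_cost(100, 10_000, 1_000_000))  # 2.75
```

The point of the model: you pay only for what you actually store and request, with no upfront capacity purchase.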
9. What is AMI?
Amazon Machine Image (AMI) is a template that provides the information required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. The information covers the operating system, the application server, and applications. Many instances can be launched at one time from different AMIs as per your requirements.
10. What is the relation between AMI and Instance?
Instances are launched from AMIs. One AMI can launch as many instances as required. An instance type defines the hardware of the host computer, including its compute and memory capabilities. After launch, an instance behaves like a traditional host, and you can interact with it as you would with any other computer.
11. What are the inclusions in AMI?
There are three inclusions in an AMI:
- A template for the root volume of the instance
- Block device mapping that determines the volumes to attach to the instance when it is launched
- Launch permissions that decide which AWS accounts can use the AMI to launch instances
12. Can we send a request to Amazon S3?
Yes, we can send a request to Amazon S3 by using the REST API or the AWS SDK wrapper libraries which wrap the underlying Amazon S3 REST API.
13. What are the main differences between EC2 and S3?
The main differences between EC2 and S3 are stated below.

| EC2 | S3 |
| --- | --- |
| A cloud web service | A data storage system |
| Used for hosting web applications | Used for storing data |
| Works like a huge computing machine | Exposes a REST interface |
| Can run Linux or Windows and handle PHP, Python, Apache, and various other databases | Uses secure authentication keys such as HMAC-SHA1 |
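The HMAC-SHA1 authentication mentioned here can be illustrated with the standard library: the legacy S3 request signature is essentially a base64-encoded HMAC-SHA1 of a canonical "string to sign" using your secret key. The key and string below are made-up examples:

```python
import base64
import hashlib
import hmac

def sign_string(secret_key: str, string_to_sign: str) -> str:
    """base64(HMAC-SHA1(secret, string_to_sign)), the shape of the
    legacy S3 request signature (illustration, not a full client)."""
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# Hypothetical secret key and canonical string, for illustration only
string_to_sign = "GET\n\n\nTue, 27 Mar 2007 19:36:42 +0000\n/examplebucket/photo.jpg"
print(sign_string("EXAMPLE-SECRET-KEY", string_to_sign))
```

Newer AWS APIs use Signature Version 4 (HMAC-SHA256), but the underlying idea of signing a canonical request with a secret key is the same.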
14. Can buckets be created in AWS accounts?
Yes, buckets can be created in AWS accounts. By default up to 100 buckets can be created in the AWS account.
15. What is a T2 Instance?
A T2 instance is specifically designed to offer moderate baseline performance and the ability to burst into higher performance as demanded by the workload.
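The burst behaviour can be modelled as a simple CPU-credit bank: credits accrue while the instance runs below its baseline and are spent when it bursts above it. The rates, cap, and function below are illustrative, not actual AWS accounting:

```python
# Hypothetical credit rates, for illustration only
EARN_RATE = 6      # credits earned per hour while below baseline
BURST_COST = 60    # credits spent per hour of full-speed bursting

def credits_after(hours_idle, hours_bursting, starting=0, cap=144):
    """Return the remaining CPU-credit balance in this toy model."""
    credits = min(cap, starting + hours_idle * EARN_RATE)
    return max(0, credits - hours_bursting * BURST_COST)

print(credits_after(hours_idle=10, hours_bursting=0))  # banked 60 credits
print(credits_after(hours_idle=10, hours_bursting=1))  # one burst hour drains them
```

Once the balance hits zero, a real T2 instance is throttled back to its baseline performance until it earns credits again.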
16. What are the different kinds of instances?
The different kinds of instances include the following.
- Accelerated Computing Instance
- Memory-Optimized Instance
- Storage Optimized Instance
- Compute Optimized Instance
- General Purpose Instance
17. Does Amazon VPC support the property of broadcast or multicast?
No, Amazon VPC does not support the property of broadcast or multicast.
18. Can we create Elastic IPs in AWS?
Yes, we can create Elastic IPs in AWS. By default, five VPC Elastic IP addresses are allowed per region under each AWS account.
19. What is a default storage class in S3?
The default storage class in S3 is S3 Standard, which is designed for frequently accessed data.
20. What are roles in AWS?
Roles in AWS are used to grant permissions to entities that are trusted within the AWS account. They are similar to users, but do not require a username and password; instead, they provide temporary credentials for working with various resources in AWS.
21. What are edge locations in AWS?
Edge locations in AWS are data centers that deliver as low a latency as possible, i.e., these data centers are physically close to the client. When a user requests content, the request is automatically routed to the nearest edge location for the fastest response.
22. What is VPC?
The full form of VPC is Virtual Private Cloud. VPC helps in customizing the network configuration process. It acts as a network that is logically isolated from various other networks in the cloud. VPC allows the users to have an IP address range, security groups, subnet and internet gateways.
23. What is a Snowball?
A Snowball in AWS is a data transport option. It uses secure devices to transfer a large amount of data in and out of the cloud. Snowball can be used for the transfer of massive data from one place to another, and helps reduce networking costs.
24. What is Redshift?
Redshift is a fast and powerful data warehouse product that performs data analytics and large-scale database migrations. Each Redshift data warehouse contains a set of nodes arranged in what is called a cluster. Clients can run exceptionally fast queries to gain insights from large volumes of data. The most common applications are real-time analytics, business intelligence, and log analysis.
25. What is a Subnet?
A subnet is a range of IP addresses carved out of a VPC's larger address block. By default, up to 200 subnets can exist per VPC.
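Carving a VPC's address block into subnets can be demonstrated with Python's standard `ipaddress` module; the CIDR ranges here are just example values:

```python
import ipaddress

# An example VPC CIDR block (65,536 addresses)
vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 range into /24 subnets (256 addresses each)
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))   # 256 possible /24 subnets
print(subnets[0])     # 10.0.0.0/24
print(subnets[1])     # 10.0.1.0/24
```

In a real VPC you would typically create only a handful of these, e.g. one public and one private subnet per Availability Zone.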
26. What is SQS?
Simple Queue Service (SQS) offers a distributed message queuing service that acts as a mediator between two components, letting them communicate without being directly connected to each other.
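The decoupling pattern SQS provides can be sketched locally with an in-process queue. This is a stand-in for the idea only; the comments note which real SQS operations each step corresponds to:

```python
import queue

# Local stand-in for an SQS queue: the producer and consumer never
# talk to each other directly, only to the queue between them.
q = queue.Queue()

def producer(orders):
    for order in orders:
        q.put(order)              # analogous to sqs.send_message

def consumer():
    processed = []
    while not q.empty():
        processed.append(q.get()) # analogous to sqs.receive_message
        q.task_done()             # analogous to deleting the message
    return processed

producer(["order-1", "order-2", "order-3"])
print(consumer())  # ['order-1', 'order-2', 'order-3']
```

Because messages wait in the queue, the consumer can be offline when the producer sends, and vice versa, which is the core benefit of the mediator.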
27. What is SimpleDB?
SimpleDB is a highly available, flexible NoSQL data store that supports querying and indexing of structured data, and is often used alongside S3 and EC2.
28. What is Amazon ElastiCache?
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud.
29. What is AWS Lambda?
AWS Lambda is a computing service offered by Amazon that runs your code in the AWS cloud without you having to provision or manage servers.
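The shape of a minimal Lambda function in Python is a handler that receives an event dict and a context object. Below it is invoked directly to show the contract; on AWS, the Lambda runtime calls it for you (the event fields are made-up examples):

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda-style handler: take an event, return a response."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Direct invocation to demonstrate the handler contract
print(lambda_handler({"name": "AWS"}, None))
```

On AWS you would upload this code, set `lambda_handler` as the handler, and let triggers such as API Gateway or S3 events supply the `event` argument.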
30. What is Amazon EMR?
Amazon EMR is a managed cluster platform that simplifies running big data frameworks on AWS. Its components include Apache Hadoop, Apache Spark, Apache Hive, and various others. They help investigate large amounts of data for data analytics and business intelligence workloads using open-source frameworks.
31. What is the difference between stopping and terminating an instance?
Both stopping and terminating are states in an EC2 instance:
- Stopping: As soon as an instance is stopped, it performs a normal shutdown and transitions to a stopped state. You can start the instance at a later time and all of its Amazon EBS volumes remain attached. While the instance is in a stopped state, no additional instance hours are incurred.
- Terminating: As soon as an instance is terminated, it performs a normal shutdown and transitions to the terminated state. The attached Amazon EBS volumes are deleted, save for the case when the volume’s deleteOnTermination attribute is set to false. As the instance itself is deleted, it is not possible to start the instance again at some later time.
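The difference in volume handling can be captured in a toy state model (this is not the AWS API, just an illustration of the rule above: stopping keeps EBS volumes attached, terminating deletes them unless `deleteOnTermination` is false):

```python
def stop(instance):
    """Stopping: state changes, all volumes stay attached."""
    instance["state"] = "stopped"
    return instance

def terminate(instance):
    """Terminating: state changes, volumes with
    delete_on_termination=True are removed."""
    instance["state"] = "terminated"
    instance["volumes"] = [v for v in instance["volumes"]
                           if not v["delete_on_termination"]]
    return instance

inst = {"state": "running",
        "volumes": [{"id": "vol-root", "delete_on_termination": True},
                    {"id": "vol-data", "delete_on_termination": False}]}

print(stop(dict(inst)))   # stopped; both volumes still listed
print(terminate(inst))    # terminated; only vol-data survives
```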
Advanced AWS Interview Questions
32. How will you use the processor state control feature available on the c4.8xlarge instance?
The processor state control has 2 states, namely:
- The C-state: represents the sleep state. Ranges from C0 to C6, where C6 is the deepest sleep state for a processor core.
- The P-state: represents the performance state. Ranges from P0 to P15, where P15 is the lowest possible frequency.
A processor has multiple cores, and each of them requires thermal headroom for gaining a boost in performance. Hence, the temperature needs to be kept at an optimal level so that the cores can perform at their highest.
When a core is put into the sleep state, the overall temperature of the processor drops, giving the remaining cores thermal headroom to perform better. Hence, a strategy of putting some cores to sleep while keeping others in a performance state can deliver an overall performance boost from the processor.
Instances like the c4.8xlarge allow customizing the C and P states for customizing the processor performance according to the workload.
33. Which instance type can be used for deploying a 4 node cluster of Hadoop in AWS?
While the c4.8xlarge instance is preferred for the master machine, the i2.xlarge instance fits the slave machines. Another way is to launch an Amazon EMR cluster, which configures the servers automatically.
Hence, you need not manually configure the instances and install a Hadoop cluster when using Amazon EMR. Simply dump the data to be processed in S3; EMR picks it up from there, processes it, and dumps the result back into S3.
34. Can you differentiate between a Spot instance and an On-Demand instance?
Both spot instances and on-demand instances are pricing models. A spot instance allows customers to purchase compute capacity with no upfront commitment. Moreover, the hourly rates for a spot instance are usually lower than what has been set for on-demand instances.
The bidding price for a spot instance is known as the spot price. It fluctuates based on the supply and demand for spot instances. In case the spot price gets higher than a customer’s maximum specified price, the EC2 instance will shut down automatically.
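The interruption behaviour can be modelled in a few lines. This is a simplified illustration (real spot interruption also depends on capacity, and prices here are made up): the instance survives while the fluctuating spot price stays at or below the customer's maximum price, and is reclaimed once it rises above it:

```python
def runs_until_interrupted(spot_prices, max_price):
    """Return how many pricing periods the instance survives before
    the spot price first exceeds the customer's maximum price."""
    for period, price in enumerate(spot_prices):
        if price > max_price:
            return period
    return len(spot_prices)

# Hypothetical hourly spot prices in USD
hourly_spot_prices = [0.03, 0.04, 0.05, 0.09, 0.04]
print(runs_until_interrupted(hourly_spot_prices, max_price=0.05))  # 3
```

In this toy run the instance lasts three hours: the fourth hour's price of 0.09 exceeds the 0.05 maximum, so it is reclaimed.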
35. What are some of the best practices to improve security in Amazon EC2?
The following are some of the best practices to improve security in Amazon EC2:
- Allow only trusted hosts or networks to access ports on your instance
- Control access to the AWS resources with AWS Identity and Access Management (IAM)
- Disable password-based logins for instances launched from the AMI
- Frequently review rules in the security groups
36. Is it possible to use Amazon S3 with EC2 instances? Please elaborate.
Yes, it is possible to use Amazon S3 with EC2 instances. It can be used for instances with root devices backed by local instance storage. Amazon provides an array of tools to load AMIs into Amazon S3 and to move them between Amazon S3 and Amazon EC2 instances.
With Amazon S3, AWS developers enjoy access to the same fast, highly reliable, inexpensive, and scalable data storage infrastructure that Amazon uses to operate its own global network of websites and services.
37. How will you speed up data transfer in Amazon Snowball?
Data transfer in Amazon Snowball can be enhanced by:
- Copying from different workstations to the same snowball
- Batching small files together, or transferring large files, to reduce the per-file encryption overhead
- Eliminating needless hops
- Performing multiple copy operations simultaneously
38. Can you explain the difference between Amazon RDS and Amazon DynamoDB?
Amazon RDS is a database management service for relational databases. It allows automating several relational database-related operations like backup, patching, and upgrading. The service deals with structured data only.
Amazon DynamoDB, on the other hand, is a NoSQL database service. Contrary to the Amazon RDS, it deals with unstructured data only.
Read our guide on NoSQL vs SQL to learn about the important differences between SQL and NoSQL databases.
39. Which AWS services suit the real-time analysis of eCommerce data?
DynamoDB is an appropriate choice for collecting eCommerce data, as such data is largely unstructured. Real-time analysis of the collected eCommerce data can be carried out using Amazon Redshift.
40. What happens to the backups and DB Snapshots if a DB instance is deleted?
While deleting a DB instance, there is an option for creating a final DB snapshot. It can be used later for restoring the database.
Amazon RDS retains this user-created final DB snapshot along with any other manually created DB snapshots after the instance is deleted. All automated backups, however, are deleted along with the instance.
41. How will you load data to Amazon Redshift from different data sources, such as Amazon EC2, DynamoDB, and Amazon RDS?
There are two ways of loading data to Amazon Redshift from different data sources, namely:
- Using the AWS Data Pipeline: Offers high performance, fault-tolerance, and a reliable way of loading data from a range of AWS data sources. It allows specifying the data source, data transformations, and then executing a pre-written import script for loading data
- Using the COPY command: Load data in parallel directly from Amazon DynamoDB, Amazon EMR, or any other SSH-enabled host
42. How does elasticity differ from scalability?
Elasticity is the ability of a system to handle an increase in workload by adding hardware resources when demand rises and to roll back those resources when demand subsides.
Scalability, on the other hand, is the ability of a system to increase the hardware resources for handling an increase in demand. It can be achieved by either increasing the hardware specs or increasing the number of processing nodes.
43. What is connection draining?
Connection draining is an ELB feature that continuously monitors the health of instances. It re-routes traffic away from instances that are due to be updated or have failed a health check, while allowing in-flight requests to those instances to complete before they are taken out of service.
44. Suppose a user has set up an Auto Scaling group but the group fails to launch a single instance for over 24 hours. In this scenario, what happens to Auto-scaling?
In such a case, Auto-scaling will suspend the scaling process. The Auto-scaling feature allows suspending and resuming one or many processes belonging to the Auto-scaling group.
The feature is extremely useful when a web application needs to be investigated for a configuration or some other issue.
45. How will you transfer an existing domain name registration to Amazon Route 53 without disrupting the extant web traffic?
- Get a list of DNS record data for the domain name. It is typically available in the form of a zone file that can be obtained from the extant DNS provider.
- After receiving the DNS record data, use the Route 53 Management Console or the simple web-services interface for creating a hosted zone for storing the DNS records for the domain name and continue the transfer process. Here, you can also include other non-essential steps such as updating nameservers for the domain name to the ones associated with the hosted zone.
- Contact the registrar with whom you have registered the domain name and then follow the transfer process. The DNS queries will start getting answered as soon as the registrar propagates the new name server delegations.
46. What are the ideal cases for using the Classic Load Balancer and the Application Load Balancer?
The Classic Load Balancer is the right option for simple load balancing of traffic across several EC2 instances.
On the contrary, the Application Load Balancer is suitable for container-based or microservices architecture where there is either a requirement for routing traffic to different services or carrying out load balancing across multiple ports on the same EC2 instance.
47. Can you explain how AWS Elastic Beanstalk applies updates?
Before updating the original instance, AWS Elastic Beanstalk readies a duplicate copy of the instance. Thereafter, it routes traffic to the duplicate instance so that the application remains available even if the update fails.
In case there is a failure in the update process, the AWS Elastic Beanstalk will switch back to the original instance using the very same duplicate copy it created before beginning the update process.
48. What happens if an application stops responding to requests in AWS Elastic Beanstalk?
Even if the underlying infrastructure appears healthy, Beanstalk can detect that the application isn't responding on the custom health check URL. It then logs the situation as an environment event, which can be examined in detail and acted upon.
AWS Elastic Beanstalk apps have a built-in system for avoiding underlying infrastructure failures. The Beanstalk uses the Auto Scaling feature to automatically launch a new instance in case an Amazon EC2 instance fails.
49. How is the AWS CloudFormation different from AWS OpsWorks?
Although both AWS CloudFormation and AWS OpsWorks provide support for application modeling, deployment, configuration, and management activities, the two differ in terms of the abstraction level and the areas of focus.
AWS CloudFormation is a building-block service that allows managing almost any AWS resource via a JSON-based domain-specific language. Without prescribing a particular model for development and operations, CloudFormation offers foundational capabilities for all of AWS.
With AWS CloudFormation, customers can define templates and then use them to provision and manage AWS resources, operating systems, and application code.
AWS OpsWorks, on the other hand, is a high-level service focusing on providing a highly reliable and productive DevOps experience for IT admins and ops-oriented developers. OpsWorks features a configuration management model and offers integrated experiences for activities like auto-scaling, automation, deployment, and monitoring.
Compared to CloudFormation, OpsWorks provides support for a smaller number of application-oriented AWS resource types, including Amazon CloudWatch metrics, EBS volumes, EC2 instances, and Elastic IPs.
50. What happens when one of the resources in a stack can’t be created successfully in AWS OpsWorks?
The automatic rollback on error feature is enabled when one of the resources in a stack can’t be created successfully in AWS OpsWorks. The feature results in the deletion of all the successfully created AWS resources until the point of the occurrence of the error.
Doing so ensures that no error-causing data is left behind and abides by the principle that stacks are either created completely or not at all.
The automatic rollback on error feature is useful especially in cases where one might unknowingly exceed the limit of the total number of Elastic IP addresses or does not have access to the EC2 AMI.
Focus on These AWS Interview Questions
That sums up our list of top AWS technical interview questions. These should give you a solid grounding in AWS, though you should read more. We have a collection of AWS tutorials to help you learn even more about the cloud computing service.
This list of AWS interview questions and answers is by no means exhaustive. That's why you should do a lot more googling and reading so that you can ace that interview. The interview questions on AWS could cover anything, as it is quite a broad topic, so make sure you have as wide an understanding of the subject as possible.
People are also reading:
- Best AWS Books
- Best AWS Courses
- Best AWS Certifications
- What is AWS?
- Difference between GCP, AWS, and Azure
- Comparison Between Google Cloud, AWS and Azure