6 Facts About Amazon Web Services That’ll Make You Consider It Without a Second Thought

In theoretical terms, Amazon Web Services (AWS) is a collection of remote computing services (also called web services) that together form a cloud computing platform, offered by the Internet giant Amazon.com. The most central and talked-about services on this platform are Amazon EC2 and Amazon S3.

Basically, AWS is a suite of hosting products that aims to eliminate the headaches of traditional hosting. Services like Dropbox and websites like Reddit run on AWS, which gives you an idea of the level of security and scalability it offers. In fact, being on AWS feels like living in a good neighbourhood.

AWS is not just for the Dropboxes and Reddits of the world, however. You and I can host a couple of servers on AWS, and do so very efficiently. We have recently been using AWS to host the backend for a business web application we built for the mortgage services industry, which typically sees high traffic during office hours and much less outside them.

For variable workloads like this, AWS makes perfect sense. Traffic is high during the day and drops off afterwards, allowing us to scale the amount of compute hosting the backend up and down, without being tied to a one-year contract or paying for capacity we do not need.

I have compiled some of our reasons for choosing AWS and explained them here. So let’s dive in and see why AWS is better than the competition for users of all types, large as well as small.

The free tier

The biggest reason many people do not use AWS is a lack of knowledge. EC2 is not like a traditional hosted solution: it is designed to bring servers online and offline as quickly as necessary. Because of this, many IT professionals have been wary of using EC2 (or the rest of the AWS suite) because of the cost associated with “playing” with it long enough to understand it.

The free tier, which provides enough credit to run a micro EC2 instance 24/7 for an entire month, solves this. It comes with S3 storage, EC2 computing hours, load balancer time and more. This gives developers the opportunity to test the AWS API in their software, which not only enhances their software but also brings them onto AWS, benefiting Amazon in the long run.
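To see why the free tier covers an always-on micro instance, a quick back-of-the-envelope check helps. The sketch below assumes the commonly cited allowance of 750 instance-hours per month; verify that figure against the current AWS terms:

```python
# Assumed free-tier allowance of EC2 instance-hours per month
# (check current AWS Free Tier terms -- this is not from the article).
FREE_TIER_HOURS = 750
HOURS_PER_DAY = 24

def max_hours_needed(days_in_month: int) -> int:
    """Hours one instance consumes running 24/7 for the whole month."""
    return days_in_month * HOURS_PER_DAY

# Even the longest month fits inside the allowance:
for days in (28, 30, 31):
    needed = max_hours_needed(days)
    print(days, needed, needed <= FREE_TIER_HOURS)
```

Even a 31-day month needs only 744 instance-hours, which is why a single micro instance can run around the clock without leaving the free tier.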


Performance and reliability

You cannot deny the speed of AWS. Elastic Block Storage is almost as fast as S3, but it offers different features. EC2 compute units offer Xeon-class performance at an hourly rate. Reliability is better than that of most private data centres in the world, and when there is a problem, the platform usually stays online, just with reduced capacity.

We tested this with the delightful Chaos Monkey, a tool that randomly turns off components in your cloud environment. You can then check whether your application is still working or whether it has been brought down entirely. In our case, the Chaos Monkey brought down our database and a web server. The database, which was an RDS service, immediately failed over to a standby database using the Multi-AZ feature, as promised by AWS. In the web server scenario, when one web server went down, another was launched by the Auto Scaling feature, so we concluded that AWS delivers the high availability it promises.
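The failover behaviour described above can be illustrated with a toy model. This is only a sketch of the idea: in a real Multi-AZ deployment, RDS performs the switch itself by repointing the instance's DNS endpoint, and the class and names below are invented for illustration:

```python
# Toy model of Multi-AZ failover: traffic moves to the standby when the
# primary fails. RDS does the real switch at the DNS level, not in code;
# the names here are placeholders.
class Database:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

def active_endpoint(primary: "Database", standby: "Database") -> "Database":
    """Serve traffic from the primary unless it is down."""
    return primary if primary.healthy else standby

primary = Database("db-us-east-1a")
standby = Database("db-us-east-1b")
print(active_endpoint(primary, standby).name)  # db-us-east-1a

primary.healthy = False  # the Chaos Monkey strikes
print(active_endpoint(primary, standby).name)  # db-us-east-1b
```

The point of the model is that the application keeps a single logical endpoint while the platform decides which physical instance answers, which is exactly what made the failover invisible to our users.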

In a traditional hosting environment, this would probably have meant downtime and 404 errors, with the websites simply going dark. In a truly cloud-hosted environment such as AWS, however, there is sufficient separation between processing and storage that sites can stay online and continue to generate revenue even with reduced functionality. We served our sites out of the Northern Virginia and Oregon regions and had no problems.

But the real performance power of AWS is in storage. The distributed nature of EBS and S3 delivers millions of input/output operations per second across all instances. Think of it as having a RAID array of SSDs attached to a single computer. Add incredible bandwidth, and you have a storage system that scales, with 99.9% reliability.
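It is worth translating that 99.9% figure into concrete terms. A short calculation shows how much downtime per year such a reliability level actually permits:

```python
# How much downtime does "99.9% reliable" allow over a year?
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability: float) -> float:
    """Minutes per year a system at this availability may be down."""
    return (1 - availability) * MINUTES_PER_YEAR

print(round(downtime_minutes(0.999)))  # 526 minutes, i.e. under 9 hours/year
```

Under nine hours of permitted downtime per year compares very favourably with the multi-day provisioning and maintenance windows typical of traditional hosting.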

Deployment speed

If you have ever had to provision a hosted web service, you know this pain very well. Traditional providers take between 48 and 96 hours to provision a server. Then you have to spend a few hours tweaking it and getting everything tested.

AWS reduces deployment time to minutes. If you use your own Amazon Machine Images, you can have a machine deployed and ready to accept connections in that short a period of time. This is important when, for example, you are running a promotion that generates tonnes of traffic at specific intervals, or you simply need the flexibility to handle demand when launching a new product.

CloudFormation templates are an AWS gift that can be used to deploy multiple environments at the click of a button, and can also be torn down at the click of a button when the requirement recedes.
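As a rough illustration of what a CloudFormation template body contains, here is a minimal template declaring a single EC2 instance, built as a Python dict and serialised to the JSON that CloudFormation accepts. The logical name, instance type and AMI ID are placeholders, not values from our deployment:

```python
import json

# Minimal sketch of a CloudFormation template with one EC2 instance.
# "WebServer" and the ImageId are invented placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",
                "ImageId": "ami-12345678",  # placeholder AMI ID
            },
        }
    },
}

# This JSON string is what you would hand to CloudFormation as the template body.
print(json.dumps(template, indent=2))
```

Because the whole environment is described declaratively like this, creating a second environment or tearing one down is a single stack operation rather than a manual server-by-server process.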


Security

Access to AWS resources can be restricted using Identity and Access Management (IAM); using IAM roles, we can define exactly which actions users are permitted to perform, which greatly reduces the scope for bad practices.
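As an example of the kind of restriction IAM makes possible, here is a sketch of a policy document granting read-only access to a single S3 bucket. The bucket name is a placeholder; the structure follows the standard IAM policy grammar:

```python
import json

# Sketch of an IAM policy allowing only read access to one S3 bucket.
# "example-bucket" is a placeholder name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",      # the bucket itself
                "arn:aws:s3:::example-bucket/*",    # the objects inside it
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A user or role attached to this policy can list and read objects in that one bucket and nothing else, which is exactly the least-privilege practice the text describes.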

AWS also provides VPC (Virtual Private Cloud), which can be used to host our services on a private network that is not accessible from the Internet but can still communicate with resources on the same network. This keeps those resources out of reach of malicious Internet users.

Resources hosted on the private network can be accessed using Amazon's VPN offerings or an open-source alternative such as OpenVPN.


Flexibility

The most important feature of AWS is its flexibility. The services work together with your application to gauge demand automatically and handle it accordingly.

Combined with the fantastic API and the Amazon Machine Images you create, you can have a fully customised solution that provisions a server instance in less than 10 minutes and is ready to accept connections as soon as it comes online. You can then quickly shut instances down when they are no longer needed, making server management a thing of the past.
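The scale-out/scale-in decision behind this can be sketched as a simple threshold rule. The thresholds and counts below are invented for illustration; in practice, the equivalent policy is configured in AWS Auto Scaling rather than written in application code:

```python
# Toy threshold rule for deciding how many instances to run.
# All numbers are illustrative, not real Auto Scaling defaults.
def desired_instances(current: int, cpu_percent: float,
                      scale_out_at: float = 70.0,
                      scale_in_at: float = 30.0,
                      minimum: int = 1) -> int:
    """Add an instance under heavy load, remove one when load is light."""
    if cpu_percent > scale_out_at:
        return current + 1
    if cpu_percent < scale_in_at and current > minimum:
        return current - 1
    return current

print(desired_instances(2, 85.0))  # busy: grow to 3
print(desired_instances(3, 10.0))  # quiet: shrink to 2
print(desired_instances(1, 10.0))  # already at the minimum: stay at 1
```

The floor on the instance count is what keeps the service available even at night, while the upper rule is what absorbs the office-hours peak described earlier.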

Purchase prices

Amazon took a refreshing approach to pricing hosting when it launched AWS. Each service is “à la carte”, meaning you pay only for what you use. This makes a lot of sense for server infrastructure, since traffic tends to be bursty, especially the larger the site.

Traditional hardware, for the most part, sits underutilised for 90% of its life cycle. AWS deals with this problem by keeping costs low during the slow times.
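A back-of-the-envelope comparison shows why pay-per-use wins for bursty workloads. The rates below are invented for illustration and are not actual AWS prices:

```python
# Hypothetical rates for illustration only -- not actual AWS prices.
HOURLY_RATE = 0.10    # assumed on-demand cost per instance-hour
FLAT_MONTHLY = 70.00  # assumed fixed monthly cost of a traditional server

def on_demand_cost(hours_used: float) -> float:
    """Pay-per-use cost for the hours actually consumed."""
    return hours_used * HOURLY_RATE

# A backend that is busy ~10 hours a day, 22 working days a month:
busy_hours = 10 * 22
print(round(on_demand_cost(busy_hours), 2))  # 22.0 -- well under the flat rate
```

Under these assumed rates, paying only for office-hours usage costs less than a third of a fixed monthly server, which is the economics driving the à la carte model.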

If you like the blog and want to share your feedback, feel free to post in the comment section.
