Amazon Web Services - An Introduction

Sections

Introduction to AWS

  • Data centres distributed worldwide
  • On-demand delivery of IT resources
  • Shared and dedicated resources (isolated at hypervisor level)
  • Benefits:
    • economies of scale
    • accounts isolated at the hypervisor level
    • pay-as-you-go pricing
    • no upfront cost
    • reduced maintenance and admin costs (no need to worry about the capital expenditure of owning our own infrastructure)
    • organised into product categories (compute, storage, database, machine learning, etc.)

The AWS global infrastructure is massive. It is divided into geographic regions, and those regions are in turn divided into separate Availability Zones.
The AWS GovCloud is located on the U.S. West Coast and is specifically for US Government organisations.
There is also a secret region specifically for US Government intelligence organisations; the CIA is also a customer of AWS.
The choice of region is usually made to:

  • optimise latency,
  • minimise costs, or
  • address regulatory requirements.

Each Region is divided into at least two Availability Zones that are physically isolated from each other.
This provides business continuity: if we distribute our infrastructure across multiple Availability Zones and one zone goes down, the infrastructure in the other zone will continue to operate.
The largest region, US East (N. Virginia), has six Availability Zones.
The Availability Zones are connected to each other with high-speed private fibre-optic networking.
There are over 100 edge locations, which are used by the CloudFront CDN (Content Delivery Network). CloudFront caches content at edge locations for high-performance delivery and also provides DDoS protection. Content is distributed to edge locations across the globe for high-speed, low-latency delivery to our end users no matter where they are located.

AWS Management Console:

  • Web-based user interface for AWS
  • Requires an AWS account
  • Monitor costs
  • AWS Console Mobile App
    One can access the AWS Management Console simply by clicking the My Account menu on the AWS website and then selecting AWS Management Console.

We can also access AWS resources through many Software Development Kits and Command Line Interfaces:

  • Software Development Kits
    • Create applications that use AWS services as a back end.
    • SDKs for JavaScript, Node.js, Java, Python, .NET, PHP, Ruby, Go, and C++
    • Mobile SDKs for Android, iOS, React Native, Unity, and Xamarin
    • The Application Programming Interface enables access to AWS using HTTP calls.
  • Command line Interface
    • Control multiple AWS services from the command line and automate them through scripts.

AWS Websites

Getting started with AWS

  • Go to aws.amazon.com
  • Click on “Sign Up”
  • Complete the sign up process
  • Click on “My Account”
  • Select “AWS Management Console”

Introduction to Storage Services

Cloud Computing Models

  • Infrastructure as a Service (IaaS)
    • Contains the basic building blocks for Cloud IT
    • Examples - VPC, EC2, EBS
  • Platform as a Service (PaaS)
    • AWS manages the underlying infrastructure (usually hardware and operating systems)
    • Examples - RDS, EMR, Elasticsearch
  • Software as a Service (SaaS)
    • A complete product that is run and managed by the service provider. Mostly refers to end-user applications.
    • Examples - web-based email, Office 365, salesforce.com

Serverless Computing

  • Allows you to build and run applications and services without thinking about servers.
  • Also referred to as Function-as-a-service (FaaS) or Abstracted services
  • Examples
    • Amazon Simple Storage Service (S3)
    • AWS Lambda
    • Amazon DynamoDB
    • Amazon SNS

In S3, we create buckets and the objects we upload go into those buckets. We don't need to worry about what is behind it, be it Linux operating systems, hard drives, or file servers.
AWS Lambda is a service where we can run code in the cloud without servers.
DynamoDB is a NoSQL database.
Amazon SNS is used to send notifications.

AWS Storage Services

Amazon Simple Storage Service (S3 for short) is designed to store and access any data over the Internet. It's a serverless service, so we don't need to worry about what is behind it. We simply create a bucket, upload objects to it, and the bucket grows as needed. The size of a bucket is theoretically unlimited, and AWS looks after everything for us.
Amazon Glacier is the cheapest storage option on AWS and is used for long-term archiving of data. It is a serverless service like Amazon S3, but data is not as readily accessible, so it should be used only for content that is to be archived.
We can also set up a lifecycle rule that automatically migrates old data from Amazon S3 over to Glacier for long-term archiving.
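The lifecycle rule described above maps to a small configuration document. Below is a minimal sketch of one, expressed as a Python dict; the rule name, prefix, and day counts are illustrative assumptions, not values from these notes.

```python
# Hypothetical S3 lifecycle rule: transition objects to Glacier after
# 90 days and expire them after a year. All concrete values are made up.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-old-objects",       # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # apply to every object
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3 installed and credentials configured, the rule could be
# applied with something like:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle_configuration)
rule = lifecycle_configuration["Rules"][0]
print(rule["Transitions"][0]["StorageClass"])  # -> GLACIER
```

The same rule can also be created by clicking through the bucket's Management tab in the console.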
Amazon Elastic Block Store (EBS) is highly available, low-latency block storage that is specifically for attaching to servers launched with the EC2 service. It is similar to attaching a hard drive to a computer at home, and it works in the same manner: it is block device storage.
Amazon Elastic File System (EFS) is network-attached storage specifically for Amazon EC2. Because it is network attached, multiple servers can access one data source, similar to how a NAS on a home network can be accessed by multiple computers.
AWS Storage Gateway enables hybrid storage between on-premises environments and the AWS Cloud. It provides low-latency performance by caching frequently used data on premises while storing less frequently used data in AWS cloud storage.
The AWS Snowball device is a portable, petabyte-scale data storage device that can be used to migrate large amounts of data from on-premises environments over to the AWS Cloud. We simply copy the data onto the Snowball device and send it to AWS, who then upload that data to AWS storage services for us.

Storage Example Scenario

Let's consider the AWS Cloud and create a Virtual Private Cloud (VPC) inside it. The VPC is an impenetrable fortress against attack; no one can enter this space unless we allow it. Next we launch two servers that need access to data, so we attach an EBS volume to each server. But what if we need to make the same data available to both servers?
When we have a similar requirement at home, a hard drive that must be accessed by multiple devices, we attach a NAS (Network Attached Storage) to our network and then set up a mount target on each desktop's operating system for that network-attached storage.
Similarly, in AWS we can use EFS mount targets to enable multiple servers to access the one data source.
Storage Introduction
If we want to store data like we do in Google Drive, with an automated solution that over time migrates the data to something lower-cost and longer-term for archiving, Amazon S3 and Glacier come into the picture.
We can use S3 to create a bucket to store and delete objects. We can also set up a lifecycle rule on that bucket so that, as objects age, they are migrated over to an Amazon Glacier vault. The data will still be accessible, just not as readily accessible as in S3.
S3 sits outside the VPC, and to allow traffic to flow between the VPC and S3 specifically, a VPC endpoint is used.

Hybrid Storage Example

In hybrid storage, we have both on-site storage inside a corporate data centre and cloud storage in Amazon S3.
This is great as a disaster recovery solution because it provides high-speed access to our data in the corporate data centre while, at the same time, we take advantage of the durability and availability of Amazon S3.
The first problem we will encounter is that corporate data centres can hold petabytes of data, and transferring that over the internet to the AWS Cloud is not practical. So AWS can send us a Snowball device: we copy our data onto it and send it back to AWS, who upload it to the AWS Cloud.
Further, we need to make sure that the data in our corporate data centre stays in sync with the AWS Cloud, and here AWS Storage Gateway comes into the picture. It will orchestrate all of that for us.
If we have a high-speed link between our corporate data centre and the AWS Cloud, which we can have with the AWS Direct Connect service, we can use AWS Storage Gateway to orchestrate and manage all of that.
So basically it stores all the frequently accessed data on site and also stores all of the data in an Amazon S3 bucket, with Amazon S3 acting as the disaster recovery solution.
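The caching behaviour described above can be sketched as a toy in-memory model. This is only an illustration of the idea (local cache first, cloud copy as the system of record), not the gateway's real implementation; the class name and keys are invented.

```python
class StorageGatewaySketch:
    """Toy model of a cached-mode storage gateway: writes land on the
    on-premises cache and are mirrored to S3; reads are served from the
    cache when possible and fall back to S3 on a miss."""
    def __init__(self):
        self.cache = {}   # on-premises cache disk (frequently used data)
        self.s3 = {}      # Amazon S3 (complete disaster-recovery copy)

    def write(self, key, data):
        self.cache[key] = data
        self.s3[key] = data          # in reality an asynchronous upload

    def read(self, key):
        if key in self.cache:
            return self.cache[key], "on-premises"
        data = self.s3[key]          # slow path: fetch from the cloud
        self.cache[key] = data       # keep a local copy for next time
        return data, "s3"

gw = StorageGatewaySketch()
gw.write("backup.tar", b"data")
_, where = gw.read("backup.tar")
print(where)          # -> on-premises
gw.cache.clear()      # simulate the cache evicting cold data
_, where = gw.read("backup.tar")
print(where)          # -> s3 (and the object is cached again)
```

Every object survives a loss of the on-premises cache because the S3 copy is complete, which is exactly the disaster recovery property the notes describe.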

Using S3 Service

Steps:

  • Create an S3 bucket
  • Upload files to the bucket
  • Download files from the bucket
  • Empty and delete the bucket.

How do we do it?

  • Sign in to the AWS Management Console.
  • Go to Services.
  • Click S3 in the Storage category.
  • The easiest way is to search for it if it isn't visible.
  • On the S3 home page, click the Create Bucket button.
  • Give the bucket a name. It has to be unique across AWS.
  • Click Next and then Next again. We are not dealing with versioning as of now.
  • We are creating a private bucket, so only we will be able to access it.
  • Verify the specifications. US East is the largest region and has almost all the services at the cheapest rates.
  • Click Finish and the bucket will be created.
  • The bucket will then appear in our bucket list (not your personal bucket list, buddy — check your AWS S3 console!).
  • Click on the bucket.
  • When we move in, we see that our bucket is empty.
  • A bucket is simply a repository to put objects into. These could be files, videos, or even whole directories.
  • Click Upload and drag and drop a directory (or whichever file you wish to upload).
  • After selecting, we check the specifications; upon review, the upload will start.
  • If we open the uploaded folder, we can see the files there, and from there we can download them.
  • If we try to access an object in the browser via its link, we will get an error because it is a private object.
  • Next we go to the S3 bucket home page.
  • Delete the uploaded folder.
  • Finally we go back to the S3 management console and delete the created S3 bucket.
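The console steps above can be summarised as a tiny in-memory stand-in. With boto3 these steps would map to `create_bucket`, `put_object`, `get_object`, `delete_object`, and `delete_bucket`; the helper functions and the bucket/object names below are hypothetical.

```python
# In-memory stand-in for the walkthrough: create a bucket, upload an
# object, read it back, then empty and delete the bucket.
buckets = {}

def create_bucket(name):
    if name in buckets:
        raise ValueError("bucket names must be unique across AWS")
    buckets[name] = {}

def put_object(bucket, key, body):
    buckets[bucket][key] = body

def get_object(bucket, key):
    return buckets[bucket][key]

def delete_bucket(name):
    if buckets[name]:
        raise ValueError("bucket must be emptied before deletion")
    del buckets[name]

create_bucket("my-demo-bucket-2024")                       # unique name
put_object("my-demo-bucket-2024", "photos/cat.jpg", b"jpeg-bytes")
assert get_object("my-demo-bucket-2024", "photos/cat.jpg") == b"jpeg-bytes"
buckets["my-demo-bucket-2024"].clear()                     # empty it first
delete_bucket("my-demo-bucket-2024")
print(len(buckets))  # -> 0
```

Note the same ordering constraint as the console: the bucket must be emptied before it can be deleted.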

Introduction to Database Services

The Relational Database Service (RDS) is a fully managed database service that makes it easy to launch database servers in the AWS Cloud and scale them when required.
The RDS service can launch servers for MySQL, including variations of the MySQL database engine, viz. MariaDB and Amazon's enterprise version of MySQL (Amazon Aurora), as well as standard PostgreSQL, Amazon Aurora PostgreSQL, Microsoft SQL Server, and Oracle.
Amazon DynamoDB is a NoSQL, serverless, low-latency database service.
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse based upon PostgreSQL, and it is a perfect database solution if we are looking for big data storage.
ElastiCache is an in-memory data store or cache in the cloud which allows us to retrieve information from fast, fully managed in-memory caches instead of relying on slower disk-based databases.
AWS Database Migration Service orchestrates the migration of databases over to AWS easily and securely. It can also migrate data from one database engine type to a totally different one; for example, we can use it to migrate from Oracle to Amazon Aurora.
Amazon Neptune is a fast, reliable, fully managed graph database service. It has a purpose-built, high-performance graph database engine optimised for storing billions of relationships and querying the graph with millisecond latency.

Database Scenario

Let's say we have an on-site Oracle relational database and want to migrate it over to Amazon Aurora. First we launch an RDS instance in our Virtual Private Cloud. Then we use the Database Migration Service to migrate the data in the on-site Oracle database over to the target RDS Amazon Aurora server. Now let's suppose that the new database is becoming overwhelmed with requests for frequently accessed data.
A Hybrid Database Scenario
ElastiCache can help: we put an ElastiCache node in front of the RDS instance and it caches our frequently accessed data. Because that data is delivered from memory and not from a hard drive, it is delivered with low latency, and at the same time the load on our database is massively reduced. Any request for data that is not in ElastiCache is simply forwarded to the RDS instance. That way we have high-speed access to both frequently accessed and less frequently accessed data.
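The pattern just described is usually called cache-aside. Here is a minimal sketch of it with plain dicts standing in for the ElastiCache node and the RDS instance; the key and record are invented for illustration.

```python
database = {"user:42": {"name": "Asha"}}   # stands in for the RDS instance
cache = {}                                  # stands in for the ElastiCache node

def get_user(key):
    """Cache-aside read: try the cache first, fall back to the database,
    then populate the cache so the next read is served from memory."""
    if key in cache:
        return cache[key], "cache"
    value = database[key]                   # slow, disk-based path
    cache[key] = value                      # warm the cache
    return value, "database"

_, first = get_user("user:42")
_, second = get_user("user:42")
print(first, second)  # -> database cache
```

The first read hits the database; every subsequent read of the same key is served from memory, which is exactly how the ElastiCache node shields the RDS instance.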

Database Hands on Example

  • Login to AWS Management Console
  • Go to RDS in database services
  • If we have any existing database instances, it will take us to the dashboard; otherwise it will take us to the welcome screen.
  • Click Launch a DB Instance.
  • We will check the free tier eligibility checkbox to make sure that we don't get a bill at the end of the month.
  • We have MySQL Community Edition and will select it.
  • We will make sure that the checkbox labelled "Only show options that are eligible for RDS Free Tier" is checked.
  • We will select the db.t2.micro DB instance class.
  • We will set the DB instance identifier; the name should be unique among all of our personal DB instances.
  • Next we will create a master username and master password.
  • We click Next Step.
  • We will not change the advanced settings as of now.
  • We have an option to create a database on launch, and we will create one. We are not creating backups, so we will set the backup retention period to 0 days. No need to worry about monitoring and maintenance as of now.
  • Then we will click Launch DB Instance.
  • Then we will view the DB instance.
  • After a few minutes, we can see the RDS instance running.
  • We can connect to it through its endpoint.
  • We will copy the endpoint.
  • Next we will install MySQL Workbench locally.
  • There we will set a connection name.
  • The hostname shall be the endpoint.
  • We will use the master username.
  • Then we will connect, and after that we will enter our password in the popup.
  • After that we can see our database.
  • We can see the details by opening Object Info.
  • Further, we will install MySQL Shell locally.
  • Connect to the shell using the command:

\connect username@endpoint:port

and after that we will enter the password when the prompt appears.

  • Next, upon login, we need to get into SQL mode, so we will type in the command

\sql

  • Now we are in SQL mode. We need to put a semicolon at the end of each command.
  • Next we will enter the following command to see the databases:

show databases;

  • We will see the test database that we set up from the console.
  • This will be an empty database, so we will switch to the system database mysql using the following command:

\use mysql

  • This will set our schema to mysql.
  • Then we will type the following command to see the tables:

show tables

  • Here we will not put a semicolon, or else it will hang.
  • Now we will jump back to the AWS Management Console and delete the instance so that we don't get a bill at the end of the month.
  • We will go to Actions, select Delete, mark "create final snapshot" as No, acknowledge it, and then delete it.

Introduction to Compute and Networking Services

Amazon Elastic Compute Cloud (EC2) provides virtual servers in the AWS Cloud. We can launch one or thousands of instances simultaneously and only pay for what we use. There is a broad range of instance types with varying compute and memory capabilities, optimised for different use cases.
Amazon EC2 Auto Scaling allows us to dynamically scale our Amazon EC2 capacity up or down automatically according to conditions we define, by launching or terminating instances based on demand. It can also perform health checks on those instances and replace them when they become unhealthy.
Amazon Lightsail is the easiest way to launch virtual servers running applications in the AWS Cloud. AWS will provision everything we need, including DNS management and storage management, and get us up and running as quickly as possible.
Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service for Docker containers, which run on a managed cluster of EC2 instances.
AWS Lambda is a serverless service that lets us run code in the AWS Cloud without having to worry about provisioning or managing servers. We just upload the code, and AWS takes care of everything.

Understanding the Compute Scenario

Suppose we launch an EC2 instance inside our VPC. If the server is getting overwhelmed, we can manually add EC2 instances and terminate them when the load subsides (horizontal scaling). This doesn't seem to be a proper solution, though: there must always be at least one EC2 instance running, and with many endpoints in this architecture, if an endpoint is down (i.e., the EC2 instance for that endpoint has been removed), links to it won't work.

So in this case we can take the help of Elastic Load Balancing, which receives traffic from end users and distributes it to an EC2 instance that is available. If another request comes, it is directed to another available instance, and in this way the load is balanced. If any EC2 instance becomes unhealthy, it will fail its health check and the load balancer won't route traffic to that server.

Also, if we have services where the load is intermittent, say demand goes up and down within an hour or two, manual scaling is not a practical solution. Here comes the Auto Scaling service, which launches new EC2 instances when traffic goes up and terminates them when traffic goes down. It can also perform health checks and, if for any reason a server becomes unhealthy, replace it with a healthy one.
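The load-balancing behaviour above can be sketched as round-robin routing that skips unhealthy targets. This is a toy model of the idea, not ELB's actual algorithm; the instance ids are invented.

```python
import itertools

class LoadBalancerSketch:
    """Round-robin across registered instances, skipping any that fail
    their health check, like an ELB bypassing unhealthy targets."""
    def __init__(self, instances):
        self.instances = instances              # instance-id -> healthy?
        self._cycle = itertools.cycle(list(instances))

    def route(self):
        for _ in range(len(self.instances)):
            target = next(self._cycle)
            if self.instances[target]:          # health check passed
                return target
        raise RuntimeError("no healthy instances to route to")

elb = LoadBalancerSketch({"i-a": True, "i-b": False, "i-c": True})
targets = [elb.route() for _ in range(4)]
print(targets)  # only i-a and i-c ever receive traffic
```

Marking `"i-b"` healthy again would put it back into the rotation on the next pass, mirroring how an instance rejoins the pool once its health checks recover.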

Networking and Content Delivery

Amazon CloudFront is a global Content Delivery Network (CDN) that securely delivers our frequently requested content to over 100 edge locations across the globe, achieving low latency and high transfer speeds for our end users. It also provides protection against DDoS attacks.
Amazon Virtual Private Cloud (VPC) lets us provision a logically isolated section of AWS, and we can launch AWS resources in that VPC that we ourselves define. The VPC is our personal space within the AWS Cloud, and no one can enter it unless we allow them to.
AWS Direct Connect is a high-speed, dedicated network connection to AWS. Enterprises can use it to establish a private connection to the AWS Cloud in situations where a standard internet connection won't be adequate.
AWS Elastic Load Balancing (ELB) automatically distributes incoming traffic for our application across multiple EC2 instances, and also across multiple Availability Zones, so if one Availability Zone goes down, traffic still goes to the other Availability Zones and our application continues to respond to requests. It allows us to achieve high availability and fault tolerance by distributing traffic evenly amongst those instances, and it can also bypass unhealthy instances.
Amazon Route 53 is a highly available and scalable Domain Name System (DNS). It can handle traffic for our domain name and direct that traffic to our back-end web servers.
Amazon API Gateway is a fully managed, serverless service that makes it easy for developers to create and deploy secure application programming interfaces (APIs) at any scale, and it can handle all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls.

A Networking Scenario

Let's consider a scenario where we have placed EC2 instances inside an AWS VPC across two Availability Zones to be on the safe side: if there were only one Availability Zone and it went down, our traffic would have nowhere to go and our application would stop responding to requests. Using an Elastic Load Balancer, we can distribute traffic across the EC2 instances in multiple Availability Zones.

A Networking Architecture

Suppose we have a lot of static content in our application that isn't changing much; it won't be efficient to keep delivering it from EC2. We can use the CloudFront Content Delivery Network to assist with this. CloudFront will cache the content and distribute it across hundreds of edge locations around the globe, so whenever our end users request that content it is delivered to them with really high speed and low latency. At the same time, it takes load off our EC2 instances and significantly reduces our costs. EC2 will continue to receive requests for dynamically changing content.

Further, the DNS name for the CloudFront distribution will be very complicated and won't mean anything to an end user. Ideally we want users to type in our domain name and have it forwarded to CloudFront. Route 53 does exactly that.

Let's say we work for a large enterprise that has its own corporate data centre and we want employees to have fast access to our AWS resources. In this case we use AWS Direct Connect, a private high-speed fibre service that provides a very fast network connection between our corporate data centre and AWS.

A Hands-on

  • Go to the AWS Management Console.
  • Go to Services -> Compute -> EC2.
  • This takes us to the EC2 dashboard.
  • Launch an EC2 instance from an Amazon Machine Image (AMI). We can select an AMI provided by AWS or one from the AWS user community.
  • We will search for a WordPress AMI and select the t2.micro instance; it's free within certain limits.
  • We will enable Auto-assign Public IP.
  • With the remaining default settings, we will review and launch.
  • We will proceed without a key pair for now.
  • Then we will launch it.
  • Then we will view it, and after a few minutes the status will change from pending to running.
  • We can then see the public IP that has been created.
  • If we go to that address in a browser, we will see the blog.
  • The people at Bitnami have built the AMI so that it creates a username and password and embeds them in the logs.
  • So we will go to the console -> Actions -> Instance Settings -> Get System Log.
  • We will find the Bitnami password there.
  • Then we will hit the public IP in the browser followed by a forward slash and the text admin.
  • We will enter "user" as the username and paste in the password that we got.
  • Upon pressing Enter, we will reach our WordPress admin dashboard.
  • Here we can do whatever we want.
  • Now we will clean up.
  • Go to Actions -> Instance State -> Terminate.
  • This will terminate the instance.

Introduction to AWS Management Services

  • Provisioning
  • Monitoring and Logging
  • Operations Management
  • Configuration Management

CloudFormation allows us to use a text file to define our infrastructure and to deploy resources on AWS. This allows us to define our infrastructure as code, and we can manage it with the same version control tools that we use to manage our code.
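To make "infrastructure as code" concrete, here is a minimal CloudFormation-style template expressed as a Python dict. The resource name, instance type, and AMI id are placeholders, and serialising it to JSON shows the text file that would be checked into version control.

```python
import json

# Skeleton of a CloudFormation template describing one EC2 instance.
# "WebServer" and the property values are illustrative placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single EC2 instance defined as code",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",
                "ImageId": "ami-00000000",   # placeholder AMI id
            },
        }
    },
}

# This JSON document is what CloudFormation would actually consume.
print(json.dumps(template, indent=2))
```

Because the template is plain text, diffs, reviews, and rollbacks of infrastructure changes work the same way they do for application code.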
AWS Service Catalog allows enterprises to catalogue resources that can be deployed on the AWS Cloud. This allows an enterprise to achieve common governance and compliance for its IT resources by clearly defining what is allowed to be deployed on the AWS Cloud.
Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications deployed on AWS. It can be used to trigger scaling operations or to provide insight into our deployed resources.
AWS Systems Manager provides a unified user interface that allows us to view operational data from multiple AWS services and to automate tasks across our AWS resources, which helps to shorten the time to detect and resolve operational problems.
AWS CloudTrail monitors and logs AWS account activity, including actions taken through the

  • AWS Management Console,
  • the AWS SDKs,
  • CLI tools,
  • and other AWS tools.
    This greatly simplifies security analysis of the activity of users of our AWS account.
    AWS Config enables us to assess, audit, and evaluate the configurations of our AWS resources. This simplifies
  • compliance auditing,
  • security analysis,
  • change management and control, and
  • operational troubleshooting.
    AWS OpsWorks provides managed instances of Chef and Puppet, which can be used to configure and automate the deployment of AWS resources.
    AWS Trusted Advisor is an online expert system that can analyse our AWS account and the resources inside it, and then advise us on how to achieve high security and the best performance from those resources.

A Management Scenario

We will use the Billing and Cost Management console and the CloudWatch service to create a billing alert that will notify us via the Simple Notification Service when our account has exceeded a budgeted amount. Let's see how to do it.
A Cost Management Scenario

  • To enable billing alerts, we go to the Services menu on our account dashboard and open the Billing dashboard.
  • On the left-hand side, we go to Preferences, check the Receive Billing Alerts box if it isn't already checked, and then save the preferences.
  • Then we go back to the console.
  • We will go to Services.
  • We will go to Management Services.
  • We will go to CloudWatch.
  • Jump to Alarms.
  • Click Create Alarm.
  • Select the metric Total Estimated Charge from the popup.
  • Then we select Next, select the currency, and click Next.
  • We give the metric a name and a description.
  • We need to set the point at which we will receive the alarm, say 10 dollars.
  • Scroll to Actions, create a new notification list, add an email, and create the alarm.
  • This creates an SNS topic. We will see a popup asking us to verify the email; we have to confirm the email address to receive the alert.
  • Go to Services and then to Simple Notification Service, and we can see the topic that has been created.
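The alarm built in the console above corresponds to a set of CloudWatch `PutMetricAlarm` parameters. Below is a sketch of what they might look like; the alarm name, period, and topic ARN are assumptions for illustration, and the actual call (commented out) would need boto3 and credentials.

```python
# Hypothetical parameters for a $10 billing alarm on the EstimatedCharges
# metric. The SNS topic ARN and account id are placeholders.
billing_alarm = {
    "AlarmName": "monthly-bill-over-10-usd",
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,                 # billing metrics update a few times a day
    "EvaluationPeriods": 1,
    "Threshold": 10.0,               # alert once estimated charges pass $10
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
}

# With boto3 configured, the console clicks reduce to one call:
#   boto3.client("cloudwatch").put_metric_alarm(**billing_alarm)
print(billing_alarm["Threshold"])  # -> 10.0
```

Raising or lowering the budget later is just a change to `Threshold`, which is easier to audit than re-clicking through the console.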

Introduction to Application Services

  • Step Functions make it easy to coordinate the components of distributed applications and microservices using a visual workflow.
  • Amazon Simple Workflow Service (SWF) works in a similar way to Step Functions, coordinating multiple components of a business process. For new applications, it is recommended to use Step Functions rather than SWF.
  • Simple Notification Service (SNS) is a flexible, fully managed pub/sub messaging service. We create topics and users subscribe to a topic; when we publish a message to the topic, the users who have subscribed to it receive that message. It can also be used for push notifications to mobile devices.
  • Amazon Simple Queue Service (SQS) is a fully managed message queuing service which makes it easy to decouple our applications from demand. In simple words, it allows messages to build up in a queue until the server that processes those messages can catch up with demand.
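The SNS topic model in the list above can be sketched in a few lines: subscribers register a delivery callback, and every published message fans out to all of them. The class name and message text are invented for illustration.

```python
class TopicSketch:
    """Minimal pub/sub: each subscriber's callback receives every
    message published to the topic, like SNS fan-out delivery."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for deliver in self.subscribers:
            deliver(message)

inbox_a, inbox_b = [], []   # two subscribers' inboxes
topic = TopicSketch()
topic.subscribe(inbox_a.append)
topic.subscribe(inbox_b.append)
topic.publish("bill exceeded $10")
print(inbox_a, inbox_b)     # both subscribers got the same message
```

In real SNS a subscription endpoint could be an email address, an SMS number, an SQS queue, or a Lambda function rather than a Python callback.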

Process Decoupling Example

  • If average demand exceeds processing capacity, the queue will grow indefinitely.
  • SQS can provide CloudWatch metrics that can be used with Auto Scaling.

Consider an application running on an Auto Scaling group of EC2 instances; it is a processing server, handling messages as they come in. As demand increases, the Auto Scaling group launches new instances to cope with the new load, but it takes 5-10 minutes to spin up new instances. So during a rapid increase, with SQS we can let the messages come in and build up until our Auto Scaling group of EC2 instances can get to the queue and empty it out.

There is a possibility that many instances become unhealthy at once, leading to messages building up indefinitely, or the message count might spike because of a faulty update. For such situations we can set up a CloudWatch metric that alerts us with an SNS email notification that our SQS queue is continuing to grow and we need to investigate further.

We can also use a CloudWatch metric to inform Auto Scaling that the queue is getting big and new instances need to be started, and likewise to terminate instances when the queue shrinks again.
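The decoupling described above can be sketched with a plain queue: a burst of producers outpaces a slow consumer, and the backlog simply waits instead of being dropped. The message format and capacity numbers are invented.

```python
from collections import deque

queue = deque()                      # stands in for the SQS queue

def produce(n):
    """A burst of incoming demand: n messages arrive at once."""
    for i in range(n):
        queue.append(f"msg-{i}")

def consume(capacity):
    """Process at most `capacity` messages this cycle; the rest stay
    queued until autoscaled instances catch up."""
    done = []
    while queue and len(done) < capacity:
        done.append(queue.popleft())
    return done

produce(10)             # demand spikes
processed = consume(4)  # the current fleet only handles 4 this cycle
print(len(processed), len(queue))  # -> 4 6
```

The queue length (`6` here) is exactly the kind of number a CloudWatch metric would report to trigger scaling or an SNS alert.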

Some Customer Engagement Services

  • Amazon Connect is a self-service contact centre in the AWS Cloud, delivered with a pay-as-you-go pricing model. It has a drag-and-drop graphical user interface which allows us to create process flows that define customer interactions without any coding at all.
  • Amazon Pinpoint allows us to send emails, SMS, and mobile push messages for targeted marketing campaigns, as well as direct messages to our individual customers.
  • Amazon Simple Email Service (SES) is a cloud-based bulk email sending service.

Hands on

  • Go to the AWS Management Console and from there go to Services.
  • We click on SES in the messaging services.
  • First we will verify a new email address.
  • We will receive a link at that address; clicking it completes the verification.
  • It will then appear as verified on the console.
  • We will send a test email to check that it is working.
  • Since there is a possibility that we might send spam in bulk, SES initially only allows us to send mail to verified addresses. If we want to change that, we can request a sending limit increase.
  • To do so, we go to Sending Statistics under Email Sending on the SES dashboard and click Request a Sending Limit Increase.
  • This opens a form; we fill it in and send it to the AWS team. Once it is processed, we should be good to send bulk emails.
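Before SES can send anything, the message itself has to be built. Here is a sketch using only the standard library's `email` module; the addresses are placeholders (and would have to be SES-verified while the account is in the sandbox), and the actual send through SES would use a separate API call that is not shown.

```python
from email.message import EmailMessage

# Build a plain-text message locally. SES's raw-email API accepts exactly
# this kind of serialised MIME document as its message body.
msg = EmailMessage()
msg["From"] = "sender@example.com"              # must be SES-verified
msg["To"] = "verified-recipient@example.com"    # sandbox: also verified
msg["Subject"] = "SES test email"
msg.set_content("Hello from Amazon SES!")

raw_bytes = msg.as_bytes()   # the serialised form an email API would send
print(msg["Subject"])  # -> SES test email
```

Building the MIME document yourself, instead of using a simple "send" form, is what allows attachments, HTML bodies, and custom headers later on.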

Introduction to Analytics and Machine Learning

AWS Analytic Services

  • Elastic MapReduce (EMR) is AWS's Hadoop framework as a service. We can also run other frameworks in Amazon EMR that integrate with Hadoop, such as Apache Spark, HBase, Presto, and Flink. Data can be analysed by EMR in a number of AWS data stores, including Amazon S3 and Amazon DynamoDB.
  • Amazon Athena allows you to analyse data stored in an Amazon S3 bucket using standard SQL statements.
  • Amazon Elasticsearch Service is a fully managed service for elastic.co's Elasticsearch framework. This allows high-speed querying and analysis of data stored on AWS.
  • Amazon Kinesis allows you to collect, process, and analyse real-time streaming data.
  • Amazon QuickSight is a business intelligence reporting tool, similar to Tableau. If you are a Java developer, it is similar to BIRT, and it is fully managed by AWS.

AWS Machine Learning Services

  • AWS DeepLens is a deep-learning-enabled video camera. It has a deep learning software development kit that allows you to create advanced vision-system applications.
  • Amazon SageMaker is AWS's flagship machine learning product, which allows us to build and train our own machine learning models, then deploy them to the AWS Cloud and use them as a back end for our applications.
  • Amazon Rekognition provides deep-learning-based analysis of video and images.
  • Amazon Lex allows you to build conversational chatbots, which can be used in many applications such as first-line support for customers.
  • Amazon Polly provides natural-sounding text-to-speech.
  • Amazon Comprehend uses deep learning to analyse text for insights and relationships. This can be used for customer analysis or for advanced searching of documents.
  • Amazon Translate uses machine learning to accurately translate text between languages.
  • Amazon Transcribe is an automatic speech recognition service that can analyse audio files stored in Amazon S3 and return the transcribed text.

Work is in Progress. Keep visiting and stay tuned for updates.