Cloud Servers for the AI

As my project would have to gather a large amount of data to improve the algorithm, cloud servers would be the best option for saving and storing all of it.

What is Cloud Computing:

Cloud computing is a means of providing an unlimited number of Tarkas with access to the required computing infrastructure (on-demand network access, servers, the AI engine, storage, services and applications) with minimal management effort or service provider interaction.

 

Benefits of Cloud Computing:

  • Scalability: 

This allows for flexibility as the population of Tarkas changes. Cloud providers can quickly adjust the amount of computing resources to accommodate any increase or decrease in the number of Tarkas in operation. Hence if more storage were needed to benefit the AI algorithm, the provider could increase it centrally and automatically, rather than the user having to purchase upgrades and install them in their individual otter.
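The scaling decision described above can be pictured as a simple capacity calculation. This is only an illustrative sketch: the per-server capacity figure is an invented assumption, not a real AWS limit.

```python
import math

# Toy sketch of the provider's scaling decision: how many servers are
# needed as the Tarka population changes. TARKAS_PER_SERVER is an
# invented assumption for illustration only.
TARKAS_PER_SERVER = 500

def servers_needed(active_tarkas: int) -> int:
    """Scale up or down automatically with the number of otters online."""
    return max(1, math.ceil(active_tarkas / TARKAS_PER_SERVER))

print(servers_needed(1200))   # prints 3
print(servers_needed(80))     # scales back down; prints 1
```

The point is that this calculation runs centrally at the provider, so no individual otter ever needs a hardware upgrade.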

  • Resilience:

This means the product provides a much more reliable service, as having data saved and stored on the cloud ensures that it is backed up in case of power or hardware failures, as well as any other crisis. In terms of my project, the benefit of this is that if the server's hardware goes down, the data can quickly be accessed again. This would not be the case if the algorithm lived in the hardware of the otter, where a breakage would mean the otter had to be fixed or replaced, both taking a significant amount of time.

  • Flexibility: 

Cloud computing allows for much easier access to the stored data, as long as there is an internet connection. This means that for my project, if the algorithm and code needed to be changed or adjusted, it could easily be done from anywhere, and the change would then be applied to all of the otters, no matter where they are.

  • Access to Automatic Updates:

The cloud system would regularly be updated with the latest technology and improvements. As a result, the AI algorithm would constantly be updating and improving itself based on the intonation data it receives from users. By contrast, if the AI were fixed in the otter's hardware and it was making a mistake, it would keep making that mistake and never improve or fix itself.
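The advantage of one centrally updated algorithm can be sketched with a toy example. Everything here is an assumption for illustration: the pitch-variance feature, the threshold and the crude update rule are placeholders, not the real Tarka algorithm.

```python
# Illustrative sketch: one shared tone model held in the cloud.
# Every Tarka queries the same copy, so a correction made here
# reaches all otters immediately. The feature ("pitch variance"),
# labels and update rule are invented assumptions.

class CentralToneModel:
    def __init__(self, threshold=0.5):
        self.threshold = threshold   # assumed cutoff: calm vs agitated
        self.samples = []            # intonation data gathered from users

    def classify(self, pitch_variance: float) -> str:
        return "agitated" if pitch_variance > self.threshold else "calm"

    def learn(self, pitch_variance: float, correct_label: str) -> None:
        """Use feedback from users to correct the model centrally."""
        self.samples.append((pitch_variance, correct_label))
        if self.classify(pitch_variance) != correct_label:
            # crude update rule: snap the cutoff to the misclassified sample
            self.threshold = pitch_variance

model = CentralToneModel()
model.learn(0.4, "agitated")     # was misclassified as "calm"
print(model.classify(0.45))      # prints "agitated"
```

A hardware-only otter would be stuck with its original threshold forever; here, one `learn` call fixes the mistake for the whole fleet.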

 

Amazon Web Services (AWS):

I decided to use a server vendor as it would generally be less expensive than building a server myself or using a third-party consulting partner. Another added benefit is that vendors are more knowledgeable when it comes to the scaling and optimisation of their own hardware.

However, perhaps most importantly, a server vendor can provide a complete AI hardware/software stack (the underlying services needed), packaged with an all-encompassing support strategy that covers everything from the experimental stage to a well-integrated and scalable implementation of the AI solution.

According to Amazon:

  • 81% of deep learning projects in the cloud run on AWS
  • 85% of TensorFlow projects in the cloud run on AWS
  • Fastest training for popular deep learning models: AWS-optimised TensorFlow and PyTorch recorded the fastest training times for Mask R-CNN (object detection) and BERT (natural language processing).

Amazon SageMaker is a service provided by Amazon that allows machine learning models to be built, trained and deployed quickly. This is the software I would use to build my AI and deploy it onto the cloud.

In terms of security, AWS state that they offer the deepest set of security and encryption capabilities.

 

What I Would Use the Cloud-Based Servers For:

The cloud-based servers would be used to create a single framework and infrastructure that all Tarkas can access. This means the algorithm is constantly improving through continual updates, as well as through the saving and storage of the data gathered by the Tarkas.

Each Tarka would then connect to the servers through a Wi-Fi module in order to access the algorithm that makes it function. As the user talks to it, it would pick up their tone of voice using a microphone and relay the information to the cloud servers, where it would run through the algorithm and be sent back to the Tarka so that it could respond accordingly through its speaker.
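The round trip described above can be sketched in miniature. This is a simulation only: the JSON message format, the `classify_tone` step and the response names are illustrative assumptions, not the real Tarka protocol.

```python
import json

def classify_tone(pitch_variance: float) -> str:
    """Stand-in for the cloud-hosted AI algorithm (assumed threshold)."""
    return "agitated" if pitch_variance > 0.5 else "calm"

def cloud_handle(request_body: str) -> str:
    """Runs on the cloud server: decode the Tarka's message, run the
    algorithm, and send back what the speaker should play."""
    msg = json.loads(request_body)
    tone = classify_tone(msg["pitch_variance"])
    reply = {"tarka_id": msg["tarka_id"],
             "response": "soothing_phrase" if tone == "agitated" else "chirp"}
    return json.dumps(reply)

# On the otter: microphone features go up over Wi-Fi...
request = json.dumps({"tarka_id": "otter-01", "pitch_variance": 0.8})
# ...and the reply comes back down for the speaker to play.
reply = json.loads(cloud_handle(request))
print(reply["response"])   # prints "soothing_phrase"
```

Keeping `classify_tone` on the server side is what lets the algorithm be updated in one place for every otter.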

 

 
