How To Build and Deploy a Serverless Machine Learning App on AWS | by Ahmed Besbes | Jun, 2021


You can watch this section on YouTube to learn more about GANs, the CartoonGAN model, and how to build the script test_from_code.py used to transform the images.

Part 1 — CartoonGAN model (video by the author)

The goal of this section is to deploy the CartoonGAN model on a serverless architecture so that it can be requested through an API endpoint… from anywhere on the internet.

In a serverless architecture using Lambda functions, for example, you don’t have to provision servers yourself. Roughly speaking, you only write the code that’ll be executed and list its dependencies, and AWS will manage the servers for you automatically and take care of the infrastructure.

This has a lot of benefits:

  1. Cost efficiency: You don’t pay for a serverless architecture when it isn’t being used. By contrast, an EC2 instance that is running but not processing any requests still costs you money.
  2. Scalability: If a serverless application starts receiving a lot of requests at the same time, AWS scales it automatically by allocating more resources to handle the load. If you managed the load yourself with EC2 instances, you would have to allocate more machines manually and set up a load balancer.

Of course, serverless architectures aren’t a perfect fit for every use case. In some situations they’re not practical at all: when you need real-time or low-latency responses, use WebSockets, do heavy processing, etc.

Since I frequently build machine learning models and integrate them into web applications for inference only, I found serverless architectures a good match for these specific use cases.

Let’s have a look at the architecture of the app before deploying the lambda function:

The architecture behind Cartoonify (image by the author)

On the right side, we have a client built in React, and on the left side, we have a backend deployed on a serverless AWS architecture.

The backend and the frontend communicate with each other over HTTP requests. Here is the workflow:

  • An image is sent from the client through a POST request.
  • The image is then received via API Gateway.
  • API Gateway triggers a Lambda function to execute and passes the image to it.
  • The Lambda function starts running: it first fetches the pre-trained models from S3 and then applies the style transformation on the image it received.
  • Once the Lambda function is done running, it sends the transformed image back to the client through API Gateway again.

We are going to define and deploy this architecture by writing it as a YAML file using the Serverless framework, an open source tool to automate deployment to AWS, Azure, Google Cloud, etc.

Isn’t this cool, writing your infrastructure as code?

Here are the steps to follow:

1. Install the Serverless framework on your machine.

npm install -g serverless

2. Create an IAM user on AWS with administrator access and name it cartoonify. Then configure serverless with this user’s credentials (I won’t show you mine — put in yours, buddy).
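With the cartoonify user’s access keys in hand, you can configure the Serverless CLI like this (the keys below are placeholders; put in your own):

```shell
serverless config credentials \
    --provider aws \
    --key <YOUR_ACCESS_KEY_ID> \
    --secret <YOUR_SECRET_ACCESS_KEY> \
    --profile cartoonify
```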

3. Bootstrap a serverless project with a Python template at the root of this project.

serverless create --template aws-python --path backend

4. Install two Serverless plugins to manage the Lambda dependencies and prevent the cold start of the lambda function:
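Given the two jobs described above (bundling the Python dependencies and keeping the function warm), the plugins in question are presumably serverless-python-requirements and serverless-plugin-warmup, installed like so:

```shell
sls plugin install -n serverless-python-requirements
sls plugin install -n serverless-plugin-warmup
```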

5. Create a folder called network inside backend and put the following two files in it:

  • Transformer.py: a script that holds the architecture of the generator model.
  • A blank __init__.py

6. Modify the serverless.yml file with the following sections:

  • The provider section where we set up the provider, the runtime, and the permissions to access the bucket. Note here that you’ll have to specify your own S3 bucket.
  • The custom section where we configure the plugins:
  • The package section where we exclude unnecessary folders from the production:
  • The functions section where we create the Lambda function, configure it, and define the events that will invoke it. In our case, the lambda function is triggered by a post request on API Gateway on the path transform.
  • The plugins section to list external plugins:
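Putting those five sections together, serverless.yml might look roughly like the sketch below. The bucket name, region, runtime version, memory size, and timeout are assumptions based on the descriptions above; adapt them to your own setup:

```yaml
service: cartoonify

provider:
  name: aws
  runtime: python3.8
  region: eu-west-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource: arn:aws:s3:::your-cartoonify-bucket/*   # specify your own S3 bucket

custom:
  pythonRequirements:
    dockerizePip: true    # build the dependencies inside Docker
    zip: true             # zip them to fit within Lambda's size limits
  warmup:
    enabled: true         # ping the function periodically to prevent cold starts

package:
  exclude:
    - node_modules/**
    - frontend/**

functions:
  transform:
    handler: src/handler.lambda_handler
    memorySize: 3008
    timeout: 300
    events:
      - http:
          path: transform
          method: post
          cors: true

plugins:
  - serverless-python-requirements
  - serverless-plugin-warmup
```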

7. List the dependencies inside requirements.txt (at the same level of serverless.yml).
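Since the model is a PyTorch generator that manipulates images, requirements.txt plausibly contains something like the following (the exact packages and any version pins are assumptions):

```text
torch
torchvision
Pillow
```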

8. Create an src folder inside backend and put handler.py in it to define the lambda function. Then modify handler.py.

First, add the imports:

Define two functions inside handler.py:

  • img_to_base64_str to convert binary images to base64 strings
  • load_models to load the four pre-trained models inside a dictionary and then keep them in memory

And finally, the lambda_handler that will be triggered by the API Gateway:
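As a minimal, self-contained sketch of what handler.py could look like: the base64 helper and the request/response plumbing are concrete, but load_models is stubbed (the real version would pull the weights from S3 with boto3 and load them into the Transformer network with torch), and the style transformation itself is left as a comment. Field names in the JSON payload and the four style names are assumptions:

```python
import base64
import json

# The four CartoonGAN styles — an assumption about what the bucket contains.
STYLES = ["Hayao", "Hosoda", "Paprika", "Shinkai"]

models = {}  # kept in memory across warm invocations


def img_to_base64_str(img_bytes):
    """Encode raw image bytes as a UTF-8 base64 string for the JSON response."""
    return base64.b64encode(img_bytes).decode("utf-8")


def load_models():
    """Fetch the four pre-trained generators and cache them in `models`.

    The real implementation would use boto3 and torch, roughly:
        buffer = BytesIO()
        s3.download_fileobj(bucket, key, buffer)
        net = Transformer()
        net.load_state_dict(torch.load(buffer, map_location="cpu"))
    Here the networks are stubbed with None so the sketch stays self-contained.
    """
    for style in STYLES:
        models.setdefault(style, None)
    return models


def lambda_handler(event, context):
    """Entry point invoked by API Gateway on POST /transform."""
    body = json.loads(event["body"])
    image_bytes = base64.b64decode(body["image"])
    load_models()
    # ...run models[body["style"]] on the decoded image here...
    output = img_to_base64_str(image_bytes)  # this sketch just echoes the input back
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"output": output}),
    }
```

Because the models dictionary lives at module level, it survives between warm invocations, so the S3 download only happens on a cold start.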

Now you’re done. The file structure of the backend should now look like this:

File structure of the backend folder (image by the author)

9. Start Docker before deploying.

10. Deploy the lambda function.

cd backend/
sls deploy

Deployment may take up to ten minutes, so go grab a ☕️.

What happens here, among other things, is that Docker builds an image of the Lambda deployment package, then Serverless extracts the dependencies of this environment into a zip archive before uploading it to S3.

Once the Lambda function is deployed, you’ll see the URL of the API endpoint you can request.

Open a Jupyter notebook to test it by loading an image, converting it to base64, and sending it inside a payload.
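The notebook test described above can be sketched as follows. The endpoint URL is a placeholder for the one printed by `sls deploy`, and the payload schema (an "image" field holding the base64 string, plus a "style" field) is an assumption:

```python
import base64
import json
from urllib import request

# Hypothetical endpoint printed by `sls deploy` — replace with yours.
API_URL = "https://abc123.execute-api.eu-west-1.amazonaws.com/dev/transform"


def build_payload(image_bytes, style="Hayao"):
    """Wrap raw image bytes in the JSON body the Lambda expects (assumed schema)."""
    return json.dumps(
        {"image": base64.b64encode(image_bytes).decode("utf-8"), "style": style}
    )


def cartoonify(image_path, style="Hayao"):
    """POST a local image to the API and return the base64-encoded transformed image."""
    with open(image_path, "rb") as f:
        payload = build_payload(f.read(), style)
    req = request.Request(
        API_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["output"]
```

In a notebook you would then decode the returned string and display it, e.g. with PIL’s `Image.open(BytesIO(base64.b64decode(output)))`.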

Demo of the API — (screenshot by the author)

If you want to follow this section step by step so that you don’t miss anything, you can watch it on YouTube.

Part 2 — Deploying on AWS Lambda (video by the author)

This section covers building a simple React interface to interact with the model.

I wanted this interface to be as user-friendly as possible to visualize the style transformation in a very simple way.

Luckily, I found this nice React component that lets you compare two images side by side and move between them by dragging a slider.

Before running the React app and building it, you’ll need to specify the API URL of the model you just deployed. Go inside frontend/src/api.js and change the value of baseUrl.

  • To run the React app locally:
cd frontend/
yarn install
yarn start

This will start it at http://localhost:3000.

  • To build the app before deploying it to Netlify:
yarn build

This will create the build/ folder, which contains a build of the application ready to be served on Netlify.

You can watch this section on YouTube to understand how the code is structured and the other React components being used.

Part 3 — Building a React interface (video by the author)

In this last section, we’ll cover deploying the front interface.

There are many ways to deploy a React app so that it goes live on the internet and anyone can access it. One of them is using Netlify: a great platform that automates building and deploying applications in many frameworks (React, Vue, Gatsby, etc.).

  • To be able to deploy on Netlify, you’ll need an account. It’s free: Head over to Netlify to sign up.
  • Then you’ll need to install netlify-cli:
npm install netlify-cli -g
  • Authenticate the Netlify client with your account:
netlify login
cd frontend/
netlify deploy

Netlify will ask you for the build folder (enter “build”) and a custom name for your app (this will appear as a subdomain of netlify.com). I’ve already picked “cartoonify,” but you can choose another one.

And this should be it! Now your app is live!

But wait! There’s something wrong with the URL: it’s prefixed with an alphanumeric code. We didn’t want that, right?

That’s because you deployed a draft URL!

To have a clean URL, you’ll have to deploy by specifying the prod option:

netlify deploy --prod

You can watch this section on YouTube for a live demo to understand how easy the deployment on Netlify can be.

Part 4 — deploying React on Netlify (video by the author)
