[TIL] Deploy AI Model Developed w/ Flask API Using AWS EC2 & Connect to Nest.js Project

06/20/23 · 2 min read


Using the AI Model for Emergency Level Prediction: Scaling Up the EC2 Server

t2.micro (free tier)

🚫 Killed: indicates that the process was terminated by the operating system. ⇒ pip install tensorflow was killed by the OS because the free-tier instance ran out of memory during the installation.

t3.small

🚫 Similar to t2.micro, it was killed.

t3.medium

✅ pip install tensorflow installed TensorFlow successfully. The Flask API does not require a GPU, since it only loads a pre-trained model to predict emergency levels, so a CPU-only EC2 instance is sufficient. (A rough sketch of the app itself follows the command list below.)

  • Additional commands executed on the EC2 instance / Installed modules:

    • sudo apt update

    • sudo apt install python3-pip

    • sudo apt-get install python3-venv

    • sudo apt-get install default-jdk ⇒ JDK installation required for konlpy

    • sudo pip install konlpy

    • pip install intel-tensorflow==2.12.0 (Installed to address version-related errors)

      • pip install -U keras_applications==1.0.6 --no-deps

      • pip install -U keras_preprocessing==1.0.5 --no-deps
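For context, the deployed app.py is roughly the following shape: a minimal sketch, not the actual project code, assuming the pre-trained model is saved as model.h5 and served from a /predict route (the route name, input field, and preprocessing are placeholders).

    # app.py: rough sketch (not the actual project code) of a Flask API that loads
    # a pre-trained Keras model once at startup and serves predictions on CPU.
    import numpy as np
    from flask import Flask, request, jsonify
    from tensorflow import keras

    app = Flask(__name__)
    model = keras.models.load_model("model.h5")  # hypothetical path to the saved model

    def preprocess(text):
        # Placeholder: the real service tokenizes the Korean input with konlpy
        # and builds the tensor the model expects; the shape here is assumed.
        return np.zeros((1, 100))

    @app.route("/predict", methods=["POST"])
    def predict():
        text = request.get_json()["text"]  # raw report text from the client
        scores = model.predict(preprocess(text))
        return jsonify({"emergency_level": int(np.argmax(scores))})

    if __name__ == "__main__":
        # Bind to all interfaces so the iptables redirect from port 80 can reach it.
        app.run(host="0.0.0.0", port=5000)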

Connecting to the EC2 Server Instance

  1. Port forwarding

    sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 5000

    This rule redirects incoming HTTP traffic on port 80 to the Flask server listening on port 5000, so the app does not have to run as root just to bind a privileged port.

    A silly mistake on my part rather than a real troubleshooting issue: I kept hitting the t3.small instance's IP address instead of the t3.medium one.
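    Once the forwarding rule is in place (and the correct instance's IP is used), a quick sanity check from a local machine looks roughly like this; the public IP, /predict route, and payload field are placeholders, not values from the actual project.

    # Hit port 80 on the EC2 public IP; iptables redirects the request to the
    # Flask server on port 5000. IP, route, and payload are assumptions.
    import requests

    EC2_PUBLIC_IP = "x.x.x.x"  # the t3.medium instance's public IP (placeholder)

    resp = requests.post(
        f"http://{EC2_PUBLIC_IP}/predict",
        json={"text": "응급 상황 설명 텍스트"},  # placeholder Korean input ("emergency description text")
        timeout=10,
    )
    print(resp.status_code, resp.json())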

  2. Running the Flask server in the background

    nohup python -u app.py & ⇒ Run in the background (-u keeps Python's output unbuffered so logs appear in nohup.out right away)

    tail -f nohup.out ⇒ Check the logs

    lsof -i :5000 ⇒ Check the PID

    sudo kill -9 [PID] ⇒ Terminate the running process

    Flask - Running in the Background with nohup
