Service Introduction

(Demo video: Video Label)

flask.celery.rabbitmq

Index

1. Prerequisites

Our service was created through the AI Application Development by Silicon Valley Engineering program organized by Headstart Silicon Valley: http://www.learnflagly.com/courses/347/

2. Installation Process

$ pip3 install -r requirements.txt

3. Getting Started

  • Please complete the CUDA installation and install the requirements first

docker build

docker-compose up --build

model load

 wandb link: https://wandb.ai/pypyp/aiinternship/runs/2u55kqpw/files?workspace=user-sykim1106
 download model-best.h5 from the link above and update the model file path in the code
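
A minimal sketch of pointing the code at the downloaded weights, assuming a Keras .h5 model; MODEL_PATH is an illustrative name, not necessarily the variable used in this repository.

```python
# Minimal sketch: load the wandb-downloaded weights (assumes a Keras .h5 model).
from tensorflow.keras.models import load_model

MODEL_PATH = "./model-best.h5"  # update to wherever you saved the download

model = load_model(MODEL_PATH)
model.summary()  # sanity check that the weights loaded
```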

Requires Google Cloud Storage

https://cloud.google.com/docs/

Go to Cloud Storage and create a bucket

https://cloud.google.com/docs/authentication/production?hl=ko#create-service-account-console

Issue a JSON key file for the service account and add it to your cloned repository

https://cloud.google.com/docs/authentication/production?hl=ko#create_service_account

In tasks.py, modify the code so the key file path and bucket name match your setup

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "./robotic-haven-356701-952019494169.json"
bucket_name = 'savedcmbucket'    # enter the name of the bucket created for the service account
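
For reference, a hedged sketch of how an upload to that bucket might look with the official google-cloud-storage client; upload_original is an illustrative helper, not a function from this repository.

```python
# Sketch: upload a file to the configured bucket (google-cloud-storage client).
import os
from google.cloud import storage

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "./your-service-account-key.json"

def upload_original(bucket_name: str, local_path: str, blob_name: str) -> str:
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    blob.upload_from_filename(local_path)
    return f"gs://{bucket_name}/{blob_name}"

# e.g. upload_original("savedcmbucket", "./original.png", "originals/original.png")
```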

tasks.py


RABBITMQ

RabbitMQ assigns long-running tasks to the CELERY WORKERS.
The CELERY WORKERS store the original image from the preprocessing function and the LIME image returned by predict_and_lime on the server.
Finally, each task returns the prediction, patient_id, and study_modality.

Each CELERY WORKER would otherwise have to load its own deep learning model, which is expensive, so one global model is loaded and shared.
Of course, when multiple requests arrive at the same time, responses are delayed because results must be waited for sequentially from the single model.
An alternative is to use a BATCHSIZE of two or more so that multiple inputs can be received and processed at the same time; a minimal sketch of this pattern is shown below.
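
A minimal sketch of the shared-global-model pattern, assuming a Keras model and a local RabbitMQ broker; run_prediction and the broker URL are illustrative, not copied from tasks.py.

```python
# Sketch: Celery app on RabbitMQ whose workers share one lazily loaded model.
import numpy as np
from celery import Celery
from tensorflow.keras.models import load_model

app = Celery("tasks", broker="amqp://guest:guest@localhost:5672//")

_model = None  # loaded once per worker process, then reused globally

def get_model():
    global _model
    if _model is None:
        _model = load_model("./model-best.h5")
    return _model

@app.task
def run_prediction(image_batch):
    # With BATCHSIZE > 1, several inputs go through the model in one call,
    # which mitigates the sequential waiting described above.
    model = get_model()
    batch = np.asarray(image_batch, dtype=np.float32)
    return model.predict(batch, batch_size=len(batch)).tolist()
```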

predict_module.py

preprocess


Converts a grayscale DICOM file into a 3-channel image file (sketch below).
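
A sketch of this conversion, assuming pydicom is installed; the min-max normalization is an illustrative choice and may differ from what predict_module.py does.

```python
# Sketch: read a grayscale DICOM and stack it into a 3-channel array.
import numpy as np
import pydicom

def dicom_to_rgb(path: str) -> np.ndarray:
    ds = pydicom.dcmread(path)
    img = ds.pixel_array.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # scale to [0, 1]
    return np.stack([img, img, img], axis=-1)  # (H, W) -> (H, W, 3)
```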

predict & lime explain


The DICOM file is preprocessed to match the model's input shape, and the other DICOM attributes are extracted.
The preprocessed image is fed to the model, and a mask explaining the prediction is returned by the LIME explainer.

The result (model prediction, LIME mask) is sent to the Django server,
and the original image is saved to Google Cloud Storage; a hedged sketch of the explain step follows.
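
This sketch uses the lime package's image explainer; the exact parameters (top_labels, num_samples) and return values used by predict_module.py may differ.

```python
# Sketch: run LIME's image explainer on a preprocessed 3-channel image.
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_and_lime(model, rgb_image):
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        rgb_image.astype("double"), model.predict,
        top_labels=1, num_samples=100,
    )
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, hide_rest=False,
    )
    lime_overlay = mark_boundaries(img, mask)  # the "LIME image" to store
    prediction = model.predict(rgb_image[None, ...])
    return prediction, lime_overlay
```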

4. File Manifest & API

📦root
 ┣ 📜api.py
 ┣ 📜docker-compose.yml
 ┣ 📜Dockerfile
 ┣ 📜predict_module.py
 ┣ 📜README.md
 ┣ 📜requirements.txt
 ┗ 📜tasks.py

5. Copyrights / End User License

This project is not intended for commercial use; please do not use it for commercial purposes.

| Name | Role | GitHub |
| --- | --- | --- |
| 전준형 | Team Leader, Backend | @Joon_Hyoung |
| 김민지 | Frontend | @Minji Kim |
| 김성윤 | Frontend, Backend | @sykim1106 |
| 김정원 | ML | @grdnr13 |
| 전경희 | Frontend, Backend | @kjeon0901 |
