This project is the implementation of a diploma thesis for the HSE Master's program.
Topic: Applying machine learning methods to estimate biomass from Sentinel-1 and Sentinel-2 satellite imagery.
Description: data from The BioMassters competition.
uvicorn src.app:app
You should see a message that uvicorn has started on localhost:8000:
INFO: Started server process [8084]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
docker build --tag agbm:latest .
docker run -p 8000:80 -d --rm --name agbm agbm:latest
- visit localhost:8000/echo or localhost:8000/docs#
- send a GET request via curl:
curl localhost:8000/echo
{"Hello, diploma!"}
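Under the hood, uvicorn serves an ASGI application. Below is a minimal sketch of an /echo handler written as a bare ASGI callable; the real src.app is assumed to be a FastAPI application, and its exact response payload may differ:

```python
import json

async def app(scope, receive, send):
    # Minimal ASGI sketch of the /echo endpoint (hypothetical handler,
    # not the actual src.app FastAPI code).
    assert scope["type"] == "http"
    body = json.dumps("Hello, diploma!").encode()
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": body})
```

Any ASGI server (uvicorn included) can serve a callable with this shape.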
- create a .env file. See .env_example for a reference
- build and run services:
docker compose build
docker compose up
The simplest way is to use curl
(or you can visit the docs page localhost:8000/docs#):
curl localhost:8000/users/ -H 'accept: application/json'
Note: the user table will be empty the first time you run the service.
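The same call can be made from Python. A sketch, assuming the base URL from this README; the `opener` parameter is injectable only so the parsing logic can be exercised without a running service:

```python
import json
from urllib.request import Request, urlopen

def list_users(base="http://localhost:8000", opener=urlopen):
    # Same request as the curl call above; base URL is the README default.
    req = Request(f"{base}/users/", headers={"accept": "application/json"})
    with opener(req, timeout=5) as resp:
        return json.loads(resp.read())
```

On a fresh database, `list_users()` returns an empty list.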
- init the /data_sample folder (to avoid monkey work we will use the simplest scripts):
$ python -m src.utils.init_data_samples
- create a task for inference
$ python -m src.utils.client_inference_task
files:
fed6bb57_S1_00.tif
fed6bb57_S1_01.tif
fed6bb57_S1_02.tif
fed6bb57_S1_03.tif
fed6bb57_S1_04.tif
fed6bb57_S1_05.tif
fed6bb57_S1_06.tif
fed6bb57_S1_07.tif
fed6bb57_S1_08.tif
fed6bb57_S1_09.tif
fed6bb57_S1_10.tif
fed6bb57_S1_11.tif
--------------------------------------------------
fb38b477-1add-481e-9bc4-243f4c7f28a1
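The listing above follows a naming convention of chip id, sensor, and month index (`<chip_id>_S1_<month>.tif`). That convention can be parsed like this; `group_chips` is a hypothetical helper for illustration, not part of `src.utils.client_inference_task`:

```python
import re
from collections import defaultdict

def group_chips(filenames):
    # Group Sentinel-1 tiles by chip id and collect their month indices,
    # following the naming convention seen in the listing above.
    chips = defaultdict(list)
    for name in filenames:
        m = re.match(r"([0-9a-f]+)_S1_(\d{2})\.tif$", name)
        if m:
            chips[m.group(1)].append(int(m.group(2)))
    return {cid: sorted(months) for cid, months in chips.items()}
```

For the sample chip this yields one entry with twelve month indices (0 through 11).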
- get the result of the task (note: we will use the task id printed on the last line)
$ python -m src.utils.client_inference_result -i fb38b477-1add-481e-9bc4-243f4c7f28a1
response code: 200 - response content saved to response.tif
You should find a response.tif file in your folder. (Keep calm: it's OK if you see nothing in response.tif. I just switched off the model to save some resources.)
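Conceptually, the create-task / get-result pair above is a task store keyed by UUID; the real service is assumed to use Celery with a result backend. A minimal in-memory sketch of that contract, with hypothetical names:

```python
import uuid

# In-memory sketch of the task-id flow the two client scripts rely on.
tasks = {}

def create_task(payload):
    # Register a task and hand the id back to the client.
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"status": "PENDING", "result": None, "payload": payload}
    return task_id

def get_result(task_id):
    # 404: unknown id; 202: not finished yet; 200: result ready.
    task = tasks.get(task_id)
    if task is None:
        return 404, None
    if task["status"] != "SUCCESS":
        return 202, None
    return 200, task["result"]
```

The `response code: 200` line in the client output corresponds to the last branch: the task finished and its bytes were saved to response.tif.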
- bonus: there is a Flower service in docker-compose.yml, so you can visit localhost:5555
Everything needed to set up monitoring of the service (Prometheus, exporters, Grafana) is in the docker-compose.yaml.
The following components are monitored:
- application
- celery (via flower)
- celery via celery-exporter
- postgresql via postgresql-exporter
- host via node-exporter (Be careful: if your host runs Windows, this one will fail. The best way is to migrate to Linux, or drop node-exporter from docker-compose.yml)
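Every exporter above serves its numbers in the Prometheus text exposition format, which is what Prometheus scrapes. A tiny sketch of rendering that format; `render_metrics` is illustrative, not from this repo:

```python
def render_metrics(counters):
    # Render counters in the Prometheus text exposition format:
    # a "# TYPE" hint line followed by "name value" sample lines.
    lines = []
    for name, value in sorted(counters.items()):
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

This is the plain-text payload you see when you open any exporter's /metrics URI in a browser.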
- run the service via docker compose:
docker compose up --build
- check that every exporter is ready. Each of the listed URIs should return its current status:
- check that Prometheus scrapes every exporter above. Visit http://localhost:9090/targets; every target should be healthy.
- check that Grafana successfully connects to Prometheus. Visit http://localhost:3000/dashboards. A few dashboards should be found:
  - Application status - a single chart to prove that we successfully pick up metrics from our app
  - Celery / Tasks / By Task - shows details about Celery. It may be empty if you haven't created an inference task yet (if so, just go through the "How to check if creating an inference task works properly" section)
  - Docker Prometheus Monitoring - work in progress. It should show Docker container metrics, but for now it just counts how many exporters we have
  - PostgreSQL Database - monitors Postgres. It works fine, but needs a few minutes to load metrics.
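The readiness checks above can be scripted. A sketch: the endpoint URLs are assumptions based on the default ports for Prometheus and Grafana, so adjust them to your compose setup; the `fetch` parameter is injectable only to make the loop testable offline:

```python
from urllib.request import urlopen

# Endpoints to probe; URLs are assumptions based on default ports.
ENDPOINTS = [
    "http://localhost:9090/-/healthy",   # Prometheus liveness
    "http://localhost:3000/api/health",  # Grafana health
]

def check(urls, fetch=lambda url: urlopen(url, timeout=5).status):
    # Probe each URL and collect the status code, or the error message
    # if the endpoint is unreachable.
    results = {}
    for url in urls:
        try:
            results[url] = fetch(url)
        except OSError as exc:
            results[url] = str(exc)
    return results
```

Run `check(ENDPOINTS)` while the compose stack is up; a healthy stack returns 200 for each URL.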