Recognition of beam elements in an image using AI for mrbeam.app.
An example of recognition:
Source: api/
The role of the Main API is to:
- Provide access to the model inference service.
- Optionally log predictions and provide access to them for further analysis.
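The two responsibilities above can be sketched as a single entry point that forwards an image to the inference service and, when enabled, records the result. This is a minimal illustration: names like `predict`, `infer`, and `PredictionLog` are assumptions, not the project's actual identifiers.

```python
# Hypothetical sketch of the Main API's two jobs: forward images to the
# inference service and (optionally) log predictions for later analysis.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class PredictionLog:
    """In-memory stand-in for a prediction store (e.g. a database table)."""
    records: List[dict] = field(default_factory=list)

    def add(self, image_id: str, prediction: dict) -> None:
        self.records.append({"image_id": image_id, "prediction": prediction})

def predict(
    image_bytes: bytes,
    image_id: str,
    infer: Callable[[bytes], dict],
    log: Optional[PredictionLog] = None,
) -> dict:
    """Forward the image to the inference service; log the result if enabled."""
    prediction = infer(image_bytes)
    if log is not None:  # logging is optional, per the README
        log.add(image_id, prediction)
    return prediction
```

With a stubbed inference callable, `predict(b"...", "img-1", infer=lambda b: {"boxes": []}, log=log)` returns the prediction and appends one record to the log.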
TODO:
- Add a user service.
- Add a login service.
Source: ml-service/
Aqueduct was chosen for model inference. At the moment, YOLOv5 is used as the baseline model. As is customary in Aqueduct, model inference is divided into several tasks: loading an image, preprocessing it, running the model, and postprocessing the results. The code for this can be found here.
TODO:
- Add an option to download weights from W&B and/or MLflow model registries.
- Support models other than YOLOv5.
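The preprocessing stage of the pipeline described above (load, preprocess, run model, postprocess) can be sketched as follows. The 640x640 input size matches the YOLOv5 default; the function is an illustration, not the project's actual code, and uses a plain numpy resize to stay dependency-free where a real pipeline would letterbox via OpenCV or PIL.

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 640) -> np.ndarray:
    """Turn an HxWx3 uint8 image into a (1, 3, size, size) float32 tensor."""
    h, w = image.shape[:2]
    # Nearest-neighbour resize with index arrays (keeps the sketch numpy-only).
    rows = (np.arange(size) * h // size).clip(0, h - 1)
    cols = (np.arange(size) * w // size).clip(0, w - 1)
    resized = image[rows][:, cols]
    tensor = resized.astype(np.float32) / 255.0  # scale pixels to [0, 1]
    tensor = tensor.transpose(2, 0, 1)           # HWC -> CHW
    return tensor[np.newaxis]                    # add batch dimension
```

The resulting tensor is what an ONNX runtime session for a YOLOv5-style detector would consume; postprocessing then decodes boxes and class scores from the model output.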
You can find a dataset for recognizing different beam elements in an image on the Roboflow page: beams dataset. It contains the following beam elements:
- 0 - The whole beam
- 1 - Distribution load
- 2 - Fixed support
- 3 - Force
- 4 - Momentum
- 5 - Pin support
- 6 - Roller
Over time the dataset will be extended with new samples, but even now it can be used to get reasonable results. Download the dataset and place it in the `data` folder.
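When decoding detector output, the class indices above map to human-readable labels. A minimal sketch, with the class order taken directly from the list above:

```python
# Class indices from the beams dataset (see the list above).
BEAM_CLASSES = {
    0: "The whole beam",
    1: "Distribution load",
    2: "Fixed support",
    3: "Force",
    4: "Momentum",
    5: "Pin support",
    6: "Roller",
}

def label_detections(class_ids):
    """Translate raw class indices from the detector into readable labels."""
    return [BEAM_CLASSES[int(i)] for i in class_ids]
```

For example, `label_detections([2, 3])` yields `["Fixed support", "Force"]`.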
Work in progress
The easiest way to get started is to use Docker and docker-compose.
- Services are configured through environment variables. The list of variables is given in the `.env` file.
- The environment variable `$ML_MODEL_WEIGHTS` specifies which weights to use for recognizing beam elements. Weights must be in ONNX format. You can find some of the pretrained weights on Google Drive: weights. Download them and place them in the `weights` folder.
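A service might resolve `$ML_MODEL_WEIGHTS` along these lines; this is a hedged sketch, and the default path shown is an assumption for illustration, not the project's real default.

```python
import os
from pathlib import Path

def resolve_weights_path(default: str = "weights/yolov5.onnx") -> Path:
    """Return the ONNX weights path from $ML_MODEL_WEIGHTS, or a default."""
    path = Path(os.environ.get("ML_MODEL_WEIGHTS", default))
    if path.suffix != ".onnx":  # weights must be in ONNX format, per the README
        raise ValueError(f"Weights must be in ONNX format, got: {path}")
    return path
```

Setting `ML_MODEL_WEIGHTS=weights/custom.onnx` in the `.env` file would then point the service at the custom weights.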
- TODO: (Optional) Run e2e tests.
- Build and run the whole application:

  $ docker-compose --profile production up --build