# Crop detection

This repository contains a Keras AI model trained to detect crops and vegetation in images of vegetable gardens ("orto" in Italian) taken perpendicular to the terrain. The model is accessible through an installable Python module: `crop_detection`.

## Requirements & module installation

- Install OpenCV with Python 3 support
  - `import cv2` must work without throwing exceptions
  - on Raspberry Pi either compile from source, or `sudo apt-get install libatlas3-base` and then let `setup.py` install `opencv-python`
  - on Raspberry Pi you may also need `sudo apt install libatlas-base-dev`
- Install TensorFlow >= 2.4.0
  - on Raspberry Pi use this repo
  - don't worry if TensorFlow and OpenCV require different versions of numpy: just make sure the latest one is installed and ignore pip's warnings
- The `setup.py` file can be used to install the module: just run `python3 -m pip install .` in the repository root
  - it takes care of installing the needed dependencies (OpenCV and TensorFlow), except on Raspberry Pi, where you should install them manually as explained above
  - note: pip may emit warnings that can be silenced by appending `--use-feature=in-tree-build` to the command, but they can also simply be ignored
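Once everything is installed, a quick sanity check along these lines (a sketch, not a script shipped with the repository) should run without exceptions:

```python
# Quick sanity check (hypothetical script, not part of the repository):
# both imports must succeed and TensorFlow must report a version >= 2.4.0.
import cv2
import numpy as np
import tensorflow as tf

print("OpenCV:", cv2.__version__)
print("NumPy:", np.__version__)
print("TensorFlow:", tf.__version__)  # expected >= 2.4.0
```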

## Repository file tree

- `crop_detection/` is a Python 3 module that can be imported or installed
- `converters/` contains scripts useful for building a dataset in the `dataset/` folder, see *Using the converters* below
- `dataset/` will contain, once generated, training and validation data with both images and labels
- `datasets_raw/` contains the raw datasets you will download
- `datasets_raw/cyberorto/` contains images taken by MindsHub's cyberorto, along with scripts that generate labels based on pixel color heuristics (see the `generators/cyberorto_*.py` scripts and the illustrative sketch after this list)
- `models/` contains some trained models, see *Model name* below
- `training/` contains scripts that instantiate a new model, train it and perform data augmentation
- `visualizers/` contains scripts that help visualize what is happening and how the various models perform
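As context for the cyberorto label generators mentioned above, here is an illustrative sketch of a pixel-color heuristic, not the actual `generators/cyberorto_*.py` code: it simply thresholds green hues in HSV space to mark vegetation pixels.

```python
# Illustrative sketch of a pixel-color-heuristic label generator,
# NOT the actual generators/cyberorto_*.py scripts: it marks as
# "vegetation" every pixel whose hue falls in a green range.
import cv2
import numpy as np

def vegetation_mask(image_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask (255 = vegetation) based on green HSV hues."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Hue ~35-85 covers most greens in OpenCV's 0-179 hue range; the
    # saturation/value lower bounds discard soil and dark shadows.
    lower = np.array([35, 60, 40], dtype=np.uint8)
    upper = np.array([85, 255, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)

if __name__ == "__main__":
    img = cv2.imread("some_cyberorto_image.jpg")  # placeholder path
    if img is not None:
        cv2.imwrite("label.png", vegetation_mask(img))
```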

## Model name

Model file names follow this pattern: `model_INPUTHEIGHTxINPUTWIDTH_DATASETVERSION_EPOCH.hd5`

- `INPUTHEIGHTxINPUTWIDTH` represents the size of the input image, e.g. `352x480`
- `DATASETVERSION` identifies the raw datasets used to create the training dataset:
  - `1`: ijrr_sugarbeets, synthetic_sugarbeat_random_weeds
  - `2`: ijrr_sugarbeets, synthetic_sugarbeat_random_weeds, cwfid
  - `3`: ijrr_sugarbeets, synthetic_sugarbeat_random_weeds, cwfid, cyberorto, ews
- `EPOCH` is the number of epochs the model was trained for, e.g. `10`
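For instance, `model_352x480_3_10.hd5` takes 352x480 inputs, was built from dataset version 3 and was trained for 10 epochs. Below is a hedged sketch of loading such a file directly with Keras and OpenCV; the preprocessing and output handling are assumptions, and the `crop_detection` module provides its own interface which may differ.

```python
# Sketch: parse a model file name, load the model and run it on one image.
# The preprocessing (scaling to [0, 1], BGR color order) and the shape of the
# output are assumptions, not the documented behaviour of crop_detection.
import re
import cv2
import numpy as np
import tensorflow as tf

model_path = "models/model_352x480_3_10.hd5"  # example name following the scheme above
height, width, dataset_version, epochs = map(
    int, re.search(r"model_(\d+)x(\d+)_(\d+)_(\d+)\.hd5$", model_path).groups()
)

model = tf.keras.models.load_model(model_path, compile=False)

img = cv2.imread("some_top_down_garden_photo.jpg")  # placeholder path
img = cv2.resize(img, (width, height))              # cv2.resize takes (width, height)
batch = np.expand_dims(img.astype(np.float32) / 255.0, axis=0)

prediction = model.predict(batch)[0]                # per-pixel scores for a segmentation model
print("output shape:", prediction.shape)
```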

## Using the converters

Choose the datasets you want to use from the list below, then download them and unpack them into the `datasets_raw/` subfolder (which you will need to create) in the repository root. Then run `python3 converters/converter_DATASET_NAME.py` to create (part of) a training dataset, with both `training/` and `validation/` images, in the `dataset/` subfolder.
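After running a converter, a quick way to check that something was generated (a sketch; it only assumes the `training/` and `validation/` split mentioned above):

```python
# Sketch: count what a converter generated under dataset/. The training/ and
# validation/ split comes from this README; deeper subfolder names are not assumed.
from pathlib import Path

dataset = Path("dataset")
for split in ("training", "validation"):
    files = [p for p in (dataset / split).rglob("*") if p.is_file()]
    print(f"dataset/{split}: {len(files)} files")
```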

## Sources

### Datasets

### Others (not used)

### Tutorials & techniques

This is the tutorial and model actually used: Segmentation tutorial 4 (custom keras)

Others (not used):

- SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
- SegNet tutorial 1 (custom caffe)
- SegNet tutorial 2 (keras)
- Segmentation tutorial 3 (keras-segmentation)
- Fast and Accurate Crop and Weed Identification with Summarized Train Sets for Precision Agriculture