Currently the repo is maintained by one person, so everyone is welcome to open issues and pull requests. Thank you for contributing to making this tool better.
A toy system demonstrating how to serve the crack segmentation model can be found here.
- Create a virtual environment with conda

```shell
conda create -n pv-vision python=3.10
conda activate pv-vision
```
- Install from source (recommended for the current beta version)

```shell
git clone https://github.com/hackingmaterials/pv-vision.git
cd pv-vision
pip install .
```
- Install from PyPI (alternative)

```shell
pip install pv-vision
```
- To enable CUDA and GPU acceleration, install PyTorch with cudatoolkit.
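As a quick sanity check after installing PyTorch with CUDA support, you can confirm from Python that the GPU is visible (this is a generic PyTorch check, not part of pv-vision):

```python
# Pick the GPU when CUDA is available, otherwise fall back to CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("PyTorch", torch.__version__, "running on", device)
```

On a CPU-only build this prints `cpu`; that is expected if you skipped the CUDA install.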
The PV-Vision package covers several topics in solar cell image analysis and is still expanding. We have published several papers on topics covered in this package; please cite them accordingly.
If your work is about automatic defect identification, please cite the following paper:
```bibtex
@article{chen2022automated,
  title={Automated defect identification in electroluminescence images of solar modules},
  author={Chen, Xin and Karin, Todd and Jain, Anubhav},
  journal={Solar Energy},
  volume={242},
  pages={20--29},
  year={2022},
  publisher={Elsevier}
}
```
If your work is about automatic crack segmentation and feature extraction, please cite the following paper:
```bibtex
% Crack segmentation paper
@article{chen2023automatic,
  title={Automatic Crack Segmentation and Feature Extraction in Electroluminescence Images of Solar Modules},
  author={Chen, Xin and Karin, Todd and Libby, Cara and Deceglie, Michael and Hacke, Peter and Silverman, Timothy J and Jain, Anubhav},
  journal={IEEE Journal of Photovoltaics},
  year={2023},
  publisher={IEEE}
}
```
We also published our dataset as a benchmark for crack segmentation. If you use our dataset, please cite the following:
```bibtex
% Crack segmentation dataset
@misc{chen2022benchmark,
  title={A Benchmark for Crack Segmentation in Electroluminescence Images},
  doi={10.21948/1871275},
  url={https://datahub.duramat.org/dataset/crack-segmentation},
  author={Chen, Xin and Karin, Todd and Libby, Cara and Deceglie, Michael and Hacke, Peter and Silverman, Timothy and Gabor, Andrew and Jain, Anubhav},
  year={2022}
}
```
In general, if you want to cite the PV-Vision package or this repository, please use the following BibTeX:
```bibtex
@misc{PV-Vision,
  doi={10.5281/ZENODO.6564508},
  url={https://github.com/hackingmaterials/pv-vision},
  author={Chen, Xin},
  title={pv-vision},
  year={2022},
  copyright={Open Access}
}
```
Examples of citing our work in LaTeX:

To enable the automatic analysis of EL images, an open-source package PV-VISION~\cite{PV-Vision} was developed.

Individual defects were located and classified using an object detection model in a previous work~\cite{chen2022automated}.

Cracks were segmented using a semantic segmentation model, and crack features such as isolated area or length were automatically extracted in a previous work~\cite{chen2023automatic}. The corresponding dataset was published as a benchmark~\cite{chen2022benchmark}.
This package allows you to analyze electroluminescence (EL) images of photovoltaic (PV) modules. The methods provided in this package include module transformation, cell segmentation, crack segmentation, defective cell identification, etc. Future work will include photoluminescence image analysis, image denoising, barrel distortion correction, etc.
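To build intuition for the module transformation step: correcting the perspective of a photographed module boils down to a planar homography. The sketch below is illustrative math only, not the pv-vision API, and the corner coordinates are made up; it solves for the 3x3 matrix mapping four detected module corners onto an upright rectangle.

```python
import numpy as np

def homography(src, dst):
    """Solve for H (with h33 = 1) such that each src corner maps to its dst corner."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Each point pair contributes two linear equations in the 8 unknowns of H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1).reshape(3, 3)

# Hypothetical skewed module corners mapped to a 400x200 upright rectangle.
src = [(10, 12), (390, 5), (398, 206), (6, 198)]
dst = [(0, 0), (400, 0), (400, 200), (0, 200)]
H = homography(src, dst)

p = H @ np.array([10, 12, 1.0])
print(p[:2] / p[2])  # the first corner maps to approximately (0, 0)
```

With exactly four point pairs the system is square, so the four corners map exactly; real pipelines typically estimate the corners with a segmentation model first, then warp every pixel with the resulting H.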
You can either use the `pv_vision` package and write your own code following the instructions in the tutorials, or directly run our `pipeline.sh` to perform automated defect identification. When `pipeline.sh` is used, the YOLO model is applied for prediction by default, and the output gives you the analysis from the model.
Our trained neural network models can be downloaded here.
Currently the model weights are:

- Folder `crack_segmentation`: predicts the pixels that belong to cracks, busbars, etc. using semantic segmentation.
- Folder `defect_detection`: performs object detection of defective cells.
- Folder `cell_classification`: performs cell classification.
- Folder `module_segmentation`: performs perspective transformation of solar module images using semantic segmentation; it predicts the contour of field module images.
Tutorials for using PV-Vision can be found in the `tutorials` folder. They cover perspective transformation, cell segmentation, model inference, and model output analysis.
We published one of our datasets as a benchmark for crack segmentation. Images and annotations can be found on the DuraMat Datahub.
There are three ways to deploy our deep learning models:

1. Check the tutorials for `modelhandler.py`. This tool allows you to train your own deep learning models.

```python
from pv_vision.nn import ModelHandler
```

2. Upload the model weights to Supervisely and make predictions on that website. The detailed tutorials can be found here and here.

3. Run the models using Docker.
First, make sure you have prepared the required files as shown in the folder structure below.

Then pull the images:

```shell
docker pull supervisely/nn-yolo-v3
docker pull supervisely/nn-unet-v2:6.0.26
```

You should see the two images by running:

```shell
docker image ls
```

Start the containers by running:

```shell
docker run -d --rm -it --runtime=nvidia -p 7000:5000 -v "$(pwd)/unet_model:/sly_task_data/model" --env GPU_DEVICE=0 supervisely/nn-unet-v2:6.0.26 python /workdir/src/rest_inference.py
docker run -d --rm -it --runtime=nvidia -p 5000:5000 -v "$(pwd)/yolo_model:/sly_task_data/model" --env GPU_DEVICE=0 supervisely/nn-yolo-v3 python /workdir/src/rest_inference.py
```
Here we deploy the UNet to port 7000 and the YOLO model to port 5000. The paths `$(pwd)/unet_model` and `$(pwd)/yolo_model` are where the model weights are stored. You can download them here.
Check whether the two containers are running successfully:

```shell
docker container ls
```
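Beyond `docker container ls`, a small script can confirm that the two services are actually listening on the ports mapped above (a generic TCP check, not part of pv-vision):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports as configured in the docker run commands above.
for name, port in [("unet", 7000), ("yolo", 5000)]:
    print(name, "listening:", port_open("localhost", port))
```

If either line prints `False`, check the container logs with `docker logs <container-id>` before running the pipeline.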
After you have deployed the models, run our pipeline script to get the predictions. Note that this pipeline was only designed for object detection and is not actively maintained at the moment; check our tutorials for how to do crack analysis.

```shell
bash pipeline.sh
```

You will find the predictions in a new folder `output`.
In general, your folder structure should look like the following. When starting the containers, you need to prepare `unet_model` and `yolo_model`. When running `pipeline.sh`, you only need to prepare `pipeline.sh`, `raw_images` (which stores the raw grayscale EL images), and `scripts` (where you configure the `metadata`) in the parent folder `PV-pipeline`. The `output` folder will be created after you run `pipeline.sh`.
PV-pipeline
├── unet_model
│ ├── config.json
│ └── model.pt
├── yolo_model
│ ├── config.json
│ ├── model.weights
│ └── model.cfg
├── pipeline.sh
├── raw_images
│ ├── img1.png
│ ├── img2.png
│ ├── img3.png
│ ├── img4.png
│ └── img5.png
├── scripts
│ ├── metadata
│ │ ├── defect_colors.json
│ │ └── defect_name.json
│ ├── collect_cell_issues.py
│ ├── highlight_defects.py
│ ├── move2folders.py
│ └── transform_module_v2.py
└── output
├── analysis
│ ├── cell_issues.csv
│ ├── classified_images
│ │ ├── category1
│ │ │ └── img1.png
│ │ ├── category2
│ │ │ ├── img4.png
│ │ │ └── img2.png
│ │ └── category3
│ │ ├── img3.png
│ │ └── img5.png
│ └── visualized_images
│ ├── img1.png
│ ├── img2.png
│ ├── img3.png
│ ├── img4.png
│ └── img5.png
├── transformation
│ ├── failed_images
│ └── transformed_images
│ ├── img1.png
│ ├── img2.png
│ ├── img3.png
│ ├── img4.png
│ └── img5.png
├── unet_ann
│ ├── img1.png.json
│ ├── img2.png.json
│ ├── img3.png.json
│ ├── img4.png.json
│ └── img5.png.json
└── yolo_ann
├── img1.png.json
├── img2.png.json
├── img3.png.json
├── img4.png.json
└── img5.png.json
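A small helper (hypothetical, not shipped with pv-vision) can verify this layout before launching the containers or the pipeline, so a missing weight or metadata file is caught early:

```python
from pathlib import Path

# Inputs from the folder structure above; output/ is created by pipeline.sh.
REQUIRED = [
    "unet_model/config.json",
    "unet_model/model.pt",
    "yolo_model/config.json",
    "yolo_model/model.weights",
    "yolo_model/model.cfg",
    "pipeline.sh",
    "raw_images",
    "scripts/metadata/defect_colors.json",
    "scripts/metadata/defect_name.json",
]

def missing_paths(root):
    """Return the required paths that do not exist under root."""
    root = Path(root)
    return [p for p in REQUIRED if not (root / p).exists()]

print(missing_paths("PV-pipeline"))  # [] means the layout is complete
```

An empty list means every required input is in place; anything listed needs to be prepared before `bash pipeline.sh`.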
- We will upload some EL images for users to practice on after we get approval from our data provider. (Done)
- We will improve the user experience of our tools and do more object-oriented programming (OOP) in a future version. (Done)
- We also developed algorithms for extracting cracks from solar cells. We will integrate these algorithms into PV-Vision. (Done)
- We want to predict the worst degradation amount based on the existing crack pattern. This will also be integrated into PV-Vision. (Done)
- Add neural network modules. (Done)
- Add result analysis. (Done)