
CIE XYZ Net: Unprocessing Images for Low-Level Computer Vision Tasks

Mahmoud Afifi, Abdelrahman Abdelhamed, Abdullah Abuolaim, Abhijith Punnappurath, and Michael S. Brown

York University

Reference code for the paper CIE XYZ Net: Unprocessing Images for Low-Level Computer Vision Tasks. Mahmoud Afifi, Abdelrahman Abdelhamed, Abdullah Abuolaim, Abhijith Punnappurath, and Michael S. Brown, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021. If you use this code or our dataset, please cite our paper:

@article{CIEXYZNet,
  title={CIE XYZ Net: Unprocessing Images for Low-Level Computer Vision Tasks},
  author={Afifi, Mahmoud and Abdelhamed, Abdelrahman and Abuolaim, Abdullah and Punnappurath, Abhijith and Brown, Michael S},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  pages={},
  year={2021}
}

Code (MIT License)

(Figure: network design)

PyTorch

Prerequisites

  1. Python 3.6
  2. opencv-python
  3. pytorch (tested with 1.5.0)
  4. torchvision (tested with 0.6.0)
  5. cudatoolkit
  6. tensorboard (optional)
  7. numpy
  8. future
  9. tqdm
  10. matplotlib
The code may work with library versions other than those specified.

Get Started

Demos:

  1. Run demo_single_image.py or demo_images.py to convert from sRGB to XYZ and back. You can change the task to run only the forward or only the inverse network.
  2. Run demo_single_image_with_operators.py or demo_images_with_operators.py to apply one or more operators to the intermediate layers/images. The operator code should be located in the pp_code directory; replace the code in pp_code/postprocessing.py with your own operator code (see the sketch after this list).
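
For illustration only, here is a minimal sketch of the kind of operator you might drop into the post-processing hook: a simple Gaussian denoiser applied to the intermediate image. The function name and signature below are assumptions, not the repo's actual interface; match them to the hook defined in pp_code/postprocessing.py.

```python
# Hypothetical post-processing operator: a simple Gaussian denoiser applied to
# the intermediate image. The function name and signature are assumptions for
# illustration; adapt them to the hook expected by pp_code/postprocessing.py.
import cv2
import numpy as np

def postprocessing(image: np.ndarray) -> np.ndarray:
    """Denoise an H x W x 3 float image in [0, 1] and return the same shape/range."""
    img = image.astype(np.float32)
    denoised = cv2.GaussianBlur(img, (5, 5), 1.0)
    return np.clip(denoised, 0.0, 1.0)
```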

Training Code:

Run train.py to re-train our network. You will need to adjust the training/validation directories accordingly.
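
If you reorganize the data, a paired sRGB/XYZ dataset along the lines below may serve as a starting point. This is a generic sketch under an assumed directory layout and file format (matching file names in two sibling folders, 16-bit PNG targets), not the loader shipped with train.py.

```python
# Generic sketch of a paired sRGB/XYZ dataset, assuming matching file names in
# two sibling folders. This is NOT the loader used by train.py; it only
# illustrates the pairing the training/validation directories need to provide.
import os
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset

class PairedSRGBXYZ(Dataset):
    def __init__(self, srgb_dir: str, xyz_dir: str):
        self.srgb_dir, self.xyz_dir = srgb_dir, xyz_dir
        self.names = sorted(os.listdir(srgb_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        # 8-bit camera-rendered sRGB input, scaled to [0, 1].
        srgb = cv2.cvtColor(cv2.imread(os.path.join(self.srgb_dir, name)),
                            cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        # Scene-referred CIE XYZ target; 16-bit PNG storage is assumed here.
        xyz = cv2.cvtColor(cv2.imread(os.path.join(self.xyz_dir, name),
                                      cv2.IMREAD_UNCHANGED),
                           cv2.COLOR_BGR2RGB).astype(np.float32) / 65535.0
        to_tensor = lambda a: torch.from_numpy(a.transpose(2, 0, 1).copy())
        return to_tensor(srgb), to_tensor(xyz)
```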

Note:

All experiments reported in the paper used the Matlab version of CIE XYZ Net. The PyTorch code/model is provided to make it easier to use our framework with PyTorch, but there is no guarantee that the PyTorch version reproduces exactly the reconstruction/rendering results reported in the paper.



Matlab

Prerequisites

  1. Matlab 2019b or higher
  2. Deep Learning Toolbox

Get Started

Run install_.m.

Demos:

  1. Run demo_single_image.m or demo_images.m to convert from sRGB to XYZ and back. You can change the task to run only the forward or only the inverse network.
  2. Run demo_single_image_with_operators.m or demo_images_with_operators.m to apply one or more operators to the intermediate layers/images. The operator code should be located in the pp_code directory; replace the code in pp_code/postprocessing.m with your own operator code.

Training Code:

Run training.m to re-train our network. You will need to adjust the training/validation directories accordingly.

sRGB2XYZ Dataset

(Figure: sample pairs from the sRGB2XYZ dataset)

Our sRGB2XYZ dataset contains 1,265 pairs of camera-rendered sRGB images and their corresponding scene-referred CIE XYZ images (971 training, 50 validation, and 244 testing pairs).

Training set (11.1 GB): Part 0 | Part 1 | Part 2 | Part 3 | Part 4 | Part 5

Validation set (570 MB): Part 0

Testing set (2.83 GB): Part 0 | Part 1
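
To sanity-check a downloaded pair, you can map a scene-referred XYZ image to display sRGB with the standard D65 XYZ-to-sRGB matrix and sRGB gamma encoding, as sketched below. The file name and 16-bit PNG storage are assumptions for illustration; this is a generic colorimetric mapping for visualization, not the network's learned camera rendering.

```python
# Quick visualization of a scene-referred CIE XYZ image: convert it to display
# sRGB with the standard D65 XYZ -> linear-sRGB matrix and sRGB gamma encoding.
# The file path and 16-bit PNG assumption are illustrative only.
import cv2
import numpy as np

XYZ2RGB = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                    [-0.9692660,  1.8760108,  0.0415560],
                    [ 0.0556434, -0.2040259,  1.0572252]], dtype=np.float32)

xyz = cv2.imread('example_XYZ.png', cv2.IMREAD_UNCHANGED)   # hypothetical file
xyz = cv2.cvtColor(xyz, cv2.COLOR_BGR2RGB).astype(np.float32) / 65535.0

linear = np.clip(xyz @ XYZ2RGB.T, 0.0, 1.0)                 # XYZ -> linear sRGB
srgb = np.where(linear <= 0.0031308,                        # sRGB gamma encoding
                12.92 * linear,
                1.055 * np.power(linear, 1 / 2.4) - 0.055)

cv2.imwrite('example_sRGB_preview.png',
            cv2.cvtColor((srgb * 255).astype(np.uint8), cv2.COLOR_RGB2BGR))
```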

Dataset License:

As the dataset was originally rendered using raw images taken from the MIT-Adobe FiveK dataset, our sRGB2XYZ dataset follows the original license of the MIT-Adobe FiveK dataset.
