.. _sec_attribute:

===============================================
Discriminative attribution from counterfactuals
===============================================

Now that we have our **generated** images, we will refine them into **counterfactuals** using discriminative attribution.
Remember that although the conversion network is trained to keep as much of the image fixed as possible, it is not perfect.
This means that there may still be regions of the **generated** image that differ from the **query** image, *even if they don't need to*.
Luckily, we have a classifier that can help us identify and keep only the necessary regions of change.

The first thing we want to do is load the classifier.

.. code-block:: python
    :linenos:

    from quac.generate import load_classifier

    # Load the classifier from its training checkpoint
    classifier_checkpoint = "path/to/classifier/checkpoint"
    classifier = load_classifier(
        checkpoint_path=classifier_checkpoint
    )
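
Depending on how the checkpoint was saved, the returned classifier may still be in training mode. If `load_classifier` does not already handle this for you, a minimal, optional sketch follows; it assumes the classifier behaves like a standard `torch.nn.Module` and takes single-channel 128x128 inputs (matching the transforms defined later in this tutorial).

.. code-block:: python
    :linenos:

    import torch

    # Assumption: the classifier is a regular torch.nn.Module.
    classifier.eval()  # run dropout / batch-norm in inference mode

    # Quick smoke test with a dummy grayscale 128x128 input.
    with torch.no_grad():
        logits = classifier(torch.zeros(1, 1, 128, 128))
    print(logits.shape)  # (1, number_of_classes)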

Next, we will define the attributions that we want to use.
In this tutorial, we will use Discriminative Integrated Gradients, which uses the **generated** image as a baseline.
As a comparison, we will also use Vanilla Integrated Gradients, which uses a black image as a baseline.
This will allow us to identify the regions of the image that are most important for the classifier's decision.
Later in the :doc:`evaluation <evaluate>` tutorial, we will process these attributions into masks, and finally get our counterfactuals.
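
To make the difference between the two baselines concrete, here is a rough sketch of the underlying idea written directly against `captum`. This is *not* the QuAC implementation: `query_image`, `generated_image`, and `target_class` are placeholder names for a single preprocessed image pair and the class of the conversion.

.. code-block:: python
    :linenos:

    import torch
    from captum.attr import IntegratedGradients

    ig = IntegratedGradients(classifier)

    # Vanilla IG: integrate gradients along a path from a black image to the query.
    vanilla_attr = ig.attribute(
        query_image.unsqueeze(0),  # shape (1, C, H, W)
        baselines=torch.zeros_like(query_image).unsqueeze(0),
        target=target_class,
    )

    # Discriminative IG: integrate along a path from the generated image to the
    # query, highlighting only the pixels that change the classifier's decision.
    discriminative_attr = ig.attribute(
        query_image.unsqueeze(0),
        baselines=generated_image.unsqueeze(0),
        target=target_class,
    )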

.. code-block:: python
    :linenos:

    # Parameters
    attribution_directory = "path/to/store/attributions"

    # Defining attributions
    from quac.attribution import (
        DIntegratedGradients,
        VanillaIntegratedGradients,
        AttributionIO
    )
    from torchvision import transforms

    attributor = AttributionIO(
        attributions={
            "discriminative_ig": DIntegratedGradients(classifier),
            "vanilla_ig": VanillaIntegratedGradients(classifier),
        },
        output_directory=attribution_directory,
    )

Next, we want to make sure that the images are processed in the same way the classifier expects.
Here, we simply define a set of `torchvision` transforms and pass them to the `attributor` object.
Keep in mind that if you processed your data in a certain way when training your classifier, you will need to use the same processing here.

.. code-block:: python
    :linenos:

    transform = transforms.Compose(
        [
            transforms.ToTensor(),
            transforms.Grayscale(),
            transforms.Resize(128),
            transforms.Normalize(0.5, 0.5),
        ]
    )
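
If you want to double-check that this matches what the classifier saw during training, you can run a single image through the transform and inspect the result. A small sketch; the image path is a hypothetical example, and the expected single-channel, 128-pixel output follows from the transform defined above.

.. code-block:: python
    :linenos:

    from PIL import Image

    # Hypothetical example image; replace with one of your query images.
    example = Image.open("path/to/data/directory/class_a/example.png")
    tensor = transform(example)

    # Expect one channel, the shorter side resized to 128 pixels,
    # and values roughly in [-1, 1] after Normalize(0.5, 0.5).
    print(tensor.shape)
    print(tensor.min(), tensor.max())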

Finally, let's run the attributions.

.. code-block:: python
    :linenos:

    data_directory = "path/to/data/directory"
    counterfactual_directory = "path/to/counterfactual/directory"

    # This will run attributions and store all of the results in the output_directory
    # Shows a progress bar
    attributor.run(
        source_directory=data_directory,
        counterfactual_directory=counterfactual_directory,
        transform=transform
    )

If you look into the `attribution_directory`, you should see a set of attributions.
They will be organized in the following way:

.. code-block:: bash

    attribution_directory/
        attribution_method_name/
            source_class/
                target_class/
                    image_name.npy
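
Each `.npy` file holds the raw attribution for one query/generated image pair. As a quick sanity check, you can load one and look at it as a heatmap. This is only an illustrative sketch: the path below is an example that mirrors the layout above, with the method folder assumed to be named after the key used when building the `AttributionIO` object.

.. code-block:: python
    :linenos:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical example path; substitute any file from your attribution directory.
    attribution = np.load(
        "path/to/store/attributions/discriminative_ig/source_class/target_class/image_name.npy"
    )

    # Collapse the channel dimension into a single heatmap, if there is one.
    heatmap = np.abs(attribution)
    if heatmap.ndim == 3:
        heatmap = heatmap.sum(axis=0)

    plt.imshow(heatmap, cmap="magma")
    plt.colorbar()
    plt.show()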

In the next tutorial, we will use these attributions to generate masks and finally get our counterfactuals.