Deviation in outputs of nodes between TFLM and Tflite (Python) #3046
Comments
The deviation is significant enough that it affects the final result. I have given it the correct input and randomly generated numbers too; both result in the same output. To verify that the deviation is what causes this issue, I extracted the inputs for the affected node from the Python run as a C array. I then used this C array to manually override that node's inputs in TFLM. This actually gave me the correct final output!
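A minimal sketch of how such a C array could be produced from the Python TFLite run (the model path and tensor index are placeholders, and experimental_preserve_all_tensors is assumed to be available in the installed TensorFlow version):

```python
# Sketch: dump an intermediate tensor from the Python TFLite run as a C array
# that can be pasted into the TFLM firmware to override a node's input.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v2_int8.tflite",   # placeholder file name
    experimental_preserve_all_tensors=True,  # keep intermediate tensors readable
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))  # or the real test image
interpreter.invoke()

TENSOR_IDX = 42  # placeholder: index of the tensor feeding the node under test
values = interpreter.get_tensor(TENSOR_IDX).flatten()

# Emit a C array; on the device this buffer can be copied over the node's
# input before Invoke() to check whether the deviation originates upstream.
print(f"const int8_t g_node_input[{values.size}] = {{")
print(", ".join(str(int(v)) for v in values))
print("};")
```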
Node 11 was indeed not the first node with deviation. Attaching results from my analysis.
Note:- This issue is a continuation of a previously filed issue (#3039), opened after more research and after identifying further problems.
I am trying to run an int8 quantized MobileNetV2 model on an ESP32S3-Eye but am running into multiple issues.
I will use this thread to explain the output-deviation issue; the previously linked thread explains the issues with allocation of the Softmax node and the incorrect value of kBeta.
As the model kept giving incorrect results, I decided to look at the output values of every node in my model, and I made a very interesting observation: at node no. 11 (a DepthwiseConv2D node) a few values differed between my Python and TFLM runs.
Now, I'm only printing the first 100 values of each output, so it cannot be said that the deviation starts at this node; it may very well have started earlier and only become visible within the first 100 values at this node.
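For context, this is roughly how such a per-node dump can be produced on the Python side (the model path is a placeholder; intermediate tensors are only readable when experimental_preserve_all_tensors is enabled):

```python
# Sketch: print the first 100 values of every tensor after one inference so the
# Python run can be compared node by node against the values printed by TFLM.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v2_int8.tflite",   # placeholder file name
    experimental_preserve_all_tensors=True,
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))  # or the test image
interpreter.invoke()

for t in interpreter.get_tensor_details():
    try:
        first_100 = interpreter.get_tensor(t["index"]).flatten()[:100]
    except ValueError:
        continue  # some tensors (e.g. temporaries) cannot be read back
    print(t["index"], t["name"], first_100.tolist())
```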
Proof:-
At the top are the values fed into the interpreter's input tensor (only the first 100); at the bottom are the first 100 values of the output of node 11.
Similarly, the top is the input and the bottom is the output of node 11.
Here I have also printed the final predictions I'm getting, which are correct.
These differences might look very small but they add up quickly. At node 62 (Mean) I have the following outputs:-

Left: TFLM
Right: Python
Notice the increase in the number of deviations and also the widening gap.
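A small sketch of how the two dumps can be compared numerically (the file names and the plain whitespace-separated format are assumptions about how the printed values were saved):

```python
# Sketch: quantify the deviation between the TFLM and Python dumps of one node.
import numpy as np

tflm = np.loadtxt("node62_tflm.txt").astype(np.int32)    # placeholder dump files with
ref  = np.loadtxt("node62_python.txt").astype(np.int32)  # whitespace-separated values

diff = np.abs(tflm - ref)
print("mismatched values:", int(np.count_nonzero(diff)), "of", diff.size)
print("max abs deviation :", int(diff.max()))
print("mean abs deviation:", float(diff.mean()))
```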
Additional Info
While I am using esp-tflite-micro, which replaces the TFLM kernels with its own esp-nn-based implementations and drastically improves performance, the issue still persists after disabling all optimizations and custom kernels. I have uploaded my code (esp-idf and Python) along with the test image and model on GitHub.
Check it here: https://github.com/ShardulNalegave/esp-mbnetv2-test
Note:- The linked GitHub repo includes the kBeta override fix mentioned in #3039.