Screenshot failure not failing on CI #222

Open
joekrump opened this issue Feb 28, 2025 · 9 comments
@joekrump
Contributor

I have a screenshot failure that I get when I run my tests in my Docker container on my local machine, and it's a legitimate failure; something in the UI changed. However, when the test is run on CI, it does not report any failures, even though I can see that it outputs an "actual" and a "diff" file for the .toMatchImageSnapshot() expectation that fails. So it is recognizing that something is different, but Vitest isn't reporting a failure for some reason.

Is there any sort of special treatment of toMatchImageSnapshot() failures based on environment variables or something like that? That's the only thing I can think of. Maybe something based on whether CI is set or not?
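
To make the suspicion concrete, this is the kind of CI-dependent behaviour I mean. It is purely illustrative and not taken from the plugin's source; many snapshot tools branch on process.env.CI like this, e.g. to refuse writing new baselines on CI:

```ts
// Illustrative sketch only (not the plugin's source): a hypothetical guard that
// changes snapshot behaviour when the CI environment variable is set.
const isCI = !!process.env.CI && process.env.CI !== "false";

function onMissingBaseline(name: string): void {
  if (isCI) {
    // On CI, a missing baseline is an error rather than a freshly written snapshot.
    throw new Error(`Missing baseline for "${name}"; baselines are not created on CI.`);
  }
  // Locally, the tool would write the new baseline instead of failing.
}
```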

@unional
Contributor

unional commented Feb 28, 2025

Did you await the call?
It is an async operation.
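
For reference, a minimal sketch of what I mean, assuming a Vitest browser-mode test (the imports and test ID are illustrative, and the matcher registration is omitted):

```ts
import { expect, test } from "vitest";
import { page } from "@vitest/browser/context";

test("notes container matches the baseline", async () => {
  // Awaited: a mismatch rejects here and the test fails.
  await expect(page.getByTestId("notes-container").element()).toMatchImageSnapshot();

  // Not awaited: the comparison still runs and can still write the diff file,
  // but the rejection may land outside the test and never be reported as a failure.
  // expect(page.getByTestId("notes-container").element()).toMatchImageSnapshot();
});
```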

@joekrump
Contributor Author

joekrump commented Feb 28, 2025

Did you await the call? It is an async operation.

Yes. It is awaited and fails 💯 of the time when running in my container locally, and even when running npx vitest on my local machine directly. It just does not result in a failure on CI. It's super strange, and the only thing I've been able to think of that could be different is some environment variable.

On CI it is simply as though the diff isn't treated as a failure for some reason: the file in the __diff__ dir is there and shows the diff I'm expecting, it just doesn't fail the expectation.

@unional
Contributor

unional commented Feb 28, 2025

That's weird. All it does is just throw an error. Do you have a link to the CI job that passes?

This may be a question the Vitest team can help with.
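
For reference, the failure path is nothing exotic; a stripped-down sketch (not the plugin's actual implementation) of an async matcher whose rejection Vitest should report as a normal test failure:

```ts
import { expect } from "vitest";

// Hypothetical stand-in for the real pixel comparison.
async function imagesMatch(_received: unknown): Promise<boolean> {
  return false;
}

expect.extend({
  // Stripped-down sketch: an async matcher returning { pass, message }.
  // When pass is false, the awaited expectation rejects and the test should fail.
  async toMatchImageSnapshotSketch(received: unknown) {
    const pass = await imagesMatch(received);
    return {
      pass,
      message: () => "screenshot differs from the stored baseline (see __diff__)",
    };
  },
});
```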

@joekrump
Contributor Author

That's weird. All it does is just throw an error. Do you have a link to the CI job that passes?

This may be a question the Vitest team can help with.

So here's the relevant bit from the test:

When the last line of the test block (it block) is this, and the screenshots don't match, the test still passes:

await expect(page.getByTestId("notes-container").element()).toMatchImageSnapshot();

When I update the test so that there is a failure on the line after the screenshot, it fails:

await expect(page.getByTestId("notes-container").element()).toMatchImageSnapshot();
expect(false).toBe(true); // <- intentional failure

When I add the intentionally failing expectation after the screenshot assertion, the screenshot assertion failure then shows up for me on CI:

[Screenshot: CI output showing the screenshot assertion failure reported once the extra failing expectation is added]
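
As a stop-gap, the same effect can be made explicit by catching the rejection and re-asserting it synchronously. This is only a sketch, not the fix we actually use, and it assumes the same page/expect setup as the snippets above; it would replace the last line of the it block:

```ts
// Stop-gap sketch: surface the snapshot mismatch through a synchronous
// expectation so it cannot be silently dropped on CI.
let snapshotError: unknown;
try {
  await expect(page.getByTestId("notes-container").element()).toMatchImageSnapshot();
} catch (error) {
  snapshotError = error;
}
expect(snapshotError).toBeUndefined();
```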

@unional
Contributor

unional commented Feb 28, 2025

Which version of Vite and Vitest are you using?

I notice the sourcemap is also not correct. I suspect there is a versioning issue. Check whether you have multiple versions of Vite and Vitest installed as well.

@joekrump
Contributor Author

Which version of Vite and Vitest are you using?

I notice the sourcemap is also not correct. I suspect there is a versioning issue. Check whether you have multiple versions of Vite and Vitest installed as well.

Vite is v5.4.7 and Vitest is v3.0.5.

[Screenshots showing the installed Vite and Vitest versions]

@joekrump
Contributor Author

I notice the sourcemap is also not correct.

Yeah, that has been a real pain, and I haven't nailed down exactly where the problem is, as sourcemaps work correctly for our app in prod. Something is configured slightly differently for Vitest, and I haven't pinned it down yet.

@unional
Contributor

unional commented Mar 12, 2025

@joekrump are you still facing this issue?

@joekrump
Contributor Author

@joekrump are you still facing this issue?

Yes. I have not been able to put more time into investigating the root cause, so as of right now I've been running all the tests locally to keep things up to date and catch errors. It's not practical for my team, but it's been working okay as a stop-gap solution for now.
