[llvm][lit] Update regexes in Xunit test #126527
Conversation
I got a report that downstream this test failed, and the cause was that it took longer than the 1 second we expected to run one of the test cases. There's no reason for this test to be that specific, so I've changed all the `[0-1]` to `[0-9]+`. This test does not care whether the times make any sense, only that they exist. A couple of existing regexes here make the decimal point optional, and on my machine I always seem to get `.00`, so I think they always find a decimal point anyway. I've left them in just on the slim chance that some platform prints the time as a single number when it is a whole number of seconds.
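A minimal sketch (not part of the patch) of why the old pattern was too strict: FileCheck treats the `{{...}}` blocks in this test as regular expressions, and the character classes involved behave the same way in Python's `re` module, so the difference can be shown directly. The sample time values below are made up for illustration.

```python
import re

# Old pattern from the test: only times below 2 seconds can match.
old_pattern = re.compile(r"[0-1]\.[0-9]+")
# New pattern: one or more digits, a decimal point, one or more digits.
new_pattern = re.compile(r"[0-9]+\.[0-9]+")

# Illustrative time values; 2.31 is the kind of value that made the test fail.
for sample in ["0.05", "1.00", "2.31", "12.00"]:
    print(
        f"{sample}: old={bool(old_pattern.fullmatch(sample))} "
        f"new={bool(new_pattern.fullmatch(sample))}"
    )

# Output:
# 0.05: old=True new=True
# 1.00: old=True new=True
# 2.31: old=False new=True
# 12.00: old=False new=True
```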
@llvm/pr-subscribers-testing-tools

Author: David Spickett (DavidSpickett)

Changes

I got a report that downstream this test failed, and the cause was that it took longer than the 1 second we expected to run one of the test cases. There's no reason for this test to be that specific, so I've changed all the `[0-1]` to `[0-9]+`. This test does not care whether the times make any sense, only that they exist. A couple of existing regexes here make the decimal point optional, and on my machine I always seem to get `.00`, so I think they always find a decimal point anyway. I've left them in just on the slim chance that some platform prints the time as a single number when it is a whole number of seconds.

Full diff: https://github.com/llvm/llvm-project/pull/126527.diff

1 Files Affected:
diff --git a/llvm/utils/lit/tests/xunit-output.py b/llvm/utils/lit/tests/xunit-output.py
index 392cded4653fe1..35e092e9e4e555 100644
--- a/llvm/utils/lit/tests/xunit-output.py
+++ b/llvm/utils/lit/tests/xunit-output.py
@@ -10,17 +10,17 @@
# CHECK: <?xml version="1.0" encoding="UTF-8"?>
# CHECK-NEXT: <testsuites time="{{[0-9.]+}}">
# CHECK-NEXT: <testsuite name="test-data" tests="5" failures="1" skipped="3" time="{{[0-9.]+}}">
-# CHECK-NEXT: <testcase classname="test-data.test-data" name="bad&name.ini" time="{{[0-1]\.[0-9]+}}">
+# CHECK-NEXT: <testcase classname="test-data.test-data" name="bad&name.ini" time="{{[0-9]+\.[0-9]+}}">
# CHECK-NEXT: <failure><![CDATA[& < > ]]]]><![CDATA[> &"]]></failure>
# CHECK-NEXT: </testcase>
-# CHECK-NEXT: <testcase classname="test-data.test-data" name="excluded.ini" time="{{[0-1]\.[0-9]+}}">
+# CHECK-NEXT: <testcase classname="test-data.test-data" name="excluded.ini" time="{{[0-9]+\.[0-9]+}}">
# CHECK-NEXT: <skipped message="Test not selected (--filter, --max-tests)"/>
# CHECK-NEXT: </testcase>
-# CHECK-NEXT: <testcase classname="test-data.test-data" name="missing_feature.ini" time="{{[0-1]\.[0-9]+}}">
+# CHECK-NEXT: <testcase classname="test-data.test-data" name="missing_feature.ini" time="{{[0-9]+\.[0-9]+}}">
# CHECK-NEXT: <skipped message="Missing required feature(s): dummy_feature"/>
# CHECK-NEXT: </testcase>
-# CHECK-NEXT: <testcase classname="test-data.test-data" name="pass.ini" time="{{[0-1]\.[0-9]+}}"/>
-# CHECK-NEXT: <testcase classname="test-data.test-data" name="unsupported.ini" time="{{[0-1]\.[0-9]+}}">
+# CHECK-NEXT: <testcase classname="test-data.test-data" name="pass.ini" time="{{[0-9]+\.[0-9]+}}"/>
+# CHECK-NEXT: <testcase classname="test-data.test-data" name="unsupported.ini" time="{{[0-9]+\.[0-9]+}}">
# CHECK-NEXT: <skipped message="Unsupported configuration"/>
# CHECK-NEXT: </testcase>
# CHECK-NEXT: </testsuite>
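If you want to sanity-check the updated pattern against a full line of XUnit output rather than just the time token, a rough sketch follows; the sample line is hypothetical but has the same shape as the `<testcase>` elements in the CHECK lines above.

```python
import re

# Hypothetical line in the shape lit writes to the XUnit report; the 2.47
# second time is the kind of value the old [0-1]\. pattern rejected.
sample = '<testcase classname="test-data.test-data" name="pass.ini" time="2.47"/>'

# Time attribute regex matching the updated CHECK lines.
time_attr = re.compile(r'time="([0-9]+\.[0-9]+)"')

match = time_attr.search(sample)
assert match is not None, "time attribute missing or formatted unexpectedly"
print(f"matched time: {match.group(1)} seconds")  # matched time: 2.47 seconds
```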
Could you extend the timeout in max-time.py too? It's also failing for me on some very loaded CI machines.
Part of my advice here is to not overload the machine, but I will look at that test. For example, if you have multiple CI jobs on the same machine and they all compete for the same cores, timing assumptions like this become unreliable. If I change that test I'll make a new PR for it.
Thanks, that looks more consistent!
I got a report that downstream this test failed, and the cause was that it took longer than the 1 second we expected to run one of the test cases. This test doesn't need to be that specific, so I am updating all the time regexes to the same pattern: one or more digits, a required decimal point, then one or more digits.
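One consequence of the pattern described above (assuming the `[0-9]+\.[0-9]+` form from the diff) is that the decimal point is mandatory, so a platform that printed a whole number of seconds with no fractional part would still need the optional-decimal variant mentioned earlier. A small illustration:

```python
import re

time_re = re.compile(r"[0-9]+\.[0-9]+")

print(bool(time_re.fullmatch("3.00")))  # True: normally formatted time
print(bool(time_re.fullmatch("12.5")))  # True: times past the old 2 second limit
print(bool(time_re.fullmatch("3")))     # False: a bare whole number of seconds
                                        # would not match this pattern
```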