We'd like to stamp out all the tests that have ordering dependencies. This helps make the tests more reliable and, eventually, will make it so we can run tests in a random order and avoid new ordering dependencies being introduced. To get there, we need to weed out and fix all the existing ordering dependencies.
These are steps for diagnosing ordering flakiness once you have a test that you believe depends on an earlier test running.
1. Run `./tools/print_web_test_ordering.py` and save the output to a file. This outputs the tests run in the order they were run on each content_shell instance.
2. Run `./tools/bisect_web_test_ordering.py --test-list=path/to/file/from/step/1`, pointing `--test-list` at the file you saved in step 1 (see the sketch after this list).
3. At the end, `bisect_web_test_ordering.py` should spit out the list of tests that causes the test in question to fail.
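As a concrete sketch of that workflow: the output path below is a placeholder, and the redirect assumes `print_web_test_ordering.py` writes its report to stdout.

```bash
# Reproduce the flake, then capture the per-instance test ordering.
./tools/print_web_test_ordering.py > /tmp/web_test_ordering.txt

# Bisect that ordering down to the tests that make the flaky test fail.
./tools/bisect_web_test_ordering.py --test-list=/tmp/web_test_ordering.txt
```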
Instead of running `bisect_web_test_ordering.py`, you can do the bisection manually:

1. Run `run_web_tests.py --child-processes=1 --order=none --test-list=path/to/file/from/step/1` (the same ordering file you saved above).
2. If the test in question still fails, delete lines from the file and re-run (as sketched below) until you are left with the smallest set of tests that reproduces the failure.
3. For a test that only passes because an earlier test ran, run `run_web_tests.py --order=natural` and repeat this process to find which test causes the test in question to pass (e.g. crbug.com/262793).
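A sketch of one manual round, reusing the same placeholder ordering file as above:

```bash
# Run the listed tests on a single content_shell worker, keeping the order
# from the saved file (--order=none skips re-sorting).
run_web_tests.py --child-processes=1 --order=none \
    --test-list=/tmp/web_test_ordering.txt

# If the flaky test still fails, delete some earlier tests from
# /tmp/web_test_ordering.txt and run again; keep going until only the
# test(s) it depends on remain.
```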
You can also hunt for new ordering dependencies across the whole suite:

1. Run `run_web_tests.py --order=random --no-retry`.
2. Run `./tools/print_web_test_ordering.py` and save the output to a file. This outputs the tests run in the order they were run on each content_shell instance, so any failures from the randomized run can be reproduced and bisected with the steps above.
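For example (the file path and test name are placeholders, and the `grep` assumes the saved ordering lists tests one per line, grouped by content_shell instance; adjust to the actual output format):

```bash
# Shake out ordering dependencies with a randomized run, then record the
# order that was actually used so failures can be reproduced and bisected.
run_web_tests.py --order=random --no-retry
./tools/print_web_test_ordering.py > /tmp/random_ordering.txt

# See which tests ran shortly before a newly failing test on its worker.
grep -B 20 'fast/dom/example.html' /tmp/random_ordering.txt
```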
You can also run `run_web_tests.py --run-singly --no-retry`. This starts up a new content_shell instance for each test. Tests that fail when run in isolation but pass when run as part of the full test suite represent some state that we're not properly resetting between test runs or some state that we're not properly setting when starting up content_shell. You might want to run with `--timeout-ms=60000` to weed out tests that time out due to content_shell startup time.
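A minimal sketch combining those flags:

```bash
# One content_shell per test, no retries, and a generous per-test timeout so
# content_shell startup time doesn't show up as spurious timeouts.
run_web_tests.py --run-singly --no-retry --timeout-ms=60000
```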