Run CI tests with max processes #7254
📝 Walkthrough

The CI workflow now separates handling of flaky multiprocessing tests from regular tests. It introduces core detection to enable parallel test execution, runs the flaky MP tests with a single core first, and then runs the standard tests with the detected core count while skipping the flaky tests.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 3 passed
Force-pushed from 9ad8f9f to 7f1ca40.
`.github/workflows/ci.yaml` (outdated diff):
```yaml
  test_unpack_ex
  test_weakref
  test_yield_from
# Tests that can be flaky when running with multiple processes `-j 2`. These tests will be ran using `-j 1`
```
@youknowone can you please let me know what you think about this change?
Out of curiosity, how is this different from marking those tests with, say, @support.thread_unsafe("TODO: RUSTPYTHON, blah blah blah...")?
> Out of curiosity, how is this different from marking those tests with, say, `@support.thread_unsafe("TODO: RUSTPYTHON, blah blah blah...")`?
That's a good point. If this is the approach we're going to take, then we also need to add support for it in lib_updater; at the moment it only looks at unittest patches.
Force-pushed from 314686f to ea0d06e.
What's the option CPython is using?
CPython uses max processes for its CI (from what I can tell).
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yaml:
- Around line 243-249: The "Detect available cores" step is calling a
non-existent Python function os.process_cpu_count(); update the step (id: cores)
to avoid Python and use bash instead: detect CPU count with a POSIX-safe command
such as nproc or getconf _NPROCESSORS_ONLN (falling back to 2 if detection
fails), assign the result to the cores variable and then echo "cores=${cores}"
>> $GITHUB_OUTPUT; ensure the step retains shell: bash and preserves the output
key name and id (cores) so downstream jobs still read the value.
📒 Files selected for processing (1): `.github/workflows/ci.yaml`
The step under review in `.github/workflows/ci.yaml`:

```yaml
- name: Detect available cores
  id: cores
  shell: bash
  run: |
    cores=$(python -c 'print(__import__("os").process_cpu_count())')
    echo "cores=${cores}" >> $GITHUB_OUTPUT
```

(The same hunk also shows the adjacent test step: `target/release/rustpython -m test -j 1 -u all --slowest --fail-env-changed --timeout 600 -v ${{ env.PLATFORM_INDEPENDENT_TESTS }}`, `timeout-minutes: 45`.)
Fix fragile function call in core detection.
Line 247 calls os.process_cpu_count(), which only exists on Python 3.13+ and can return None, so the step can fail at runtime on runners with an older system Python. Shelling out to Python for CI orchestration also adds unnecessary complexity; use bash directly with fallbacks.
🛠️ Suggested fix

```diff
 - name: Detect available cores
   id: cores
   shell: bash
   run: |
-    cores=$(python -c 'print(__import__("os").process_cpu_count())')
-    echo "cores=${cores}" >> $GITHUB_OUTPUT
+    cores="${NUMBER_OF_PROCESSORS:-}"
+    if [ -z "${cores}" ]; then
+      cores="$(getconf _NPROCESSORS_ONLN 2>/dev/null || nproc 2>/dev/null || echo 1)"
+    fi
+    echo "cores=${cores}" >> "$GITHUB_OUTPUT"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: Detect available cores
  id: cores
  shell: bash
  run: |
    cores="${NUMBER_OF_PROCESSORS:-}"
    if [ -z "${cores}" ]; then
      cores="$(getconf _NPROCESSORS_ONLN 2>/dev/null || nproc 2>/dev/null || echo 1)"
    fi
    echo "cores=${cores}" >> "$GITHUB_OUTPUT"
```
Recently our CI has started to take around an hour; this change will make it faster at the cost of stability. I want to see how much faster, and how stable, this makes the CI.