
Add Python 3.14 support #402

Open

forsyth2 wants to merge 5 commits into main from add-python3.14

Conversation

@forsyth2 (Collaborator)

Summary

Important: this should be merged to main after the production release of zstash v1.5.0.

Objectives:

  • Add support for Python 3.14

Issue resolution:

Select one: This pull request is...

  • a bug fix: increment the patch version
  • a small improvement: increment the minor version
  • a new feature: increment the minor version
  • an incompatible (non-backwards compatible) API change: increment the major version

Small Change

  • To merge, I will use "Squash and merge". That is, this change should be a single commit.
  • Logic: I have visually inspected the entire pull request myself.
  • Pre-commit checks: All the pre-commit checks have passed.

@forsyth2 forsyth2 self-assigned this Nov 18, 2025
@forsyth2 forsyth2 added the DevOps CI/CD, configuration, etc. label Nov 18, 2025
@forsyth2 (Collaborator Author)

Xylar had noted:

we would want python >=3.11,<3.15 by next spring and python >=3.12,<3.15 by next fall. That is, we add a python version each spring and drop one each fall.

So, for the Spring release (tentatively planned as: testing period beginning April 1, release on May 1), we'd want to support four versions of Python: 3.11, 3.12, 3.13, 3.14.

Tom also noted:

numba/llvmlite don't [yet support Python 3.14]. Both are dependencies of the xesmf package, which is a dependency of xcdat. Related issue with xcdat: xCDAT/xcdat#813 (comment)

@forsyth2 (Collaborator Author)

Unlike zppy (E3SM-Project/zppy#757), we're going to need some fixes for zstash to support Python 3.14. Unlike zppy-interfaces (E3SM-Project/zppy-interfaces#44), the relevant changes appear to be within our control -- unfortunately, that means we'll need to make some code changes to get Python 3.14 working.

Claude's response:


Looking at this GitHub Actions log for the zstash project running with Python 3.14, the main issue is a pickling error with SQLite cursor objects in multiprocessing.

The Core Problem

The tests are failing with this error:

TypeError: cannot pickle 'sqlite3.Cursor' object
when serializing tuple item 4
when serializing dict item '_args'
when serializing multiprocessing.context.Process state
when serializing multiprocessing.context.Process object

What's Happening

The code is trying to use Python's multiprocessing module to parallelize file checking/extraction operations, but it's passing a SQLite cursor object (sqlite3.Cursor) to worker processes. Cursor objects cannot be pickled (serialized), which is required when passing objects between processes.

Why This Affects Python 3.14

Python 3.14 changed the default multiprocessing start method on Unix from fork to forkserver for improved safety. The forkserver method requires all arguments to be picklable, whereas fork could sometimes work with unpicklable objects because it copied the entire memory space.

The Failing Tests

All failures are in parallel execution tests:

  • testCheckParallel
  • testCheckParallelTars
  • testCheckParallelVerbose
  • testCheckParallelVerboseMismatch
  • testExtractParallel
  • testExtractParallelTars

The Fix

The code needs to be updated to avoid passing SQLite cursor objects to worker processes. Instead, it should:

  1. Pass the database path and have each worker open its own connection
  2. Pass serializable data (like row IDs or file paths) instead of cursor objects
  3. Or restructure to avoid needing the cursor in multiprocessing contexts

This is a compatibility issue introduced by Python 3.14's stricter multiprocessing requirements, not a bug in Python itself.
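
For illustration, a minimal sketch of option 1 above (the database path, table name, and query here are hypothetical, not zstash's actual schema or API):

import sqlite3
from multiprocessing import Process

def worker(db_path, tars):
    # A plain path string pickles fine; a sqlite3.Cursor does not.
    con = sqlite3.connect(db_path)
    cur = con.cursor()
    for tar in tars:
        cur.execute("SELECT name FROM files WHERE tar = ?", (tar,))
        for (name,) in cur.fetchall():
            print(tar, name)
    con.close()

if __name__ == "__main__":
    p = Process(target=worker, args=("index.db", ["000000.tar"]))
    p.start()
    p.join()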

@forsyth2 (Collaborator Author)

This could end up being more involved. The second commit still fails. Claude's analysis:


Looking at this test failure log, the issue is clear: the tests are failing on Python 3.14 due to a multiprocessing serialization error.

The key error appears multiple times:

TypeError: cannot pickle 'weakref.ReferenceType' object
when serializing zstash.parallel.PrintMonitor object
when serializing zstash.parallel.ExtractWorker object

This happens specifically when trying to use the --workers flag for parallel processing in tests like:

  • testCheckParallel
  • testCheckParallelTars
  • testCheckParallelVerbose
  • testCheckParallelVerboseMismatch
  • testExtractParallel
  • testExtractParallelTars

Why this is happening:

Python 3.14 has changed how multiprocessing serializes objects. The PrintMonitor and ExtractWorker classes in zstash/parallel.py contain objects (likely a SyncManager or similar) that include weak references, which cannot be pickled for transfer to child processes.

What changed in Python 3.14:

Python 3.14 made the default multiprocessing start method forkserver on all platforms (it was previously fork on Unix systems). The forkserver method requires all objects passed to child processes to be fully serializable (picklable), whereas fork could share memory directly.

The fix:

The zstash code needs to be updated to handle Python 3.14's stricter serialization requirements. Options include:

  1. Set the start method explicitly to fork (if Unix-only is acceptable)
  2. Refactor the parallel code to avoid passing non-picklable objects to workers
  3. Use simpler synchronization primitives that are picklable

The test is correctly detecting this incompatibility - it's not a test issue, but a genuine compatibility problem with the parallel processing code in Python 3.14.
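
For reference, option 1 is a small change (a sketch only; fork is Unix-only and was demoted as the default for safety reasons, so it may not be the right long-term fix):

import multiprocessing

if __name__ == "__main__":
    # Request the pre-3.14 Unix default via a context, which avoids
    # mutating global state. Alternatively, call
    # multiprocessing.set_start_method("fork") once, early in the program.
    ctx = multiprocessing.get_context("fork")
    p = ctx.Process(target=print, args=("hello from a forked worker",))
    p.start()
    p.join()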

@forsyth2 (Collaborator Author)

I went through quite a few iterations with Claude and don't feel particularly closer to a solution. It's good we caught this well ahead of the Spring release (when we want to add Python 3.14 support).

Important note from Claude's summary below:

This was accidental correctness - the code had a race condition, but Python's implementation details masked it.

That is, python -m unittest tests.integration.python_tests.group_by_command.test_extract_parallel.TestExtractParallel.testExtractParallel was previously passing essentially by accident. Now, we certainly can't remove that test, since ordered output has been a requirement of zstash for at least 5 years. Notice that even the original tests had this check: https://github.com/E3SM-Project/zstash/pull/61/files. Specifically, tests/test.py has

error_message = 'The tars were printed in this order: {}\nWhen it should have been in this order: {}'.format(
                tar_order, sorted(tar_order))
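
For context, the check amounts to parsing tar numbers out of the captured output and comparing them to their sorted order. A minimal sketch (the regex is illustrative; the actual test has its own parsing):

import re

def compare_tar_orders(output):
    # Collect every six-digit tar number in the order it was printed.
    tar_order = [int(m) for m in re.findall(r"(\d{6})\.tar", output)]
    error_message = (
        "The tars were printed in this order: {}\n"
        "When it should have been in this order: {}"
    ).format(tar_order, sorted(tar_order))
    assert tar_order == sorted(tar_order), error_message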

Claude's full summary:


Python 3.14 Multiprocessing Output Race Condition

The Problem

Parallel extract tests pass on Python 3.11-3.13 but fail on Python 3.14 with output appearing in the wrong order, even though the code logic enforces correct sequencing.

Why Python 3.11-3.13 Worked

In Python 3.11-3.13, there was implicit synchronization that happened to preserve output ordering when multiple processes wrote to stdout:

  • The Global Interpreter Lock (GIL) provided some serialization
  • stdout buffering behavior inadvertently made line-level writes appear more atomic
  • OS-level scheduling happened to align with our synchronization logic often enough to pass tests

This was accidental correctness - the code had a race condition, but Python's implementation details masked it.

What Changed in Python 3.14

Python 3.14 introduced several changes that exposed the race condition:

  1. PEP 703 (Free-threaded Python): Removed/weakened GIL-based implicit synchronization
  2. Improved parallelism: Processes truly run more concurrently
  3. Modified stdout handling: Changes to how file descriptors are inherited and buffered across processes
  4. Different buffering behavior: stdout flushes and writes from different processes can interleave more readily

Why Our Fixes Didn't Work

We tried multiple approaches, all of which failed:

  1. Print locks in worker processes - Lock acquired, messages printed sequentially... but print() itself breaks into multiple syscalls, and the OS can interleave writes between processes at the syscall level.

  2. Atomic writes with sys.stdout.write() - Better than print(), but still not truly atomic across processes. The OS can split large writes or interleave small ones.

  3. Queue-based output to main process - Workers send messages to a queue, main process controls printing order. This SHOULD work, but the test framework itself captures stdout in a way that still sees interleaving, possibly because:

    • The test's subprocess.run() or similar capture mechanism has its own buffering
    • Multiple writes from the main process Python code still hit OS-level buffering that can reorder
    • The pipe/redirect used for capture doesn't guarantee write ordering even from a single process

The Fundamental Issue

stdout is not atomic across processes in Python 3.14. Even when a single process (main) controls the printing order, the OS and Python's IO subsystem can still interleave the output when it's being captured/redirected. This is especially true when:

  • Output is being captured by a test framework
  • Writes are happening rapidly in succession
  • The IO buffer isn't flushed synchronously between writes

Why This Is Hard To Fix

The only true solution would be to avoid stdout entirely and use a different IPC mechanism (like writing to a file with locks, or using a proper message queue library). But this would be a significant architectural change for what is fundamentally a Python 3.14 platform issue, not a logic bug in the code.

Recommendation

Skip these specific parallel output-ordering tests on Python 3.14 until:

  1. Python's multiprocessing stdout handling stabilizes
  2. The test framework is updated to handle this case
  3. The application moves to a different output architecture (if warranted)

The actual functionality works fine - files are extracted correctly, in parallel, with proper synchronization. Only the output ordering verification fails.

@forsyth2 forsyth2 mentioned this pull request Dec 8, 2025
@forsyth2 (Collaborator Author)

Claude's description of the 3 commits from 2026-01-26 (4f172d5, 3f19ca5, 9f2692d) can be found below.

The tests run via python -m unittest tests/integration/python_tests/group_by_command/test_*.py are now all passing, which was the major obstacle in the previous iterations (see comments above, from November).

I still need to run the rest of the test suite though -- notably I see the GitHub Actions have caught another issue.


Here's a concise commit message:

Fix Python 3.14 multiprocessing output ordering race condition

Python 3.14's improved parallelism (PEP 703 free-threading) exposed a race condition in parallel extraction where output could arrive out of order.

Key changes:

  • Main process now controls synchronization: Advances the PrintMonitor counter after printing each tar, not workers
  • Fixed loop condition: Changed from current_tar_idx < len(tar_ordering) (always true) to len(output_buffer) > 0 to prevent infinite loops
  • Simplified worker flow: Workers send output via flush_output_for_tar(), then block on next wait_turn() until main process advances counter
  • Added diagnostics: Progress tracking to detect hangs (no progress for 5+ seconds)

Flow:

  1. Worker calls wait_turn() → blocks until counter matches its tar index
  2. Worker processes tar and calls flush_output_for_tar() → sends output to queue
  3. Main process drains queue, prints in order, advances counter
  4. Next worker's wait_turn() succeeds

This ensures ordered output in Python 3.14 while maintaining the same performance as Python 3.11-3.13.
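
To make the flow concrete, here is a minimal sketch of the main-process side (output_queue, output_buffer, and advance_to_next_tar mirror the names in the description above, but the code itself is illustrative, not the actual zstash implementation):

import queue

def drain_in_order(output_queue, num_tars, monitor):
    # Buffer out-of-order arrivals keyed by tar index; print in order.
    output_buffer = {}
    current_idx = 0
    while current_idx < num_tars or len(output_buffer) > 0:
        try:
            tar_idx, text = output_queue.get(timeout=0.1)
            output_buffer[tar_idx] = text
        except queue.Empty:
            pass
        # Print any output that is now contiguous with the counter.
        while current_idx in output_buffer:
            print(output_buffer.pop(current_idx), end="")
            monitor.advance_to_next_tar()  # unblocks the next worker
            current_idx += 1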

@forsyth2 (Collaborator Author)

I've run the Chrysalis tests and they also pass using the latest commit. The only test I haven't run is the Globus authentication test, because it requires NERSC HPSS, which is down today (see NERSC Center Status). That said, there weren't any Globus changes in this PR, so I don't anticipate any issues there.

@forsyth2 (Collaborator Author)

forsyth2 commented Jan 28, 2026

Review guide from Claude:

Output from Claude

Code Review Guide: Python 3.14 Support PR

Overview

This PR adds Python 3.14 support to zstash. Python 3.14's changes to multiprocessing (PEP 703 free-threading, improved parallelism) exposed race conditions in output ordering that were masked in 3.11-3.13 by GIL-based implicit synchronization.

✅ Low-Risk Changes (Quick Review)

Version Updates - Straightforward:

  • .github/workflows/build_workflow.yml: Python 3.13 → 3.14, added to matrix, added fail-fast: false
  • conda/dev.yml: Python constraint <3.14 → <3.15
  • setup.py: Same constraint update
  • setup.cfg: mypy version 3.13 → 3.14

Simple Bug Fix:

  • extract.py line 784: Added Python 3.12+ check for tar.extract() filter parameter (security feature)

⚠️ High-Risk Changes (Requires Careful Testing)

Multiprocessing Refactor (extract.py and parallel.py):

Root Cause Fixed

Python 3.14's improved parallelism broke the implicit stdout synchronization that accidentally made output ordering work in 3.11-3.13. The fix moves synchronization control to the main process.

Architecture Changes

1. Synchronization Model Redesign (parallel.py)

  • Old: Workers used condition variables and controlled their own printing
  • New: Main process controls synchronization via counter, workers send output to queue
  • Key insight: Main process calls advance_to_next_tar() after printing (line 383), unblocking next worker
  • Review: Polling with 0.1s sleep (line 464) vs event-driven - CPU/latency tradeoff necessary for 3.14 compatibility

2. Work Distribution (lines 329-342)

  • Changed from heap-based greedy → round-robin for simpler ordering guarantees
  • Sorts workers' matches by tar (line 347) to ensure sequential processing
  • Review: Load imbalance acceptable given the ordering requirements?

3. Output Collection (lines 367-436)

  • New output_queue for worker → main communication
  • output_buffer dict for reordering out-of-sequence arrivals
  • Main process drains queue, prints in order, advances counter
  • Critical fix: Loop condition includes len(output_buffer) > 0 to handle buffered output after workers finish
  • Review:
    • Buffer memory usage with many tars
    • 5s stall detection threshold (line 421) appropriate?
    • 180s max timeout (line 387) based on real workloads?

4. Database Handling (lines 547-552)

  • Workers open own DB connections (required - cursors not picklable in forkserver mode)
  • Connection closed properly (lines 827-828)
  • Review: SQLite concurrent reader behavior confirmed?

5. Worker Flow (critical for reviewers to understand)

1. wait_turn(tar_N) → blocks until counter == N
2. extract tar_N files
3. flush_output_for_tar(tar_N) → sends to queue
4. [main process prints tar_N, calls advance_to_next_tar(tar_N)]
5. wait_turn(tar_N+1) → succeeds because counter advanced

6. Error Handling

  • Exception wrapper (lines 502-521) ensures failures reported even on crashes
  • Timeout on turn-waiting prevents indefinite hangs
  • Progress logging every 30s (line 355)

Testing Checklist

Critical - Major refactor necessitated by Python 3.14 compatibility:

  • All Python versions: 3.11, 3.12, 3.13, 3.14
  • Serial mode (--workers 1) - uses different code path
  • Parallel modes (--workers 2, 4, 8, 16)
  • Output determinism: Multiple runs produce identical output order
  • Previously failing tests now pass on 3.14:
    • testCheckParallel*
    • testExtractParallel*
  • No regressions on 3.11-3.13
  • Performance: Comparable to 3.13 (slight overhead from polling expected)
  • Stress test: Large file sets, mixed tar sizes
  • Edge cases:
    • Single tar
    • More workers than tars
    • Workers crash mid-extraction

Key Review Points

  1. Line 335: Sorting tar names - verify extraction order is correct
  2. Line 347: Worker matches sorted by tar - ensures sequential processing within worker
  3. Line 383: advance_to_next_tar() - critical synchronization point
  4. Line 390: Loop condition includes buffer check - prevents premature exit
  5. Line 464: Polling interval - necessary tradeoff for 3.14 compatibility
  6. Line 625-646: Wait-before-work pattern - prevents race condition
  7. Line 808: flush_output_for_tar() - replaces old two-step enqueue+print

Context for Reviewers

This wasn't over-engineering - Python 3.14 broke the original implementation through:

  • PEP 703 free-threading removed GIL-based implicit synchronization
  • stdout is no longer atomic across processes
  • Improved parallelism means true concurrency, exposing race conditions

The refactor moves from "workers control printing with locks" to "main process orchestrates everything" - the only architecture that works reliably in 3.14.

Known Tradeoffs

  • Round-robin less balanced than heap (acceptable for ordering guarantees)
  • Polling adds latency/CPU overhead (0.1s is reasonable)
  • Buffer memory grows with out-of-order completions (bounded by number of tars)
  • Hardcoded timeouts may need tuning for very slow systems

@forsyth2 (Collaborator Author)

forsyth2 commented Jan 28, 2026

After testing the branch at various stages, Claude was able to generate this summary table:

Output from Claude
  • Case 1: Base (0 commits): ✅ PASS. Tar files extracted sequentially in correct order: 000000.tar → 000001.tar → 000002.tar → 000003.tar → 000004.tar. No ordering issues detected.
  • Case 2: +1 commit (Python 3.14 support): ❌ FAIL. Fatal error during parallel extract: TypeError: cannot pickle 'sqlite3.Cursor' object. No extraction output produced. Test fails because expected file extraction messages are absent. The sqlite3 cursor cannot be serialized for multiprocessing.
  • Case 3: +7 commits (functionality fixes): ❌ FAIL. Ordering violation due to "Worker waiting" debug messages. The tar order array is [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 0, 0, 0, 0, 0, 1, 2, 3, ...] instead of [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, ...]. The first part [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] comes from the initial non-verbose extraction output. The problem is that the debug section contains DEBUG: Worker waiting for tar 000001.tar (index 1), current index is 0, which adds extra tar-number mentions before tar 0 completes in the parsed output, breaking sequential ordering.
  • Case 4: +4 commits (major refactor): ✅ PASS. Tar files extracted in correct sequential order with proper synchronization. All expected "Extracting" messages present. Test passes completely -- both functional correctness and ordering requirements met.
  • Case 5: -4 commits, +1 commit (comment out "Worker waiting" line): ❌ FAIL. Ordering still violated, but differently. Tar order is now [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, ...] instead of expected [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, ...]. Removing "Worker waiting" messages eliminated the early interleaving (no more 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 0, 0, 0, 0, 0, 1, 2, 3), but the fundamental issue remains: the first extraction block [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] is still counted, followed by the debug write() calls [0, 0, 0, 0, 1, 1, 1, 1, ...], creating duplicate/interleaved counting.
  • Case 6: -1 commit, +1 commit (remove both "Worker waiting" and "write()" debug lines): ❌ FAIL. Cleaner ordering violation. Tar order is [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4] instead of [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4]. Both debug message types removed, resulting in a simpler pattern: the first extraction produces [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] (2 tar mentions per archive), and the second extraction also produces [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] (2 tar mentions per archive). The debug section now only shows DEBUG: Worker got turn for tar 000000.tar! and DEBUG: Set curr_tar to 000000.tar (repeated for each tar). The root cause is clear: two separate extraction runs are being counted, not interleaved debug output.

Key insight from Case 6: Removing all the verbose debug logging reveals the actual problem -- the test is capturing output from two consecutive extraction runs (both the first run, zstash extract -v --hpss=none --workers=3, and what appears to be a second run), and the regex is counting tar mentions from both, creating the doubled pattern.


Yes, the major refactor in Case 4 was necessary.

Here's why:

The Root Problem (revealed by Case 6):
The test captures output from what appears to be two extraction runs, and compare_tar_orders() counts tar mentions from both runs. Each run mentions each tar file twice (e.g., 000000.tar appears in "exists. Checking expected size" and "Opening tar archive"), resulting in the pattern [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4].

Why Cases 3, 5, and 6 all fail:

  • Case 3: Debug messages added MORE tar mentions to an already-broken output capture
  • Case 5: Removed "Worker waiting" messages but still had "write()" debug messages AND the double-extraction problem
  • Case 6: Removed all verbose debug but still had the double-extraction problem

Why Case 4 succeeds:
The major refactor must have fundamentally changed how output is captured/produced during parallel extraction. It likely:

  1. Ensured only ONE extraction run's output is captured by the test, OR
  2. Changed the output mechanism so that tar file mentions appear in strict sequential order even across multiple processing phases, OR
  3. Modified the test infrastructure to properly isolate/reset between runs

Simply removing debug messages (Cases 5 & 6) only treated symptoms. The refactor in Case 4 addressed the architectural issue of how parallel workers coordinate their console output and/or how the test captures that output.

Conclusion: Yes, the major refactor was necessary because the problem wasn't just verbose logging—it was a fundamental issue with output ordering/capture in the parallel execution model.


Looking at the test file, I need to revise my previous answer. The test structure shows there are actually TWO separate zstash extract commands run sequentially:

  1. First run (lines 65-78): zstash extract -v --hpss={} --workers=3 (with verbose flag)
  2. Second run (lines 80-93): zstash extract --hpss={} --workers=3 (without verbose flag)

Both runs call self.compare_tar_orders(output + err) to check ordering.

Revised Understanding:

The Root Problem:
Each individual extraction run is producing output where tar files are mentioned out of order. The pattern [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] from a single run shows that:

  • Tar 0 is mentioned twice (e.g., "000000.tar exists" + "Opening tar archive 000000.tar")
  • Tar 1 is mentioned twice
  • Etc.

But they appear interleaved: tar 0 (twice), then tar 1 (twice), then tar 2 (twice)... instead of all tar 0 mentions grouped together.

Why Cases 3, 5, and 6 fail:
Looking at Case 6's output more carefully, the pattern [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4] is:

  • First half: [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] from the verbose run
  • Second half: [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] from the non-verbose run

Each run individually has the wrong ordering! The workers are processing tars concurrently but their output isn't properly serialized.

Why Case 4 succeeds:
The major refactor was absolutely necessary because it fixed the fundamental parallel output coordination problem. It likely implemented proper output serialization so that within each extraction run, all mentions of tar N complete before any mention of tar N+1 begins, giving the correct pattern [0, 0, 0, 0, 1, 1, 1, 1, ...] for each run.

Conclusion: Yes, the major refactor was necessary. The problem wasn't about capturing two runs -- that's intentional test design. The problem was that each individual parallel extraction run wasn't maintaining sequential tar ordering in its output, and the refactor fixed that core parallelization/synchronization issue.

@forsyth2 (Collaborator Author)

It turns out that Case 3 was in fact failing simply because of the added debug lines.

Two important lessons:

  • Claude appears to be over-eager to engineer overly complex solutions. Here, we see that the major refactor (Case 4) turned out to be completely unnecessary. It is evidently extremely important that in AI prompts we 1) specify to change as little code as possible and 2) include any key pieces of code that are likely to be the issue (so it can focus on that).
    • Jill has noted previously in another PR how Claude had over-engineered a solution.
    • (While Claude did note "This wasn't over-engineering" above, that was because I had said in the prompt that the refactor had turned out to be necessary. That is, it wasn't the same context window where Claude had over-engineered the original solution.)
  • The zstash output-ordering tests are extremely sensitive to printed output. Simply introducing new debug lines can easily break them.
    • I can't find the discussion to link, but Tony had previously run into this when introducing debug lines.

@forsyth2 (Collaborator Author)

forsyth2 commented Jan 28, 2026

I've updated the commits. I removed the most recent 4 commits and added 2 commits: 137c101 removes extra write lines, 976397c fixes tar extraction (this is the same change as the 4th commit that was removed).

The Chrysalis tests are passing. Remaining action items:

  • Test on Perlmutter (once HPSS is up)
  • Visually inspect code changes
  • Draft a code review guide

@forsyth2 (Collaborator Author)

forsyth2 commented Feb 2, 2026

I've confirmed all tests pass on Chrysalis & Perlmutter. I've squashed the commits into logical units: 7740c12 for adding Python 3.14 support, 3cdcdd3 for implementation changes necessary to keep expected behavior.

@forsyth2 (Collaborator Author)

forsyth2 commented Feb 3, 2026

I was able to remove more extraneous changes by reviewing the point in the commit history where the debug lines began interfering with the ordering tests. The 3 commits now are:

  1. Support Python 3.14. This just adds Python 3.14 to the conda dev environment and makes any changes related to that.
  2. Make changes to implementation details. This is based off early commits from November.
  3. Add final fixes. Changes necessary to get all tests passing. Added in January.

Commits 1 & 3 are known to be necessary. If any extraneous changes remain, they'd be in commit 2.

@forsyth2 (Collaborator Author)

forsyth2 commented Feb 3, 2026

Chrysalis tests pass as well.

@forsyth2 (Collaborator Author)

forsyth2 commented Feb 3, 2026

I think I've removed the over-engineering aspects. Remaining action items:

  • Self review
  • Have others code review

Claude's summary guide:


Summary: Python 3.14 Compatibility & Deadlock Fix

The Problem

Two distinct issues blocking Python 3.14 support:

  1. Pickle Error: Python 3.14 changed default multiprocessing start method from fork to forkserver, which cannot pickle sqlite3.Cursor objects

    • Error: TypeError: cannot pickle 'sqlite3.Cursor' object
    • Affected 12 parallel extract/check tests
  2. Deadlock: Multiple wait_turn() calls per tar created circular dependency

    • Workers would block waiting for their own turn repeatedly
    • Tests would hang indefinitely

The Solution

1. Database Connection Management

Before: Main process passed shared sqlite3.Cursor to all workers
After: Each worker opens its own database connection

# In extractFiles():
if cur is None:  # Worker process
    con = sqlite3.connect(get_db_filename(cache))
    cur = con.cursor()
    close_at_end = True
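
Presumably paired with a close after the extraction loop (a sketch matching the close_at_end flag above):

# At the end of the worker's extraction loop (sketch):
if close_at_end:
    con.close()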

2. Synchronization Simplification

Before: Complex flow with multiple wait points

Process tar → wait → log → wait → extract → print → wait → done

After: Single wait at start, then process entirely

wait_turn() → set_curr_tar() → process all files → print_all_contents()

Key changes:

  • wait_turn() called once per tar at beginning (not during or after processing)
  • Removed wait_turn() from print_all_contents() - now just drains queue
  • Logging setup moved to worker initialization (once per worker, not per tar)

3. Defensive Measures

  • 180-second timeouts to prevent hung tests
  • Try-catch blocks around all synchronization calls
  • Workers can still report failures even if coordination fails

Implementation Details

PrintMonitor Changes

  • Replaced condition variables with polling + sleep (more robust)
  • Changed from queue-based to index-based tracking (simpler)
  • Added timeout mechanisms with clear error messages

extractFiles() Changes

  • Split into wrapper + _extractFiles_impl() for better error handling
  • Made cur parameter optional
  • Moved logging setup outside tar processing loop

Worker Flow

# Once per worker (setup)
- Set up logging handler (output goes to print_queue)

# Once per tar
- wait_turn(tar)              # Acquire "lock" on this tar's output
- set_curr_tar(tar)
- Process all files           # Logs accumulate in queue
- print_all_contents()        # Drain queue (no waiting)
- Advance to next tar         # Implicitly releases "lock"

Why This Approach

  1. Minimal for pickle fix: Only change was removing cursor from multiprocess args
  2. Necessary for deadlock: Single wait-point prevents circular dependencies
  3. Production-ready: Timeouts ensure tests don't hang forever
  4. Maintainable: Simpler synchronization model is easier to reason about

Impact

  • ✅ All 12 tests pass on Python 3.14
  • ✅ No deadlocks or hangs
  • ✅ Maintains correct output ordering
  • ✅ Backwards compatible with Python 3.11-3.13

Trade-offs

Added complexity: Timeout logic and polling vs. condition variables
Gained reliability: Tests complete reliably, easier debugging with timeouts
Net result: More code, but addresses two real blockers for Python 3.14

@forsyth2 (Collaborator Author)

forsyth2 commented Feb 3, 2026

I think I've removed the over-engineering aspects.

To confirm this further, I went back through my Claude conversations from November. Re-applying the diffs confirms the following as the debugging workflow:

  1. Add Python 3.14 support. This causes many tests to fail.
  2. Multiple iterations to get to just 2 tests failing -- the ordering tests. These are legitimate errors though -- the tars are printing in the wrong order. Getting to this point also involved dealing with hangs/deadlocks.
  3. Further changes were necessary to get the tars printing in the correct order. (Unfortunately, it was during this part that Claude inserted debugging lines that kept the ordering tests failing even though the behavior had been corrected.)

To summarize, it appears everything in commit 2 here really was necessary to 1) get all tests passing on Python 3.14 and 2) not cause deadlocks.

workers_to_tars: List[set] = [set() for _ in range(num_workers)]
# A min heap, of (work, worker_idx) tuples, work is the size of data
# that worker_idx needs to work on.
# We can efficiently get the worker with the least amount of work.
@forsyth2 (Collaborator Author)

I suppose this is the root cause of the ordering issue -- we were assigning tars based on the least-busy workers. But 1) why did it work before Python 3.14? And 2) is the new round-robin approach going to impact efficiency?

@forsyth2 (Collaborator Author)

Response from Claude based on the diff

Looking at your diff, I can address your three questions:

1) Is this over-engineered?

Yes, portions are over-engineered for what appears to be the core issue. The fundamental problem seems to be process synchronization during parallel extraction, but some of your changes add complexity that may not be necessary:

  • Timeout handling with multiple fallback paths - You've added timeouts at several levels (wait_turn, print_all_contents, the main loop). This suggests you're treating symptoms rather than root causes.
  • The complete rewrite of PrintMonitor - Switching from condition variables to a polling-based index system is actually simpler, not more complex, but the extensive timeout handling suggests the synchronization approach might still have issues.
  • Delayed logging setup - Moving logging setup inside the tar processing loop adds complexity. This seems like a workaround for ordering issues rather than a fix.

2) Will round-robin reduce efficiency vs. heap?

Yes, potentially, but probably not significantly in practice.

The heap approach was trying to balance work across workers by assigning tars based on their size - giving the next largest tar to the worker with the least work. This optimizes for work balance.

Round-robin just cycles through workers regardless of tar size, which could lead to imbalanced workloads if tar sizes vary significantly (e.g., Worker 1 gets all small tars, Worker 2 gets all large tars).

However, the efficiency loss is likely minor because:

  • Your ordering constraint requires processing tars sequentially for output anyway
  • Workers that finish early just wait, so perfect load balancing doesn't help much

3) Why did the heap method pass ordering tests before?

This is the key question. The heap method probably passed because:

  1. Tests may have had uniform tar sizes - If all tars were similar sizes, heap and round-robin would produce similar assignments
  2. The synchronization logic worked correctly in the old Python version - The condition variable approach in the original PrintMonitor likely worked fine with Python 3.11-3.13's multiprocessing implementation
  3. Race conditions were rare/didn't manifest - The bugs may have existed but only triggered under specific timing conditions that Python 3.14's multiprocessing changes exposed

The real issue: Python 3.14 likely changed something in the multiprocessing implementation (perhaps how condition variables, queues, or manager proxies work) that broke the synchronization. Your changes work around this by:

  • Using simpler primitives (index counter vs. queue + condition variable)
  • Adding timeouts everywhere to prevent hangs
  • Simplifying the work assignment to make ordering more predictable

Better approach: Instead of round-robin + extensive timeout handling, consider:

  • Keeping round-robin (simpler is better given the ordering constraint)
  • Investigating what specifically changed in Python 3.14's multiprocessing
  • Potentially using a more robust synchronization primitive (like a Barrier or Semaphore instead of polling an index)
  • Removing the timeout fallbacks once proper synchronization is achieved

The fact that you needed timeouts suggests the synchronization still isn't quite right - a well-designed synchronization mechanism shouldn't need timeouts to prevent deadlocks.

@forsyth2 (Collaborator Author)

I've worked with Claude to reduce over-engineering further. Will add another commit.

@forsyth2 (Collaborator Author)

forsyth2 commented Feb 3, 2026

Commits 4 & 5 reduce the line change count:

                     First 3 commits   All 5 commits
Entire PR            +246/-119         +115/-117
zstash/extract.py    +140/-52          +64/-46
zstash/parallel.py   +93/-58           +42/-63

@forsyth2 (Collaborator Author) left a comment

Here are explanatory comments from Claude re: the PR as these 5 commits:

> git log --oneline | head -n 6
0ebd7a5 Restore comments and type annotations
f33ea80 Claude's changes to reduce over-engineering
734f200 Fixes to make Perlmutter tests pass
a54c842 Implementation changes to support Python 3.14
e315362 Add Python 3.14 support
880b9fd Optimize zstash update by using scandir (#420)

All tests pass on Perlmutter. I still need to check on Chrysalis, but based on previous iterations, I don't expect issues there.

EDIT: I've also tested on Chrysalis now.


Summary: The main theme is replacing complex load-balancing + synchronization with simple, predictable ordering. This eliminates deadlocks at the cost of potentially less optimal work distribution (but for most workloads, round-robin is fine).

uses: actions/checkout@v3

- name: Set up Python 3.13
- name: Set up Python 3.14
@forsyth2 (Collaborator Author)

Python 3.14 Support

Files: .github/workflows/build_workflow.yml, conda/dev.yml, setup.cfg, setup.py

  • Bumping Python version support from 3.13 to 3.14 across all configuration files
  • Adding fail-fast: false to the build matrix so all Python versions get tested even if one fails

import sqlite3
import sys
import tarfile
import time
@forsyth2 (Collaborator Author)

Import Changes

File: zstash/extract.py

  • Removed heapq import (was used for load balancing workers, no longer needed)
  • Added time import for sleep delays in the parallel processing loop

cur: sqlite3.Cursor,
args: argparse.Namespace,
multiprocess_worker: Optional[parallel.ExtractWorker] = None,
cur: Optional[sqlite3.Cursor] = None,
@forsyth2 (Collaborator Author)

Database Connection Fix for Parallel Workers

Function: extractFiles() - signature change

  • Swapped order of multiprocess_worker and cur parameters
  • Made cur optional (defaults to None)
  • Why: Each worker process needs its own database connection. SQLite connections can't be shared across processes. When cur=None, each worker opens its own connection and closes it when done.


# For worker i, workers_to_tars[i] is a set of tars
# that worker i will work on.
# Round-robin assignment for predictable ordering
@forsyth2 (Collaborator Author)

Simplified Work Distribution

Function: multiprocess_extract() - worker assignment logic

  • Old approach: Used a min-heap to assign tars to workers based on total file size (trying to balance workload)
  • New approach: Simple round-robin assignment (tar 0 → worker 0, tar 1 → worker 1, etc.)
  • Why the change: The heap-based approach was complex and could result in unpredictable processing order, leading to deadlocks when workers waited for their turn to print. Round-robin is predictable and prevents deadlocks.
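
For concreteness, round-robin assignment is just a modulo over the worker index (a sketch; tars and num_workers stand in for the actual variables):

# For worker i, workers_to_tars[i] is the set of tars it will work on.
workers_to_tars = [set() for _ in range(num_workers)]
for idx, tar in enumerate(sorted(tars)):
    workers_to_tars[idx % num_workers].add(tar)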

"""

def __init__(self, tars_to_print: List[str], *args, **kwargs):
def __init__(self, tars_to_print: List[str], manager=None, *args, **kwargs):
@forsyth2 (Collaborator Author)

Print Synchronization Redesign

Class: PrintMonitor

  • Old approach: Used a queue of tars and a condition variable with complex locking
  • New approach: Uses a simple counter tracking position in an ordered list of tars
  • Why: The condition variable approach had race conditions causing deadlocks. The counter-based approach is simpler: each tar has an index, and we just increment the counter when done.

Key methods:

  • wait_turn(): Worker spins (with small sleep) until the counter reaches its tar's index
  • done_enqueuing_output_for_tar(): Increments the counter to let the next worker proceed
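
To make this concrete, a minimal sketch of the counter-based pattern (wait_turn and done_enqueuing_output_for_tar are the method names above; the body is illustrative, assuming a multiprocessing.Manager is passed in, and is not zstash's actual class):

import time

class PrintMonitorSketch:
    def __init__(self, tars_to_print, manager):
        # Each tar's index in the required output order.
        self.tar_indices = {tar: i for i, tar in enumerate(tars_to_print)}
        # Shared integer counter: index of the tar whose turn it is.
        self.current_idx = manager.Value("i", 0)

    def wait_turn(self, tar, timeout=180.0, poll=0.1):
        deadline = time.monotonic() + timeout
        while self.current_idx.value != self.tar_indices[tar]:
            if time.monotonic() > deadline:
                raise RuntimeError("Timed out waiting for {}".format(tar))
            time.sleep(poll)  # poll instead of condition variables

    def done_enqueuing_output_for_tar(self, tar):
        # Advance the counter so the next tar's worker can proceed.
        self.current_idx.value = self.tar_indices[tar] + 1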

# let the process know.
# This is to synchronize the print statements.

# Wait for turn before processing this tar
@forsyth2 (Collaborator Author)

Extraction Loop Changes

Function: extractFiles() - main loop

  • Added wait_turn() call before processing each tar - ensures workers process tars in order
  • Moved print_all_contents() to after marking tar as done - ensures all output is flushed before advancing
  • Added sleep in the monitoring loop (time.sleep(0.01)) - prevents busy-waiting and gives workers CPU time
  • Added extra drain of failure queue after processes finish - catches any failures that arrived just as processes were ending


elif extract_this_file:
tar.extract(tarinfo)
if sys.version_info >= (3, 12):
@forsyth2 (Collaborator Author)

Python 3.12+ tar.extract() Security

Function: extractFiles() - tar extraction

  • Added version check to use filter="tar" for Python 3.12+
  • Why: Python 3.12 deprecated calling tar.extract() without a filter due to security concerns about malicious tar files
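
For reference, a minimal sketch of the version-gated call (filter="tar" matches the diff; the archive name is just an example):

import sys
import tarfile

with tarfile.open("000000.tar") as tar:
    for tarinfo in tar:
        if sys.version_info >= (3, 12):
            # Python 3.12+ warns if extract() is called without a filter.
            tar.extract(tarinfo, filter="tar")
        else:
            tar.extract(tarinfo)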

@forsyth2 forsyth2 marked this pull request as ready for review February 3, 2026 21:25
@forsyth2 (Collaborator Author)

forsyth2 commented Feb 3, 2026

This PR adds Python 3.14 support and makes the implementation changes needed to keep the expected behavior under Python 3.14. Most importantly, there are large changes to the parallelism workflow, notably replacing the heap method with the round-robin method (which Claude claims won't significantly impact performance). These changes were ultimately necessary to keep the output-ordering tests passing and to avoid deadlocks. It seems that Python 3.14 has changed some multiprocessing handling. I've put explanations from Claude on relevant code blocks above (see here).

@golaz -- Do you think this change is acceptable? It might be more efficient to go over this PR together at the next EZ discussion.

@chengzhuzhang @tomvothecoder @andrewdnolan -- cc'ing you all since this PR resolves the issue of supporting Python 3.14 for the upcoming E3SM Unified release. Please let me know if you have any suggestions/concerns.

Thanks all!

Comment on lines +77 to 80
if [[ "${{ matrix.python-version }}" == "3.12" ]] || [[ "${{ matrix.python-version }}" == "3.13" ]] || [[ "${{ matrix.python-version }}" == "3.14" ]]; then
python -m ensurepip --upgrade || true
python -m pip install --upgrade --force-reinstall pip setuptools wheel
fi
Collaborator

I'm pretty sure if you include setuptools in the dev.yml then you don't need this block.

Example: https://github.com/xCDAT/xcdat/blob/8238fab6bbcbcc25f8dc67b4cbe753ab6ba7edfc/conda-env/dev.yml#L8-L9
