Does it make sense to revisit our large-chunking algorithm? Or should we start from the original 33 big chunks that we carved out "by hand"?
Train a script to look for passages where particular witnesses diverge, and set markers at the last point where all the texts still align before the divergence and the first point where they align again on the other side.
Mark the divergences and use them as "cutting points" for creating new subchunks.
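A minimal sketch of what such a script might do, assuming the witnesses have already been token-aligned into a table (one row per aligned position, one column per witness), for example by a collation step; the function name, the agreement test, and the toy data below are placeholders, not our actual pipeline.

```python
from typing import List, Tuple

def find_cut_points(alignment: List[List[str]]) -> List[Tuple[int, int]]:
    """Return (before, after) index pairs bracketing each divergent stretch:
    `before` is the last position where all witnesses agree before the
    divergence, `after` is the first position where they agree again.
    These pairs are the candidate cutting points for new subchunks."""

    def agrees(row: List[str]) -> bool:
        # A position "aligns" when every witness carries the same reading.
        return len(set(row)) == 1

    cut_points = []
    last_agree = None        # most recent fully agreeing position
    in_divergence = False

    for i, row in enumerate(alignment):
        if agrees(row):
            if in_divergence and last_agree is not None:
                # First agreeing position after a divergent stretch:
                # close the bracket around the divergence.
                cut_points.append((last_agree, i))
            in_divergence = False
            last_agree = i
        else:
            in_divergence = True

    return cut_points

# Toy example with three witnesses, aligned token by token.
table = [
    ["in",        "in",        "in"],
    ["principio", "principio", "principio"],
    ["creavit",   "fecit",     "creavit"],   # divergence
    ["deus",      "deus",      "deus"],
]
print(find_cut_points(table))  # -> [(1, 3)]
```

The marked pairs would then be applied back to each witness's own text to split the existing big chunks into subchunks at places where all the witnesses realign.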