
Conversation

@tothambrus11 (Member) commented Dec 7, 2025

@dabrahams (Collaborator) left a comment:

I don't remember what cross-compilation was in there for. Please check

  1. with @compnerd, from whose work this is derived, about his reasons for using cross-compilation in the first place
  2. that this change doesn't slow the process down too much. That's not a very high bar if we're running this code infrequently anyway.

@tothambrus11 (Member, Author) replied:

> I don't remember what cross-compilation was in there for. Please check

A couple of years ago, GitHub didn't offer such a wide range of runners; I guess that was at least one reason.

@compnerd (Collaborator) replied:

Yeah, the ARM64 runners were not available until recently. The cross-compilation allowed us to generate the binaries even if we could not test/run them.
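
For context, here is a minimal sketch of the difference, assuming hypothetical matrix entries and the hosted ARM64 runner labels GitHub offers today; the actual workflow's matrix keys and runner labels may differ.

```yaml
# Illustrative only; these matrix entries are not the workflow's real configuration.
jobs:
  build:
    strategy:
      matrix:
        include:
          # Before: cross-compile for ARM64 on an x64 runner; the produced
          # binaries could not be executed (and therefore not tested) in CI.
          - { os: windows-2022, arch: arm64, cross: true }
          # Now: build natively on a hosted ARM64 runner, so the resulting
          # llvm-config can be run as a sanity check in the same job.
          - { os: windows-11-arm, arch: arm64, cross: false }
    runs-on: ${{ matrix.os }}
```

Building natively is also what makes the explicit postcondition check discussed below possible.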

    run: |
      pwsh -File ${{ github.workspace }}/SourceCache/llvm-build/scripts/fix-vs.ps1
  - name: Test binary compatibility
Collaborator:

What does this check (aside from its diagnostic output)? That the llvm-config we built runs on the runner that built it?

Member (Author) replied:

Yes, previously this kind of problem was only detected by a later step that builds the .pc file by calling llvm-config. These are the runners we use for the hylo-new builds, so it's good to have a postcondition check that the tool is not broken. Building the .pc file also invokes llvm-config, but only incidentally; this postcondition check exists explicitly to verify that the binaries are not corrupted or accidentally compiled in a way that makes them fail to run.
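
For illustration, a rough sketch of how such a .pc-generation step ends up invoking llvm-config; the step name, paths, and flags below are assumptions rather than the workflow's actual script, and a POSIX shell is assumed.

```yaml
# Hypothetical sketch: generating llvm.pc shells out to llvm-config, so a broken
# binary previously only surfaced here, several steps after it was built.
- name: Create pkg-config file
  shell: bash
  run: |
    LLVM_CONFIG="${{ github.workspace }}/BuildRoot/${{ env.PACKAGE_NAME }}/bin/llvm-config"
    {
      echo "Name: LLVM"
      echo "Description: LLVM libraries"
      echo "Version: $("$LLVM_CONFIG" --version)"
      echo "Libs: $("$LLVM_CONFIG" --libs --system-libs)"
      echo "Cflags: $("$LLVM_CONFIG" --cflags)"
    } > llvm.pc
```

If llvm-config cannot run, the failure shows up as a confusing error inside an unrelated packaging step, which is why a dedicated "Test binary compatibility" postcondition is clearer.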

@dabrahams (Collaborator) left a comment:

See comments inline

Comment on lines +357 to +367
echo "=== Testing Binary Compatibility ==="
echo "Runner OS: ${{ runner.os }}"
echo "Runner Arch: ${{ runner.arch }}"
echo "Target Arch: ${{ matrix.arch }}"
echo "Target Triple: ${{ matrix.triple_cpu }}-${{ matrix.triple_suffix }}"
echo ""
# Test llvm-config
echo "Testing llvm-config..."
"${{ github.workspace }}/BuildRoot/${{ env.PACKAGE_NAME }}/bin/llvm-config${{ matrix.executable_suffix }}" --version
Collaborator:

Some or all of the diagnostic printing looks like AI-generated slop. Under what circumstances would it be useful?

  - name: Setup sccache
    id: sccache
-   uses: compnerd/ccache-action@sccache-0.7.4
+   uses: hendrikmuhs/ccache-action@v1.2.20
Collaborator:

Not opposed to changing ccache vendors, but I'd like to understand why we're doing it.

@tothambrus11 (Member, Author) commented Dec 14, 2025:

Compnerd's fork is no longer necessary: the build succeeded, and the speedup with sccache was at least noticeable compared to building without it. This is the upstream repository, which is actively maintained.

I am not sure about the reliability of sccache, though, or whether the build timings are consistent. Some targets were significantly faster with a cache hit than without, while others saw only a modest speedup. I'll rerun the build a couple more times to see whether there is a consistent pattern.
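
For reference, a minimal sketch of how the upstream action can be configured with its sccache variant; the `variant`, `key`, and `max-size` inputs here are assumptions about the wiring, not necessarily what this PR actually sets.

```yaml
# Sketch only; input values are illustrative.
- name: Setup sccache
  id: sccache
  uses: hendrikmuhs/ccache-action@v1.2.20
  with:
    variant: sccache                            # use sccache instead of ccache
    key: ${{ matrix.os }}-${{ matrix.arch }}    # keep per-target caches separate
    max-size: 2G
```

Because the action restores and saves the compiler cache between runs, build times will vary with how warm the cache is, which would fit the mixed timings observed above.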
