11 changes: 11 additions & 0 deletions .cirun.yml
@@ -0,0 +1,11 @@
runners:
- name: aws-gpu-runner
cloud: aws
instance_type: g4dn.xlarge
machine_image: ami-067a4ba2816407ee9
region: eu-north-1
preemptible:
- true
- false
labels:
- cirun-aws-gpu
115 changes: 115 additions & 0 deletions .github/workflows/rsc_integration-test.yml
@@ -0,0 +1,115 @@
name: Integration test RSC

on:
schedule:
- cron: "0 0 * * *" # Run daily at midnight UTC
push:
branches: [main]
pull_request:
branches: [main]

concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true

jobs:
test:
name: Integration test/ test (3.x rapids-singlecell)
runs-on: "cirun-aws-gpu--${{ github.run_id }}"
Contributor:
Is there no way to refactor the other workflow to make runs-on a configuration setting?

Member Author:
I have no idea tbh.
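For context, one possible refactor is a reusable workflow that accepts the runner label as a `workflow_call` input. This is a hedged sketch with illustrative names (the file name `integration-test.yml` and the input name `runner` are assumptions, not part of this PR):

```yaml
# Hypothetical reusable workflow: .github/workflows/integration-test.yml
on:
  workflow_call:
    inputs:
      runner:
        description: "Label of the runner to execute the job on"
        required: false
        type: string
        default: "ubuntu-latest"

jobs:
  test:
    runs-on: ${{ inputs.runner }}
    steps:
      - uses: actions/checkout@v4
```

A caller workflow could then select the GPU runner per package:

```yaml
jobs:
  gpu-test:
    uses: ./.github/workflows/integration-test.yml
    with:
      runner: "cirun-aws-gpu--${{ github.run_id }}"
```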

timeout-minutes: 30
outputs:
failure_type: ${{ steps.export_failure.outputs.failure_type }}

defaults:
run:
shell: bash -el {0}
working-directory: rapids_singlecell

env:
GH_TOKEN: ${{ secrets.TOKEN_FOR_ISSUE_WRITE }}

steps:
- name: Checkout this repository (integration-testing)
uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Checkout rapids-singlecell
uses: actions/checkout@v4
with:
repository: scverse/rapids_singlecell
fetch-depth: 0
path: rapids_singlecell

- name: Nvidia SMI sanity check
run: nvidia-smi

- uses: mamba-org/setup-micromamba@v2
with:
environment-file: rapids_singlecell/ci/environment.yml
init-shell: >-
bash
post-cleanup: 'all'

- name: Install rapids-singlecell
id: install_pkg
run: >-
pip install -e .[test]
"scanpy @ git+https://github.com/scverse/scanpy.git"
Contributor:
Why install the live version of scanpy?

"anndata @ git+https://github.com/scverse/anndata.git"
Comment on lines +47 to +59
Contributor:
Why do we need conda? Why are you also installing the upstream of scanpy?

https://github.com/scverse/anndata/blob/main/.github/workflows/test-gpu.yml uses uv, as does the rest of the tests in this repo.

Member Author:
I used uv for a while to do CI for rapids-singlecell. At some point, roughly every 5 runs, there was an issue with installing RAPIDS with uv, so I switched back to conda.

Contributor:
Can we at least try it here? If not, we can add an if-else statement to use conda for RAPIDS in the unified CI, but I've never had an issue with the anndata CI.

Member Author:
I know, I implemented the uv CI for anndata-GPU. But there you "only" install CuPy and not the other RAPIDS packages. I always got timeout errors from the NVIDIA index.
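For reference, a minimal sketch of what the uv-based variant of this install step might look like. This is an assumption, not part of the PR: it pulls RAPIDS wheels from NVIDIA's package index (`pypi.nvidia.com`), which is exactly the index whose timeouts are described above.

```yaml
# Hypothetical uv-based install step (untested; the NVIDIA index
# timeouts discussed in this thread are the known failure mode)
- name: Install rapids-singlecell with uv
  run: >-
    uv pip install --system -e .[test]
    --extra-index-url=https://pypi.nvidia.com
    "scanpy @ git+https://github.com/scverse/scanpy.git"
    "anndata @ git+https://github.com/scverse/anndata.git"
```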


- name: Pip list
run: pip list

- name: Set failure type for install
if: failure()
run: |
echo "Installation failed for rapids-singlecell"
echo "failure_type=install" >> $GITHUB_ENV

- name: Run test
id: run_tests
run: pytest

- name: Set failure type for test
if: failure() && env.failure_type != 'install'
run: |
echo "Test failed for rapids-singlecell"
echo "failure_type=test" >> $GITHUB_ENV

- name: Check for open failure issue
if: failure() && github.event_name == 'schedule'
id: find_issue
run: |
ISSUE_TITLE="Integration Testing CI ${failure_type^} Failure on python rapids_singlecell"
echo "Checking for existing issue: $ISSUE_TITLE"
ISSUE_COUNT=$(gh issue list --repo scverse/rapids_singlecell --state open --search "${ISSUE_TITLE}" --json number --jq 'length')
if [[ "$ISSUE_COUNT" -gt 0 ]]; then
echo "${failure_type^} failure issue already exists for today."
echo "issue_exists=true" >> $GITHUB_ENV
else
echo "issue_exists=false" >> $GITHUB_ENV
echo "issue_title=$ISSUE_TITLE" >> $GITHUB_ENV
fi

- name: Report failure issue
if: failure() && env.issue_exists == 'false' && github.event_name == 'schedule'
run: |
RUN_URL="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
ISSUE_BODY="The daily CI failed on ${failure_type} for rapids_singlecell. Please go to [the logs of the integration testing repo](${RUN_URL}) to review. @scverse/anndata"
gh issue create --repo scverse/rapids_singlecell --title "${{ env.issue_title }}" --body "${ISSUE_BODY}"



keepalive-job:
name: Keepalive Workflow
runs-on: ubuntu-latest
permissions:
actions: write
steps:
- name: Re-enable workflow
env:
GITHUB_TOKEN: ${{ github.token }}
shell: sh
run: |
gh api --verbose -X PUT "repos/${GITHUB_REPOSITORY}/actions/workflows/rsc_integration-test.yml/enable"
1 change: 1 addition & 0 deletions README.md
@@ -19,6 +19,7 @@ The following packages are tested:
- scvi-tools
- pertpy
- decoupler
- rapids-singlecell

## How it Works
