
Refactor parquet dataloader #867

Open · wants to merge 88 commits into base: main
Conversation

Contributor

@zyaoj zyaoj commented Dec 3, 2024

What does this PR do? Please describe:
A first attempt to extract and migrate the generic parquet dataloader from MERES to fairseq2.

Does your PR introduce any breaking changes? If yes, please list them:
N/A

Check list:

  • Was the content of this PR discussed and approved via a GitHub issue? (no need for typos or documentation improvements)
  • Did you read the contributor guideline?
  • Did you make sure that your PR does only one thing instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (no need for typos, documentation, or minor internal changes)

@zyaoj zyaoj self-assigned this Dec 3, 2024
@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 3, 2024
@zyaoj zyaoj marked this pull request as ready for review January 3, 2025 13:46
@zyaoj zyaoj removed request for artemru and cbalioglu January 3, 2025 13:47
@zyaoj zyaoj marked this pull request as draft January 3, 2025 13:47
@artemru artemru force-pushed the zyaoj/refactor-data-parquet-fairseq2 branch from aa4c4f7 to 25fc843 Compare February 10, 2025 11:53
@artemru artemru force-pushed the zyaoj/refactor-data-parquet-fairseq2 branch from c4354fe to 5a41814 Compare March 10, 2025 12:57


### On filtering
One can use the `partition_filters` parameter (a `pa.compute.Expression`) to restrict the dataset to a subset of the parquet files.
Contributor

Is it possible to apply filters that use non-partition columns (e.g. some numeric scores computed within a dataset)?

Contributor Author
@zyaoj zyaoj Mar 24, 2025

I can think of a simple example:

```python
import tempfile
from pathlib import Path

import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

table = pa.Table.from_pydict(
    {
        "col1": [1, 2, 3, 4, 5],
        "col2": ["a", "b", "c", "d", "e"],
        "col3": [1.1, 2.2, 3.3, 4.4, 5.5],
    }
)

# Create a temporary directory and write the parquet file
with tempfile.TemporaryDirectory() as temp_dir:
    file_path = Path(temp_dir) / "test.parquet"
    pq.write_table(table, file_path)

    # The way we do the filter conversion in this PR: eval the filter
    # string into an expression, then normalize it via filters_to_expression
    filters = pq.filters_to_expression(eval("pc.field('col1') == pc.scalar(1)"))
    dataset = pq.ParquetDataset(file_path, filters=filters, filesystem=None)
    df = dataset.read().to_pandas()
```
I haven't tried more advanced lambda functions that dynamically compute values yet.

Contributor
@artemru artemru Mar 24, 2025
Yes, it's possible to pass this filter expression to `FragmentLoadingConfig(filters="(pc.field('col1') == 1) & (pc.field('col2') > 1)")`, which will apply the filter in memory just after loading the table. These filters should be expressed in terms of the original dataset namespace (not the post-renaming names).
https://github.com/facebookresearch/fairseq2/pull/867/files#diff-5965d7b9782e8b1d6ee5af20dd9dd95e4e26d6f4bfc6262645f8bbb2fe5bb239R147
You can pass them as a string (to be evaluated via Python `eval`) or directly as a `pa.dataset.Expression`.

Contributor

So we offer both partition filters (applied at fragment scheduling time) and arbitrary row-level filters (applied in memory after loading).

### Benefits of Using Hugging Face Datasets with fairseq2

- **No Download Required**: Access datasets directly from Hugging Face Hub without downloading them first
- **Efficient Loading**: Only load the necessary parts of the dataset
Contributor

But `SafeFragment` will do an automatic retry in case of a lost connection!
