Refactor parquet dataloader #867
base: main
Conversation
src/fairseq2/data/parquet/README.md (outdated)
### On filtering
One can use the `partition_filters` parameter (a `pa.compute.Expression`) to restrict the dataset to a subset of the parquet files.
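For illustration, a minimal, self-contained sketch of what such a partition filter looks like in plain pyarrow (the `split` column and the dataset layout are hypothetical; only the `partition_filters` parameter itself comes from this README):

```python
import tempfile

import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.dataset as ds
import pyarrow.parquet as pq

table = pa.Table.from_pydict(
    {"text": ["a", "b", "c", "d"], "split": ["train", "train", "valid", "valid"]}
)

with tempfile.TemporaryDirectory() as root:
    # Write a hive-partitioned dataset: one directory of parquet files per split.
    pq.write_to_dataset(table, root, partition_cols=["split"])

    # A partition filter prunes whole files before any row data is read.
    partition_filters = pc.field("split") == "train"
    dataset = ds.dataset(root, partitioning="hive")
    print(dataset.to_table(filter=partition_filters).num_rows)  # 2
```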
Is it possible to apply filters that use non-partition columns (e.g. some numeric scores computed within a dataset)?
I could think of a simple example like the following:
```python
import tempfile
from pathlib import Path

import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

table = pa.Table.from_pydict(
    {
        "col1": [1, 2, 3, 4, 5],
        "col2": ["a", "b", "c", "d", "e"],
        "col3": [1.1, 2.2, 3.3, 4.4, 5.5],
    }
)

# Create a temporary directory and file
with tempfile.TemporaryDirectory() as temp_dir:
    file_path = Path(temp_dir) / "test.parquet"

    # Write the parquet file
    pq.write_table(table, file_path)

    # The way we do the filter conversion in this PR
    filters = pq.filters_to_expression(eval("pc.field('col1') == pc.scalar(1)"))
    dataset = pq.ParquetDataset(file_path, filters=filters, filesystem=None)
    df = dataset.read().to_pandas()
```
Haven't tried more advanced lambda functions that dynamically compute some values yet.
Yes, it's possible to pass this filter expression to `FragmentLoadingConfig(filters="(pc.field('col1') == 1) & (pc.field('col2') > 1)")`, which will apply the filter in memory just after loading the table. These filters should be expressed in terms of the original dataset namespace (i.e. before any renaming).
https://github.com/facebookresearch/fairseq2/pull/867/files#diff-5965d7b9782e8b1d6ee5af20dd9dd95e4e26d6f4bfc6262645f8bbb2fe5bb239R147
You can pass them either as a string (which gets `eval`ed in Python) or directly as a `pa.dataset.Expression`.
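For concreteness, a minimal sketch of both spellings (the import path and the bare constructor call are assumptions; only the `filters` parameter and its two accepted forms are described in this PR):

```python
import pyarrow.compute as pc

# Assumed import path for the config class introduced in this PR.
from fairseq2.data.parquet.fragment_loading import FragmentLoadingConfig

# As a string: eval'ed with `pc` in scope. Column names refer to the
# original dataset namespace, i.e. before any renaming.
config = FragmentLoadingConfig(
    filters="(pc.field('col1') == 1) & (pc.field('col2') > 1)"
)

# Or directly as a pyarrow expression, built programmatically.
config = FragmentLoadingConfig(
    filters=(pc.field("col1") == 1) & (pc.field("col2") > 1)
)
```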
So, we offer both partition filters (applied at fragment scheduling time) and arbitrary row-level filters (applied after the data is loaded).
### Benefits of Using Hugging Face Datasets with fairseq2

- **No Download Required**: Access datasets directly from the Hugging Face Hub without downloading them first
- **Efficient Loading**: Only load the necessary parts of the dataset
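As an aside, a sketch of one common way to read parquet directly from the Hub with pyarrow (the repo and file path are hypothetical, and this is not necessarily what fairseq2 does internally):

```python
import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem  # fsspec-compatible filesystem

fs = HfFileSystem()
# Hypothetical dataset repo and shard name.
path = "datasets/some-user/some-dataset/data/train-00000-of-00001.parquet"

# Column projection means only the required byte ranges are fetched,
# not the whole file.
table = pq.read_table(path, filesystem=fs, columns=["text"])
```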
But `SafeFragment` will automatically retry in case of a lost connection!
What does this PR do? Please describe:
A first attempt to extract the generic parquet dataloader from MERES and migrate it to fairseq2.
Does your PR introduce any breaking changes? If yes, please list them:
N/A
Check list: