We love Spark! But in production code we're wary when we see:
```python
from pyspark.sql import DataFrame

def foo(df: DataFrame) -> DataFrame:
    # do stuff
    return df
```

Because… How do we know which columns are supposed to be in `df`?
Using typedspark, we can be more explicit about what these data should look like.
```python
from typedspark import Column, DataSet, Schema
from pyspark.sql.types import LongType, StringType

class Person(Schema):
    id: Column[LongType]
    name: Column[StringType]
    age: Column[LongType]

def foo(df: DataSet[Person]) -> DataSet[Person]:
    # do stuff
    return df
```

The advantages include:
- Improved readability of the code
- Typechecking, both during runtime and linting
- Auto-complete of column names
- Easy refactoring of column names
- Easier unit testing through the generation of empty DataSets based on their schemas (see the sketch after this list)
- Improved documentation of tables
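
To illustrate the last few points, here is a minimal sketch of how this can look in a unit test. It assumes typedspark's `create_empty_dataset()` helper and the `DataSet[Person](...)` cast; please check the documentation on readthedocs for the exact signatures.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import LongType, StringType
from typedspark import Column, DataSet, Schema, create_empty_dataset


class Person(Schema):
    id: Column[LongType]
    name: Column[StringType]
    age: Column[LongType]


spark = SparkSession.builder.getOrCreate()

# Generate an empty DataSet that follows the Person schema -- a convenient
# starting point for unit-test fixtures (assumes create_empty_dataset()).
persons: DataSet[Person] = create_empty_dataset(spark, Person)

# Column names are attributes on the schema class, so IDEs can auto-complete
# them and renaming a column becomes an ordinary refactor.
adults = persons.filter(Person.age >= 18)

# Casting a DataFrame back to DataSet[Person] validates the schema at runtime;
# missing or mistyped columns would raise an error here (assumed cast syntax).
validated = DataSet[Person](adults)
```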
Please see our documentation on readthedocs.
You can install typedspark from pypi by running:
```bash
pip install typedspark
```

By default, typedspark does not list pyspark as a dependency, since many platforms (e.g. Databricks) come with pyspark preinstalled. If you want to install typedspark with pyspark, you can run:
pip install "typedspark[pyspark]"ide.mov
You can find the corresponding code here.
*(Demo video: notebook.mov)*
You can find the corresponding code here.
I found a bug! What should I do?
Great! Please make an issue and we'll look into it.
I have a great idea to improve typedspark! How can we make this work?
Awesome, please make an issue and let us know!