---
title: Get started with pins
jupyter: python3
---
```{python}
#| include: false
import pandas as pd
pd.options.display.max_rows = 25
```
The pins package helps you publish data sets, models, and other Python objects, making it easy to share them across projects and with your colleagues.
You can pin objects to a variety of "boards", including local folders (to share on a networked drive or with Dropbox), Posit Connect, Amazon S3,
Google Cloud Storage, Azure, and more.
This vignette will introduce you to the basics of pins.
```{python}
from pins import board_local, board_folder, board_temp, board_url
```
## Getting started
Every pin lives in a pin *board*, so you must start by creating a pin board.
In this vignette I'll use a temporary board which is automatically deleted when your Python session is over:
```{python}
board = board_temp()
```
In real life, you'd pick a board depending on how you want to share the data.
Here are a few options:
```python
board = board_local()  # share data across Python sessions on the same computer
board = board_folder("~/Dropbox")  # share data with others using Dropbox
board = board_folder("Z:\\my-team\\pins")  # share data using a shared network drive
board = board_connect()  # share data with Posit Connect
```
## Reading and writing data
Once you have a pin board, you can write data to it with the [](`~pins.boards.BaseBoard.pin_write`) method:
```{python}
from pins.data import mtcars
meta = board.pin_write(mtcars, "mtcars", type="csv")
```
The first argument is the object to save (usually a data frame, but it can be any Python object), and the second argument gives the "name" of the pin.
The name is basically equivalent to a file name; you'll use it when you later want to read the data from the pin.
The only rule for a pin name is that it can't contain slashes.
After you've pinned an object, you can read it back with [](`~pins.boards.BaseBoard.pin_read`):
```{python}
board.pin_read("mtcars")
```
You don't need to supply the file type when reading data from a pin because pins automatically stores the file type in the [metadata](#metadata).
::: {.callout-note}
If you are using the Posit Connect board [](`~pins.board_connect`), then you must specify your pin name as
`"user_name/content_name"`. For example, `"hadley/sales-report"`.
:::
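For example, here is a hedged sketch of writing to and reading from a Connect board (not run in this vignette; it assumes you have a Posit Connect account, that the `CONNECT_SERVER` and `CONNECT_API_KEY` environment variables are set, and `"hadley"` is a placeholder user name):
```python
# Sketch only: requires a Posit Connect server and API key.
# board_connect() typically reads the CONNECT_SERVER and CONNECT_API_KEY
# environment variables; "hadley" is a placeholder user name.
from pins import board_connect
from pins.data import mtcars

connect_board = board_connect()
connect_board.pin_write(mtcars, "hadley/mtcars", type="csv")
connect_board.pin_read("hadley/mtcars")
```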
## How and what to store as a pin
Above, we saved the data as a CSV, but you can choose another option depending on your goals (a short example follows the list below):
- `type = "csv"` uses `to_csv()` from pandas to create a CSV file. CSVs are plain text and can be read easily by many applications, but they only support simple columns (e.g. numbers, strings), can take up a lot of disk space, and can be slow to read.
- `type = "parquet"` uses `to_parquet()` from pandas to create a Parquet file. [Parquet](https://parquet.apache.org/) is a modern, language-independent, column-oriented file format for efficient data storage and retrieval. Parquet is an excellent choice for storing tabular data.
- `type = "arrow"` uses `to_feather()` from pandas to create an Arrow/Feather file.
- `type = "joblib"` uses `joblib.dump()` to create a binary Python data file, such as for storing a trained model. See the [joblib docs](https://joblib.readthedocs.io/en/latest/) for more information.
- `type = "json"` uses `json.dump()` to create a JSON file. Pretty much every programming language can read JSON files, but they only work well for simple objects built from nested lists, dictionaries, and scalar values.
- `type = "geoparquet"` uses `to_parquet()` from [geopandas](https://github.com/geopandas/geopandas) to create a [GeoParquet](https://github.com/opengeospatial/geoparquet) file, which is a specialized Parquet format for geospatial data.
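For example, a tabular dataset like `mtcars` is usually better served by Parquet than CSV. A minimal sketch, assuming pandas has a Parquet engine such as pyarrow available:
```python
# Sketch: store the same data as Parquet instead of CSV
# (assumes a Parquet engine such as pyarrow is installed).
board.pin_write(mtcars, "mtcars-parquet", type="parquet")
board.pin_read("mtcars-parquet")
```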
Note that when the data lives elsewhere, pins takes care of downloading and caching so that it's only re-downloaded when needed.
That said, most boards transmit pins over HTTP, and this is going to be slow and possibly unreliable for very large pins.
As a general rule of thumb, we don't recommend using pins with files over 500 MB.
If you find yourself routinely pinning data larger than this, you might need to reconsider your data engineering pipeline.
Storing your data/object as a pin works well when you write from a single source or process. It is _not_ appropriate when multiple sources or processes need to write to the same pin; since the pins package reads and writes files, it cannot manage concurrent writes. It is also not appropriate for high frequency writes (multiple times per second).
- **Good** use for pins: an ETL pipeline that stores a model or summarized dataset once a day
- **Bad** use for pins: a Shiny app that collects data from users, who may be using the app at the same time
## Metadata
Every pin is accompanied by some metadata that you can access with [](`~pins.boards.BaseBoard.pin_meta`):
```{python}
board.pin_meta("mtcars")
```
This shows you the metadata that’s generated by default, which includes:
* `title`, a brief textual description of the dataset.
* an optional `description`, where you can provide more details.
* the date-time when the pin was `created`.
* the `file_size`, in bytes, of the underlying files.
* a unique `pin_hash` that you can supply to [](`~pins.boards.BaseBoard.pin_read`) to ensure that you’re reading exactly the data that you expect (see the sketch just after this list).
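For instance, here is a sketch of pinning down the exact data you expect by passing the hash along (assuming, as in current versions of pins, that the metadata object exposes a `pin_hash` field and that [](`~pins.boards.BaseBoard.pin_read`) accepts a `hash` argument):
```python
# Sketch: read the pin only if its contents match the recorded hash;
# pin_read() should raise an error if the hash no longer matches.
meta = board.pin_meta("mtcars")
board.pin_read("mtcars", hash=meta.pin_hash)
```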
When creating the pin, you can override the default description or provide additional metadata that is stored with the data:
```{python}
board.pin_write(
    mtcars,
    name="mtcars2",
    type="csv",
    description="Data extracted from the 1974 Motor Trend US magazine, comprising fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models).",
    metadata={
        "source": "Henderson and Velleman (1981), Building multiple regression models interactively. Biometrics, 37, 391–411."
    },
)
```
```{python}
board.pin_meta("mtcars2")
```
While we’ll do our best to keep the automatically generated metadata consistent over time, I’d recommend manually capturing anything you really care about in metadata.
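If you want to read your custom metadata back programmatically, it should be available under the `user` field of the object returned by [](`~pins.boards.BaseBoard.pin_meta`) (a sketch; check the printed metadata above for the exact layout in your version of pins):
```python
# Sketch: user-supplied metadata is kept separate from the
# automatically generated fields (here under `user`).
meta2 = board.pin_meta("mtcars2")
meta2.user["source"]
```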
## Versioning
Every call to [](`~pins.boards.BaseBoard.pin_write`) creates a new version:
```{python}
board2 = board_temp()
board2.pin_write([1, 2, 3, 4, 5], name="x", type="json")
board2.pin_write([1, 2, 3], name="x", type="json")
board2.pin_write([1, 2], name="x", type="json")
board2.pin_versions("x")
```
By default, [](`~pins.boards.BaseBoard.pin_read`) will return the most recent version:
```{python}
board2.pin_read("x")
```
But you can request an older version by supplying the `version` argument:
```{python}
version = board2.pin_versions("x").version[1]
board2.pin_read("x", version=version)
```
## Storing models
::: {.callout-warning}
The examples in this section use joblib to read and write data. Joblib uses the pickle format, and **pickle files are not secure**. Only read pickle files you trust. To read pickle files, create your board with the `allow_pickle_read=True` argument. [Learn more about pickling](https://docs.python.org/3/library/pickle.html).
:::
You can write a pin with `type="joblib"` to store arbitrary Python objects, including fitted models from packages like [scikit-learn](https://scikit-learn.org/).
For example, suppose you wanted to store a custom `namedtuple` object.
```{python}
from collections import namedtuple
board3 = board_temp(allow_pickle_read=True)
Coords = namedtuple("Coords", ["x", "y"])
coords = Coords(1, 2)
coords
```
Using `type="joblib"` lets you store and read back the custom `coords` object.
```{python}
board3.pin_write(coords, "my_coords", type="joblib")
board3.pin_read("my_coords")
```
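The same pattern works for fitted models. Here is a hedged sketch (assuming scikit-learn is installed) that pins a small regression model trained on `mtcars` to the `board3` created above and reads it back for prediction:
```python
# Sketch: pinning a fitted scikit-learn model (requires scikit-learn).
from sklearn.linear_model import LinearRegression
from pins.data import mtcars

model = LinearRegression().fit(mtcars[["hp", "wt"]], mtcars["mpg"])
board3.pin_write(model, "mtcars-model", type="joblib")

# Read the model back and use it as usual.
fitted = board3.pin_read("mtcars-model")
fitted.predict(mtcars[["hp", "wt"]].head(3))
```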
## Caching
The primary purpose of pins is to make it easy to share data.
But pins is also designed to help you spend as little time as possible downloading data.
[](`~pins.boards.BaseBoard.pin_read`) and [](`~pins.boards.BaseBoard.pin_download`) automatically cache remote pins: they maintain a local copy of the data (so it's fast) but always check that it's up-to-date (so your analysis doesn't use stale data).
Wouldn't it be nice if you could take advantage of this feature for any dataset on the internet?
That's the idea behind [](`~pins.board_url`); you can assemble your own board from datasets, wherever they live on the internet.
For example, this code creates a board containing a single pin, `penguins`, that refers to some fun data I found on GitHub:
```{python}
my_data = board_url("", {
"penguins": "https://raw.githubusercontent.com/allisonhorst/palmerpenguins/master/inst/extdata/penguins_raw.csv"
})
```
You can read this data by combining [](`~pins.boards.BaseBoard.pin_download`) with `read_csv` from pandas:
```{python}
fname = my_data.pin_download("penguins")
fname
```
```{python}
import pandas as pd
pd.read_csv(fname[0]).head()
```
Because pins caches downloads, calling [](`~pins.boards.BaseBoard.pin_download`) again reuses the locally cached copy rather than fetching the file from scratch:
```{python}
my_data.pin_download("penguins")
```