_Credit: this cloning pattern is inspired by [Dan Gooden’s article on the Airtasker Tribe blog](https://medium.com/airtribe/test-sql-pipelines-against-production-clones-using-dbt-and-snowflake-2f8293722dd4)._
Cloning is a cost- and time-efficient way of developing dbt models on Snowflake, but it can be challenging when your cloning needs traverse environments with different access controls: for example, when you want to clone a production database for use in development.

A solution is a 2-step cloning pattern:

1. A production role clones the production database or schema, then changes the ownership of the clone's sub-objects to a developer role, creating a developer clone of production. The cloned object itself is still owned by the production role (which preserves the privilege to drop or replace that clone), but the developer role has full access to its sub-objects.
2. Developer users use the developer role to clone that developer clone, creating a new personal clone for development. The developer role has full ownership of this cloned database and all of its sub-objects.

This pattern can be used to clone a schema or a database. If all of your dbt models are stored within a single schema, schema-level cloning is a good option. When dbt is configured to write data to multiple schemata, database-level cloning is a good, more production-like option.
This pattern optimizes for the following:

- **Access Control:** no need to compromise on your access control system, such as by allowing your developer role extensive access on production. This pattern takes environmental separation as a given.
- **Flexible Availability:** step 1 can run on any preferred schedule: the developer clone could be updated hourly, daily, weekly, or on any other cadence. This first clone is ideally taken after a complete dbt run, for the freshest data possible.
- **Developer Flexibility:** developers can take personal clones whenever they need to, and can even take multiple clones if they need more than one concurrent development environment. These personal clones are ideally rotated regularly to keep data fresh and production-like.
## Setup
1. Update one of your production jobs to include step 1 of the cloning pattern. Here is an example implementation for database-level cloning from production to production_clone:
2. As needed, locally run step 2 of the cloning pattern to create or update personal development clones. Here is an example implementation for database-level cloning from production_clone to an ephemeral database called developer_clone_me:
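A minimal sketch of the two steps above as `dbt run-operation` calls. The macro name (`clone_database`) matches this package's documentation below; the database and role names (`production`, `production_clone`, `developer_clone_me`, `developer`) are the placeholders from the steps above, and the exact role setup is an assumption about your environment:

```shell
# Step 1 (scheduled production job, run as the production role):
# clone production and hand ownership of the clone's sub-objects to the developer role.
dbt run-operation clone_database --args "{'source_database': 'production', 'destination_database': 'production_clone', 'new_owner_role': 'developer'}"

# Step 2 (run locally, as needed, using the developer role):
# clone the developer clone into a personal development database.
dbt run-operation clone_database --args "{'source_database': 'production_clone', 'destination_database': 'developer_clone_me'}"
```

Because step 2 passes no `new_owner_role`, the developer role that runs it owns the personal clone outright and can drop and recreate it at will.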
----

This [dbt](https://github.com/dbt-labs/dbt-core) package contains Snowflake-specific macros.
Check [dbt Hub](https://hub.getdbt.com/montreal-analytics/snowflake_utils/latest/) for the latest installation instructions, or [read the docs](https://docs.getdbt.com/docs/package-management) for more information on installing packages.
## Prerequisites
Snowflake Utils is compatible with dbt 1.1.0 and later.
----
When compiling or generating docs, the console reports that dbt is using the incremental run warehouse, but that is not actually the case: during these operations, only the target warehouse is activated.
### clone_schema
This macro is a part of the recommended 2-step Cloning Pattern for dbt development, explained in detail [here](2-step_cloning_pattern.md).
This macro clones the source schema into the destination schema and optionally grants ownership over its tables and views to a new owner.
Note: the owner of the schema is the role that executed the command, but if configured, the owner of its sub-objects will be the new_owner_role. This is important for maintaining and replacing clones and is explained in more detail [here](2-step_cloning_pattern.md).
#### Arguments

* `source_schema` (required): The source schema name
* `destination_schema` (required): The destination schema name
* `source_database` (optional): The source database name; default value is your profile's target database
* `destination_database` (optional): The destination database name; default value is your profile's target database
* `new_owner_role` (optional): The new ownership role name. If no value is passed, the ownership will remain unchanged.

#### Usage

Call the macro as an [operation](https://docs.getdbt.com/docs/using-operations):
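A hedged sketch of such an operation call; the schema and role names (`analytics`, `analytics_dev`, `developer`) are placeholders, not values from this package:

```shell
# Clone a schema into a development schema and hand ownership of the
# cloned tables and views to the developer role.
dbt run-operation clone_schema --args "{'source_schema': 'analytics', 'destination_schema': 'analytics_dev', 'new_owner_role': 'developer'}"
```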
### clone_database

This macro is a part of the recommended 2-step Cloning Pattern for dbt development, explained in detail [here](2-step_cloning_pattern.md).

This macro clones the source database into the destination database and optionally grants ownership over its schemata and their tables and views to a new owner.

Note: the owner of the database is the role that executed the command, but if configured, the owner of its sub-objects will be the new_owner_role. This is important for maintaining and replacing clones and is explained in more detail [here](2-step_cloning_pattern.md).

#### Arguments
* `source_database` (required): The source database name
* `destination_database` (required): The destination database name
* `new_owner_role` (optional): The new ownership role name. If no value is passed, the ownership will remain unchanged.

#### Usage

Call the macro as an [operation](https://docs.getdbt.com/docs/using-operations):
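A hedged sketch of such an operation call; the database and role names (`production`, `production_clone`, `developer`) are placeholders:

```shell
# Clone the production database and hand ownership of the clone's
# sub-objects to the developer role (step 1 of the cloning pattern).
dbt run-operation clone_database --args "{'source_database': 'production', 'destination_database': 'production_clone', 'new_owner_role': 'developer'}"
```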
### drop_schema

This macro drops a schema in the selected database (defaults to the target database if no database is selected). A schema can only be dropped by the role that owns it.
#### Arguments

* `schema_name` (required): The schema to drop

#### Usage

Call the macro as an [operation](https://docs.getdbt.com/docs/using-operations):
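A hedged sketch of such an operation call; the schema name (`analytics_dev`) is a placeholder:

```shell
# Drop a development schema. This must be run by the role that owns the schema.
dbt run-operation drop_schema --args "{'schema_name': 'analytics_dev'}"
```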