Schema change to add DecompressedLength column to Resource table #5247
base: main
```diff
@@ -1,5 +1,5 @@
 {
   "sdk": {
-    "version": "9.0.305"
+    "version": "9.0.306"
   }
 }
```
Large diffs are not rendered by default.
```diff
@@ -110,5 +110,6 @@ public enum SchemaVersion
         V98 = 98,
         V99 = 99,
         V100 = 100,
+        V101 = 101,
     }
 }
```
```diff
@@ -14,7 +14,8 @@ CREATE TABLE dbo.CurrentResource -- This is replaced by view CurrentResource
     IsRawResourceMetaSet bit NOT NULL,
     SearchParamHash varchar(64) NULL,
     TransactionId bigint NULL,
-    HistoryTransactionId bigint NULL
+    HistoryTransactionId bigint NULL,
+    DecompressedLength decimal(36,18) NULL
```
Contributor: Why can it not be `int`?

Contributor (Author): I should have used `INT`. I will change it.
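The follow-up suggests the column would become an integer type. A minimal sketch of that variant (hypothetical; the diff as shown still uses `decimal(36,18)`) — note that in SQL Server a `decimal` with precision 36 is stored in 17 bytes per value, while `int` takes 4 bytes and `bigint` 8:

```sql
-- Hypothetical variant discussed in the thread: an integer column instead of decimal.
-- decimal(36,18) is stored in 17 bytes per value; int is 4 bytes (max ~2.1 GB,
-- enough for a decompressed FHIR resource), bigint is 8 bytes if larger payloads
-- ever need to be represented.
ALTER TABLE dbo.Resource ADD DecompressedLength int NULL
```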
```diff
 )
 GO
 DROP TABLE dbo.CurrentResource
@@ -32,7 +33,8 @@ CREATE TABLE dbo.Resource
     IsRawResourceMetaSet bit NOT NULL DEFAULT 0,
     SearchParamHash varchar(64) NULL,
     TransactionId bigint NULL, -- used for main CRUD operation
-    HistoryTransactionId bigint NULL -- used by CRUD operation that moved resource version in invisible state
+    HistoryTransactionId bigint NULL, -- used by CRUD operation that moved resource version in invisible state
+    DecompressedLength decimal(36,18) NULL
 
     CONSTRAINT PKC_Resource PRIMARY KEY CLUSTERED (ResourceTypeId, ResourceSurrogateId) WITH (DATA_COMPRESSION = PAGE) ON PartitionScheme_ResourceTypeId(ResourceTypeId),
     CONSTRAINT CH_Resource_RawResource_Length CHECK (RawResource > 0x0)
```
Reviewer: Parsing JSON on each merge call sounds expensive. Instead, we should add the decompressed length to the ResourceList table type. That would take multiple iterations to keep compatibility and reach the final clean state.
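SQL Server user-defined table types cannot be altered in place, which is why extending ResourceList would take several releases: a new version of the type has to be created alongside the old one, procedures migrated, and only then can the old type be dropped. A hypothetical sketch of the first iteration (the `_2` type name and column list are assumptions, not from the PR):

```sql
-- Hypothetical first iteration: table types cannot be ALTERed, so a new
-- version of the type is created next to the existing dbo.ResourceList.
CREATE TYPE dbo.ResourceList_2 AS TABLE
(
    ResourceTypeId      smallint NOT NULL,
    ResourceSurrogateId bigint   NOT NULL,
    DecompressedLength  int      NULL -- new column carried explicitly, no JSON parsing
    -- ... remaining ResourceList columns unchanged
)
-- Later iterations: update procedures to accept ResourceList_2, migrate all
-- callers, then DROP TYPE dbo.ResourceList.
```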
Contributor (Author): I was debating changing the ResourceList table type, but as you mentioned, that needs multiple iterations of code releases to reach the final clean state. From what I have researched, the additional parsing cost is a few hundred milliseconds for around 10,000 rows. I will discuss offline with you which option to go with.
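The alternative being costed here is passing the lengths into the merge procedure as a JSON parameter and parsing it server-side. A minimal sketch of that approach (the parameter and property names are assumptions for illustration, not from the PR):

```sql
-- Hypothetical sketch of the JSON-parameter approach under discussion.
-- @DecompressedLengths might look like:
--   '[{"ResourceSurrogateId":123,"DecompressedLength":4567}, ...]'
SELECT ResourceSurrogateId, DecompressedLength
  FROM OPENJSON(@DecompressedLengths)
       WITH (ResourceSurrogateId bigint, DecompressedLength int)
-- This parse runs on every merge call; the author estimates a few hundred
-- milliseconds of overhead per ~10,000 rows.
```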
Reviewer: Maybe. But imagine that we start adding extra JSON vars for each new column we add... This looks like an antipattern. Also, there are extra joins to @Resources, and those are extra cost too. None of this applies in the case of a table type change. BTW, you should get fully functioning code on the first iteration.