fix code block highlighting #3509

Merged · 2 commits merged into from Mar 17, 2025
32 changes: 32 additions & 0 deletions contribute/style-guide.md
@@ -92,6 +92,38 @@ Code blocks:
- Have a title (optional) such as 'Query' or 'Response'
- Use language `response` if it is for the result of a query.

### Highlighting

You can highlight lines in a code block using the following keywords:

- `highlight-next-line`
- `highlight-start`
- `highlight-end`

Add these keywords as comments in the code block, using the comment syntax of
the code block's language.

For example, if the code block is SQL:

```text
SELECT UserID, count(UserID) AS Count
-- highlight-next-line
FROM mv_hits_URL_UserID
WHERE URL = 'http://public_search'
GROUP BY UserID
ORDER BY Count DESC
LIMIT 10;
```

If the code block is a response:

```text
10 rows in set. Elapsed: 0.026 sec.
# highlight-next-line
Processed 335.87 thousand rows,
13.54 MB (12.91 million rows/s., 520.38 MB/s.)
```
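The `highlight-start` and `highlight-end` keywords mark a multi-line range instead of a single line. As a sketch (reusing the SQL query above, with SQL comment syntax), a highlighted range might look like:

```text
SELECT UserID, count(UserID) AS Count
FROM mv_hits_URL_UserID
-- highlight-start
WHERE URL = 'http://public_search'
GROUP BY UserID
-- highlight-end
ORDER BY Count DESC
LIMIT 10;
```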

### Associated markdown rule or CI check

- [`MD040` enforces that code blocks have a language specified](/scripts/.markdownlint-cli2.yaml)
4 changes: 2 additions & 2 deletions docs/cloud/bestpractices/avoidnullablecolumns.md
@@ -13,7 +13,7 @@ To avoid `Nullable` columns, consider setting a default value for that column.
CREATE TABLE default.sample
(
`x` Int8,
# highlight-next-line
-- highlight-next-line
`y` Nullable(Int8)
)
ENGINE = MergeTree
@@ -25,7 +25,7 @@ use
CREATE TABLE default.sample2
(
`x` Int8,
# highlight-next-line
-- highlight-next-line
`y` Int8 DEFAULT 0
)
ENGINE = MergeTree
60 changes: 30 additions & 30 deletions docs/guides/best-practices/sparse-primary-indexes.md
@@ -146,7 +146,7 @@ The response is:
└────────────────────────────────┴───────┘

10 rows in set. Elapsed: 0.022 sec.
// highlight-next-line
# highlight-next-line
Processed 8.87 million rows,
70.45 MB (398.53 million rows/s., 3.17 GB/s.)
```
@@ -183,7 +183,7 @@ CREATE TABLE hits_UserID_URL
`EventTime` DateTime
)
ENGINE = MergeTree
// highlight-next-line
-- highlight-next-line
PRIMARY KEY (UserID, URL)
ORDER BY (UserID, URL, EventTime)
SETTINGS index_granularity = 8192, index_granularity_bytes = 0, compress_primary_key = 0;
@@ -532,7 +532,7 @@ The response is:
└────────────────────────────────┴───────┘

10 rows in set. Elapsed: 0.005 sec.
// highlight-next-line
# highlight-next-line
Processed 8.19 thousand rows,
740.18 KB (1.53 million rows/s., 138.59 MB/s.)
```
@@ -543,13 +543,13 @@ The output for the ClickHouse client is now showing that instead of doing a full
If <a href="https://clickhouse.com/docs/operations/server-configuration-parameters/settings/#server_configuration_parameters-logger" target="_blank">trace logging</a> is enabled then the ClickHouse server log file shows that ClickHouse was running a <a href="https://github.com/ClickHouse/ClickHouse/blob/22.3/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp#L1452" target="_blank">binary search</a> over the 1083 UserID index marks, in order to identify granules that possibly can contain rows with a UserID column value of `749927693`. This requires 19 steps with an average time complexity of `O(log2 n)`:
```response
...Executor): Key condition: (column 0 in [749927693, 749927693])
// highlight-next-line
# highlight-next-line
...Executor): Running binary search on index range for part all_1_9_2 (1083 marks)
...Executor): Found (LEFT) boundary mark: 176
...Executor): Found (RIGHT) boundary mark: 177
...Executor): Found continuous range in 19 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
// highlight-next-line
# highlight-next-line
1/1083 marks by primary key, 1 marks to read from 1 ranges
...Reading ...approx. 8192 rows starting from 1441792
```
@@ -597,7 +597,7 @@ The response looks like:
│ UserID │
│ Condition: (UserID in [749927693, 749927693]) │
│ Parts: 1/1 │
// highlight-next-line
# highlight-next-line
│ Granules: 1/1083 │
└───────────────────────────────────────────────────────────────────────────────────────┘

@@ -765,7 +765,7 @@ The response is: <a name="query-on-url-slow"></a>
└────────────┴───────┘

10 rows in set. Elapsed: 0.086 sec.
// highlight-next-line
# highlight-next-line
Processed 8.81 million rows,
799.69 MB (102.11 million rows/s., 9.27 GB/s.)
```
@@ -776,11 +776,11 @@ If [trace_logging](/operations/server-configuration-parameters/settings#logger)
```response
...Executor): Key condition: (column 1 in ['http://public_search',
'http://public_search'])
// highlight-next-line
# highlight-next-line
...Executor): Used generic exclusion search over index for part all_1_9_2
with 1537 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
// highlight-next-line
# highlight-next-line
1076/1083 marks by primary key, 1076 marks to read from 5 ranges
...Executor): Reading approx. 8814592 rows with 10 streams
```
@@ -927,7 +927,7 @@ CREATE TABLE hits_URL_UserID
`EventTime` DateTime
)
ENGINE = MergeTree
// highlight-next-line
-- highlight-next-line
PRIMARY KEY (URL, UserID)
ORDER BY (URL, UserID, EventTime)
SETTINGS index_granularity = 8192, index_granularity_bytes = 0, compress_primary_key = 0;
@@ -964,7 +964,7 @@ This is the resulting primary key:
That can now be used to significantly speed up the execution of our example query filtering on the URL column in order to calculate the top 10 users that most frequently clicked on the URL "http://public_search":
```sql
SELECT UserID, count(UserID) AS Count
// highlight-next-line
-- highlight-next-line
FROM hits_URL_UserID
WHERE URL = 'http://public_search'
GROUP BY UserID
@@ -990,7 +990,7 @@ The response is:
└────────────┴───────┘

10 rows in set. Elapsed: 0.017 sec.
// highlight-next-line
# highlight-next-line
Processed 319.49 thousand rows,
11.38 MB (18.41 million rows/s., 655.75 MB/s.)
```
@@ -1004,13 +1004,13 @@ The corresponding trace log in the ClickHouse server log file confirms that:
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
// highlight-next-line
# highlight-next-line
...Executor): Running binary search on index range for part all_1_9_2 (1083 marks)
...Executor): Found (LEFT) boundary mark: 644
...Executor): Found (RIGHT) boundary mark: 683
...Executor): Found continuous range in 19 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
// highlight-next-line
# highlight-next-line
39/1083 marks by primary key, 39 marks to read from 1 ranges
...Executor): Reading approx. 319488 rows with 2 streams
```
@@ -1055,19 +1055,19 @@ The response is:
└────────────────────────────────┴───────┘

10 rows in set. Elapsed: 0.024 sec.
// highlight-next-line
# highlight-next-line
Processed 8.02 million rows,
73.04 MB (340.26 million rows/s., 3.10 GB/s.)
```

Server Log:
```response
...Executor): Key condition: (column 1 in [749927693, 749927693])
// highlight-next-line
# highlight-next-line
...Executor): Used generic exclusion search over index for part all_1_9_2
with 1453 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
// highlight-next-line
# highlight-next-line
980/1083 marks by primary key, 980 marks to read from 23 ranges
...Executor): Reading approx. 8028160 rows with 10 streams
```
@@ -1119,7 +1119,7 @@ ClickHouse is storing the [column data files](#data-is-stored-on-disk-ordered-by
The implicitly created table (and its primary index) backing the materialized view can now be used to significantly speed up the execution of our example query filtering on the URL column:
```sql
SELECT UserID, count(UserID) AS Count
// highlight-next-line
-- highlight-next-line
FROM mv_hits_URL_UserID
WHERE URL = 'http://public_search'
GROUP BY UserID
@@ -1144,7 +1144,7 @@ The response is:
└────────────┴───────┘

10 rows in set. Elapsed: 0.026 sec.
// highlight-next-line
# highlight-next-line
Processed 335.87 thousand rows,
13.54 MB (12.91 million rows/s., 520.38 MB/s.)
```
@@ -1156,11 +1156,11 @@ The corresponding trace log in the ClickHouse server log file confirms that Clic
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
// highlight-next-line
# highlight-next-line
...Executor): Running binary search on index range ...
...
...Executor): Selected 4/4 parts by partition key, 4 parts by primary key,
// highlight-next-line
# highlight-next-line
41/1083 marks by primary key, 41 marks to read from 4 ranges
...Executor): Reading approx. 335872 rows with 4 streams
```
@@ -1203,7 +1203,7 @@ ClickHouse is storing the [column data files](#data-is-stored-on-disk-ordered-by
The hidden table (and its primary index) created by the projection can now be (implicitly) used to significantly speed up the execution of our example query filtering on the URL column. Note that the query is syntactically targeting the source table of the projection.
```sql
SELECT UserID, count(UserID) AS Count
// highlight-next-line
-- highlight-next-line
FROM hits_UserID_URL
WHERE URL = 'http://public_search'
GROUP BY UserID
@@ -1228,7 +1228,7 @@ The response is:
└────────────┴───────┘

10 rows in set. Elapsed: 0.029 sec.
// highlight-next-line
# highlight-next-line
Processed 319.49 thousand rows,
11.38 MB (11.05 million rows/s., 393.58 MB/s.)
```
@@ -1241,14 +1241,14 @@ The corresponding trace log in the ClickHouse server log file confirms that Clic
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
// highlight-next-line
# highlight-next-line
...Executor): Running binary search on index range for part prj_url_userid (1083 marks)
...Executor): ...
// highlight-next-line
# highlight-next-line
...Executor): Choose complete Normal projection prj_url_userid
...Executor): projection required columns: URL, UserID
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
// highlight-next-line
# highlight-next-line
39/1083 marks by primary key, 39 marks to read from 1 ranges
...Executor): Reading approx. 319488 rows with 2 streams
```
@@ -1327,7 +1327,7 @@ CREATE TABLE hits_URL_UserID_IsRobot
`IsRobot` UInt8
)
ENGINE = MergeTree
// highlight-next-line
-- highlight-next-line
PRIMARY KEY (URL, UserID, IsRobot);
```

@@ -1355,7 +1355,7 @@ CREATE TABLE hits_IsRobot_UserID_URL
`IsRobot` UInt8
)
ENGINE = MergeTree
// highlight-next-line
-- highlight-next-line
PRIMARY KEY (IsRobot, UserID, URL);
```
And populate it with the same 8.87 million rows that we used to populate the previous table:
@@ -1395,7 +1395,7 @@ The response is:
└─────────┘

1 row in set. Elapsed: 0.026 sec.
// highlight-next-line
# highlight-next-line
Processed 7.92 million rows,
31.67 MB (306.90 million rows/s., 1.23 GB/s.)
```
@@ -1413,7 +1413,7 @@ The response is:
└─────────┘

1 row in set. Elapsed: 0.003 sec.
// highlight-next-line
# highlight-next-line
Processed 20.32 thousand rows,
81.28 KB (6.61 million rows/s., 26.44 MB/s.)
```
8 changes: 4 additions & 4 deletions docs/integrations/data-ingestion/gcs/index.md
@@ -155,7 +155,7 @@ CREATE TABLE trips_gcs
ENGINE = MergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
# highlight-next-line
-- highlight-next-line
SETTINGS storage_policy='gcs_main'
```

@@ -468,10 +468,10 @@ zk_synced_followers 2
# highlight-end
```


### Start ClickHouse server {#start-clickhouse-server}

On `chnode1` and `chnode2` run:

```bash
sudo service clickhouse-server start
```
@@ -546,7 +546,7 @@ cache_path:
```
#### Verify that tables created on the cluster are created on both nodes {#verify-that-tables-created-on-the-cluster-are-created-on-both-nodes}
```sql
# highlight-next-line
-- highlight-next-line
create table trips on cluster 'cluster_1S_2R' (
`trip_id` UInt32,
`pickup_date` Date,
@@ -564,7 +564,7 @@ create table trips on cluster 'cluster_1S_2R' (
ENGINE = ReplicatedMergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
# highlight-next-line
-- highlight-next-line
SETTINGS storage_policy='gcs_main'
```
```response