```
-### Read Formats {#read-formats}
+### Read formats {#read-formats}
Read formats control the data types of values returned from the client `query`, `query_np`, and `query_df` methods. (The `raw_query`
and `query_arrow` methods do not modify incoming data from ClickHouse, so format control does not apply.) For example, if the read format
@@ -878,7 +878,7 @@ client.query('SELECT user_id, user_uuid, device_uuid from users', query_formats=
client.query('SELECT device_id, dev_address, gw_address from devices', column_formats={'dev_address':'string'})
```
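Where a per-query override is not enough, read formats can also be set process-wide. A minimal sketch, assuming the `set_read_format` helper in the `clickhouse_connect.datatypes.format` package (the type patterns shown are illustrative):

```python
from clickhouse_connect.datatypes.format import set_read_format

# Return both IPv6 and IPv4 values as strings for all subsequent queries
set_read_format('IPv*', 'string')

# Return all Date types as the underlying epoch day integer
set_read_format('Date*', 'int')
```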
-#### Read Format Options (Python Types) {#read-format-options-python-types}
+#### Read format options (Python types) {#read-format-options-python-types}
| ClickHouse Type | Native Python Type | Read Formats | Comments |
|-----------------------|-----------------------|--------------|-------------------------------------------------------------------------------------------------------------------|
@@ -906,7 +906,7 @@ client.query('SELECT device_id, dev_address, gw_address from devices', column_fo
| Dynamic | object | - | Returns the matching Python type for the ClickHouse datatype stored for the value |
-### External Data {#external-data}
+### External data {#external-data}
ClickHouse queries can accept external data in any ClickHouse format. This binary data is sent along with the query string and is used to process the query. Details of
the External Data feature are [here](/engines/table-engines/special/external-data.md). The client `query*` methods accept an optional `external_data` parameter
@@ -941,7 +941,7 @@ result = client.query('SELECT name, avg(rating) FROM directors INNER JOIN movies
Additional external data files can be added to the initial ExternalData object using the `add_file` method, which takes the same parameters
as the constructor. For HTTP, all external data is transmitted as part of a `multi-part/form-data` file upload.
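As a brief sketch of the `add_file` pattern described above (the import path, file locations, and structures here are illustrative and should be checked against your installed version):

```python
import clickhouse_connect
from clickhouse_connect.driver.external import ExternalData

client = clickhouse_connect.get_client()

# Build the initial external data object from a local CSV file
ext_data = ExternalData(file_path='/data/movies.csv', fmt='CSV',
                        structure=['movie String', 'year UInt16', 'rating Decimal32(3)'])

# add_file takes the same parameters as the ExternalData constructor
ext_data.add_file(file_path='/data/directors.csv', fmt='CSV',
                  structure=['name String', 'birth_year UInt16'])

result = client.query('SELECT count() FROM movies', external_data=ext_data)
```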
-### Time Zones {#time-zones}
+### Time zones {#time-zones}
There are multiple mechanisms for applying a time zone to ClickHouse DateTime and DateTime64 values. Internally, the ClickHouse server always stores any DateTime or DateTime64
object as a time zone naive number representing seconds since the epoch, 1970-01-01 00:00:00 UTC time. For DateTime64 values, the representation can be milliseconds, microseconds,
or nanoseconds since the epoch, depending on precision. As a result, the application of any time zone information always occurs on the client side. Note that this involves meaningful
@@ -961,7 +961,7 @@ timezone metadata is not available to clickhouse-connect for DateTime columns pr
Note that if the applied timezone based on these rules is UTC, `clickhouse-connect` will _always_ return a time zone naive Python `datetime.datetime` object. Additional timezone
information can then be added to this timezone naive object by the application code if desired.
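A hedged sketch of applying time zones on read; the `query_tz` and `column_tzs` keyword arguments are assumed here, and the table and column names are purely illustrative:

```python
import clickhouse_connect

client = clickhouse_connect.get_client()

# Apply one time zone to every DateTime/DateTime64 column in the result
rows = client.query('SELECT event_time, user_id FROM events',
                    query_tz='America/Denver').result_rows

# Or apply time zones per column
rows = client.query('SELECT created_at, updated_at FROM accounts',
                    column_tzs={'created_at': 'UTC',
                                'updated_at': 'Europe/Berlin'}).result_rows
```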
-## Inserting Data with ClickHouse Connect: Advanced Usage {#inserting-data-with-clickhouse-connect--advanced-usage}
+## Inserting data with ClickHouse Connect: Advanced usage {#inserting-data-with-clickhouse-connect--advanced-usage}
### InsertContexts {#insertcontexts}
@@ -990,7 +990,7 @@ assert qr[0][0] == 4
InsertContexts include mutable state that is updated during the insert process, so they are not thread safe.
-### Write Formats {#write-formats}
+### Write formats {#write-formats}
Write formats are currently implemented for a limited number of types. In most cases ClickHouse Connect will attempt to
automatically determine the correct write format for a column by checking the type of the first (non-null) data value.
For example, if inserting into a DateTime column, and the first insert value of the column is a Python integer, ClickHouse
@@ -999,7 +999,7 @@ Connect will directly insert the integer value under the assumption that it's ac
In most cases, it is unnecessary to override the write format for a data type, but the associated methods in the
`clickhouse_connect.datatypes.format` package can be used to do so at a global level.
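For illustration, a minimal sketch of a global override, assuming a `set_write_format` helper that mirrors `set_read_format` in the same package:

```python
from clickhouse_connect.datatypes.format import set_write_format

# Treat Python strings bound for IPv6 columns as string literals rather than raw bytes
set_write_format('IPv6', 'string')
```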
-#### Write Format Options {#write-format-options}
+#### Write format options {#write-format-options}
| ClickHouse Type | Native Python Type | Write Formats | Comments |
|-----------------------|-----------------------|---------------|-------------------------------------------------------------------------------------------------------------|
@@ -1027,11 +1027,11 @@ In most cases, it is unnecessary to override the write format for a data type, b
| Dynamic | object | | Warning -- at this time any inserts into a Dynamic column are persisted as a ClickHouse String |
-## Additional Options {#additional-options}
+## Additional options {#additional-options}
ClickHouse Connect provides a number of additional options for advanced use cases.
-### Global Settings {#global-settings}
+### Global settings {#global-settings}
There are a small number of settings that control ClickHouse Connect behavior globally. They are accessed from the top
level `common` package:
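For illustration, a minimal sketch of reading and changing a global setting (the setting names here are assumptions; see the table of common settings for the authoritative list):

```python
from clickhouse_connect import common

# Inspect the current value of a global setting
print(common.get_setting('autogenerate_session_id'))

# Change it for every client created afterwards in this process
common.set_setting('autogenerate_session_id', False)
```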
@@ -1092,7 +1092,7 @@ Note that the `raw*` client methods don't use the compression specified by the c
We also recommend against using `gzip` compression, as it is significantly slower than the alternatives for both compressing
and decompressing data.
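As an illustrative sketch, a specific algorithm can be requested when creating the client via the `compress` parameter (the host shown is a placeholder):

```python
import clickhouse_connect

# Prefer lz4 for both query results and inserts instead of the negotiated default
client = clickhouse_connect.get_client(host='localhost', compress='lz4')
```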
-### HTTP Proxy Support {#http-proxy-support}
+### HTTP proxy support {#http-proxy-support}
ClickHouse Connect adds basic HTTP proxy support using the `urllib3` library. It recognizes the standard `HTTP_PROXY` and
`HTTPS_PROXY` environment variables. Note that using these environment variables will apply to any client created with the
@@ -1103,25 +1103,25 @@ documentation.
To use a SOCKS proxy, you can pass a `urllib3` `SOCKSProxyManager` as the `pool_mgr` argument to `get_client`. Note that
this will require installing the PySocks library either directly or using the `[socks]` option for the `urllib3` dependency.
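A minimal sketch of wiring in a SOCKS proxy (the proxy URL and host are placeholders):

```python
import clickhouse_connect
from urllib3.contrib.socks import SOCKSProxyManager

# Requires PySocks, e.g. installed via `pip install urllib3[socks]`
proxy_mgr = SOCKSProxyManager('socks5h://localhost:1080')

client = clickhouse_connect.get_client(host='my.clickhouse.host', pool_mgr=proxy_mgr)
```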
-### "Old" JSON Data Type {#old-json-data-type}
+### "Old" JSON data type {#old-json-data-type}
The experimental `Object` (or `Object('json')`) data type is deprecated and should be avoided in a production environment.
ClickHouse Connect continues to provide limited support for the data type for backward compatibility. Note that this
support does not include queries that are expected to return "top level" or "parent" JSON values as dictionaries or the
equivalent, and such queries will result in an exception.
-### "New" Variant/Dynamic/JSON Datatypes (Experimental Feature) {#new-variantdynamicjson-datatypes-experimental-feature}
+### "New" Variant/Dynamic/JSON datatypes (experimental feature) {#new-variantdynamicjson-datatypes-experimental-feature}
Beginning with the 0.8.0 release, `clickhouse-connect` provides experimental support for the new (also experimental)
ClickHouse types Variant, Dynamic, and JSON.
-#### Usage Notes {#usage-notes}
+#### Usage notes {#usage-notes}
- JSON data can be inserted as either a Python dictionary or a JSON string containing a JSON object `{}` (a brief sketch follows this list). Other
forms of JSON data are not supported.
- Queries using subcolumns/paths for these types will return the type of the subcolumn.
- See the main ClickHouse documentation for other usage notes
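A hedged sketch of the two insert forms mentioned above (table and column names are illustrative, and the JSON type must be enabled on the server as noted under the limitations below):

```python
import clickhouse_connect

client = clickhouse_connect.get_client()

client.command('CREATE TABLE json_test (event JSON) ENGINE = MergeTree ORDER BY tuple()')

# Insert as a Python dictionary
client.insert('json_test', [[{'user': 'alice', 'tags': ['a', 'b']}]], column_names=['event'])

# Or as a JSON string containing a JSON object
client.insert('json_test', [['{"user": "bob", "tags": ["c"]}']], column_names=['event'])

# Queries against a subcolumn/path return the type of that subcolumn
print(client.query('SELECT event.user FROM json_test').result_rows)
```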
-#### Known limitations: {#known-limitations}
+#### Known limitations {#known-limitations}
- Each of these types must be enabled in the ClickHouse settings before use.
- The "new" JSON type is available starting with the ClickHouse 24.8 release
- Due to internal format changes, `clickhouse-connect` is only compatible with Variant types beginning with the ClickHouse 24.7 release
diff --git a/docs/integrations/language-clients/rust.md b/docs/integrations/language-clients/rust.md
index 1914303da8d..98c72902287 100644
--- a/docs/integrations/language-clients/rust.md
+++ b/docs/integrations/language-clients/rust.md
@@ -7,7 +7,7 @@ description: 'The official Rust client for connecting to ClickHouse.'
title: 'ClickHouse Rust Client'
---
-# ClickHouse Rust Client
+# ClickHouse Rust client
The official Rust client for connecting to ClickHouse, originally developed by [Paul Loyd](https://github.com/loyd). The client source code is available in the [GitHub repository](https://github.com/ClickHouse/clickhouse-rs).
@@ -332,7 +332,7 @@ This example relies on the legacy Hyper API and is a subject to change in the fu
See also: [custom HTTP client example](https://github.com/ClickHouse/clickhouse-rs/blob/main/examples/custom_http_client.rs) in the client repo.
-## Data Types {#data-types}
+## Data types {#data-types}
:::info
See also the additional examples:
diff --git a/docs/integrations/migration/clickhouse-to-cloud.md b/docs/integrations/migration/clickhouse-to-cloud.md
index 36baa4bc8fa..76902215882 100644
--- a/docs/integrations/migration/clickhouse-to-cloud.md
+++ b/docs/integrations/migration/clickhouse-to-cloud.md
@@ -198,7 +198,7 @@ Modify the allow list and allow access from **Anywhere** temporarily. See the [I
- Verify the data in the destination service
-#### Re-establish the IP Access List on the source {#re-establish-the-ip-access-list-on-the-source}
+#### Re-establish the IP access list on the source {#re-establish-the-ip-access-list-on-the-source}
If you exported the access list earlier, then you can re-import it using **Share**, otherwise re-add your entries to the access list.
diff --git a/docs/integrations/migration/object-storage-to-clickhouse.md b/docs/integrations/migration/object-storage-to-clickhouse.md
index 2b43d52edde..2f323db04ef 100644
--- a/docs/integrations/migration/object-storage-to-clickhouse.md
+++ b/docs/integrations/migration/object-storage-to-clickhouse.md
@@ -8,7 +8,7 @@ slug: /integrations/migration/object-storage-to-clickhouse
import Image from '@theme/IdealImage';
import object_storage_01 from '@site/static/images/integrations/migration/object-storage-01.png';
-# Move data from Cloud Object Storage to ClickHouse Cloud
+# Move data from cloud object storage to ClickHouse Cloud
diff --git a/docs/integrations/migration/overview.md b/docs/integrations/migration/overview.md
index d0ef1b379e9..46457d3c294 100644
--- a/docs/integrations/migration/overview.md
+++ b/docs/integrations/migration/overview.md
@@ -7,7 +7,7 @@ title: 'Migrating Data into ClickHouse'
description: 'Page describing the options available for migrating data into ClickHouse'
---
-# Migrating Data into ClickHouse
+# Migrating data into ClickHouse