Use Hikari Connection Pool for handling Clickhouse connections #893

Status: Open. Wants to merge 56 commits into base: 2.5.1.

Commits (56)
eb46e45
Use Hikari Connection Pool for handling Clickhouse connections
subkanthi Oct 31, 2024
68dad63
Fixed imports
subkanthi Oct 31, 2024
af91740
Changes to use Hikari connection pool, Replaced ClickHouseConnection …
subkanthi Nov 8, 2024
c47d513
Set connection timeout in hikari pool to 50 seconds
subkanthi Nov 8, 2024
83cfcd4
Set connection pool to 100 in hikari
subkanthi Nov 8, 2024
03beb53
Merged conflicts with 2.5.1 branch.
subkanthi Jan 26, 2025
b8285ef
Merged conflicts with 2.5.1 branch.
subkanthi Jan 26, 2025
a5a2bb4
Changes to all integration tests to move the common logic of creating…
subkanthi Jan 27, 2025
ec2d634
Changes to all integration tests to move the common logic of creating…
subkanthi Jan 27, 2025
62ce23d
Changes to all integration tests to move the common logic of creating…
subkanthi Jan 27, 2025
00032c2
Changes to support one HikariDbSource for one database.
subkanthi Jan 27, 2025
67dc11c
Fix integration tests.
subkanthi Feb 3, 2025
538428f
Fix integration tests.
subkanthi Feb 3, 2025
a47ea48
Fix integration tests.
subkanthi Feb 4, 2025
19b242e
Fix integration tests.
subkanthi Feb 4, 2025
65299f0
Fix integration tests.
subkanthi Feb 4, 2025
fd103ef
DBWriter - Add step to create destination database.
subkanthi Feb 5, 2025
4b7eb6b
revert docker-compose changes.
subkanthi Feb 5, 2025
b435571
Add prometheus metrics for Hikari connection pool.
subkanthi Feb 5, 2025
3f2b7c4
Set connection pool name and updated grafana dashboard.
subkanthi Feb 5, 2025
5c36c44
Modified integration tests to close hikari connection pool.
subkanthi Feb 6, 2025
a4e070c
Modified integration tests to close hikari connection pool.
subkanthi Feb 6, 2025
c4477c8
Modified integration tests to close hikari connection pool.
subkanthi Feb 6, 2025
6c1cc81
add connection.pool.max.size to lightweight
Feb 6, 2025
9b4f5d0
Modified integration tests to close hikari connection pool.
subkanthi Feb 6, 2025
f354e45
Merge remote-tracking branch 'origin/867-potential-connection-leak-in…
subkanthi Feb 6, 2025
96fca9c
Modified integration tests to close hikari connection pool.
subkanthi Feb 6, 2025
79d666a
Modified integration tests to close hikari connection pool.
subkanthi Feb 6, 2025
8d3a241
Modified integration tests to close hikari connection pool.
subkanthi Feb 6, 2025
a5ac62f
Modified integration tests to close hikari connection pool.
subkanthi Feb 6, 2025
b9783a6
fixed integration tests.
subkanthi Feb 6, 2025
a6c5794
fixed integration tests.
subkanthi Feb 6, 2025
9557020
fixed integration tests.
subkanthi Feb 6, 2025
3af4ed6
fixed integration tests.
subkanthi Feb 6, 2025
fc7cb34
fixed integration tests.
subkanthi Feb 6, 2025
31a4363
fixed integration tests.
subkanthi Feb 7, 2025
ce16a4e
fixed integration tests.
subkanthi Feb 7, 2025
00372c4
try different values for `connection.pool.max.size`
Feb 7, 2025
1bdf393
Added logic to pause/resume thread pool instead of shutting it down.
subkanthi Feb 7, 2025
3588ce3
Merge remote-tracking branch 'origin/867-potential-connection-leak-in…
subkanthi Feb 7, 2025
a5251ab
Remove connection.close in tests.
subkanthi Feb 7, 2025
42e1f9d
Fixed select with db name.
subkanthi Feb 7, 2025
f803a87
Fixed select with db name.
subkanthi Feb 7, 2025
7035494
Fixed select with db name.
subkanthi Feb 7, 2025
ef8db8f
Fixed select with db name.
subkanthi Feb 7, 2025
ad26764
Fixed select with db name.
subkanthi Feb 7, 2025
db49449
Fixed select with db name.
subkanthi Feb 7, 2025
779a82b
Fixed select with db name.
subkanthi Feb 7, 2025
530795c
Fixed select with db name.
subkanthi Feb 7, 2025
d933913
Fixed select with db name.
subkanthi Feb 7, 2025
abdf30f
Fixed select with db name.
subkanthi Feb 7, 2025
2ca0cd8
Fixed select with db name.
subkanthi Feb 7, 2025
c541b38
Disabled MongoDBIT
subkanthi Feb 8, 2025
1373505
Added support for drop database.
subkanthi Feb 10, 2025
e687771
Fix integration tests.
subkanthi Feb 10, 2025
07076c2
mvoe to debug mode
Feb 12, 2025
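Taken together, the commits above replace the per-call ClickHouseConnection handling with HikariCP pools: one pool per destination database, a configurable maximum size, a 50-second connection timeout, a named pool for the Prometheus/Grafana metrics, and explicit pool shutdown in the integration tests. The snippet below is a minimal sketch of that idea, assuming the ClickHouse JDBC driver and HikariCP 6.x on the classpath; the class and method names are illustrative and are not the PR's actual HikariDbSource implementation.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of "one Hikari pool per destination database";
// not the PR's actual HikariDbSource class.
public final class PooledClickHouseConnections {

    // One pool per database name.
    private static final Map<String, HikariDataSource> POOLS = new ConcurrentHashMap<>();

    public static Connection getConnection(String jdbcUrl, String database,
                                           String user, String password,
                                           int maxPoolSize) throws SQLException {
        HikariDataSource ds = POOLS.computeIfAbsent(database, db -> {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl(jdbcUrl);                  // e.g. jdbc:clickhouse://host:8123/<db>
            cfg.setUsername(user);
            cfg.setPassword(password);
            cfg.setPoolName("clickhouse-pool-" + db); // named pool, useful for the Prometheus/Grafana metrics commits above
            cfg.setMaximumPoolSize(maxPoolSize);      // e.g. driven by connection.pool.max.size
            cfg.setConnectionTimeout(50_000);         // commit c47d513 mentions a 50-second connection timeout
            return new HikariDataSource(cfg);
        });
        return ds.getConnection();                    // Connection.close() hands it back to the pool
    }

    // Close the pool for one database, e.g. when an integration test tears down.
    public static void closeDatabaseConnection(String database) {
        HikariDataSource ds = POOLS.remove(database);
        if (ds != null) {
            ds.close();
        }
    }
}
```

Callers would borrow a connection in try-with-resources, so close() returns it to the pool instead of dropping the socket, which is what the "Modified integration tests to close hikari connection pool" commits rely on.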
Files changed
56 changes: 38 additions & 18 deletions sink-connector-lightweight/dependency-reduced-pom.xml
Expand Up @@ -162,10 +162,6 @@
<artifactId>oracle-xe</artifactId>
<groupId>org.testcontainers</groupId>
</exclusion>
<exclusion>
<artifactId>junit-platform-launcher</artifactId>
<groupId>org.junit.platform</groupId>
</exclusion>
<exclusion>
<artifactId>junit-jupiter</artifactId>
<groupId>org.junit.jupiter</groupId>
@@ -207,7 +203,7 @@
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>testcontainers</artifactId>
<version>1.19.1</version>
<version>1.20.4</version>
<scope>test</scope>
<exclusions>
<exclusion>
@@ -231,7 +227,7 @@
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>jdbc</artifactId>
<version>1.19.1</version>
<version>1.20.4</version>
<scope>test</scope>
<exclusions>
<exclusion>
@@ -243,37 +239,37 @@
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>junit-jupiter</artifactId>
<version>1.19.1</version>
<version>1.20.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>clickhouse</artifactId>
<version>1.19.1</version>
<version>1.20.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>mysql</artifactId>
<version>1.19.1</version>
<version>1.20.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>postgresql</artifactId>
<version>1.19.1</version>
<version>1.20.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>mariadb</artifactId>
<version>1.19.1</version>
<version>1.20.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>mongodb</artifactId>
<version>1.19.1</version>
<version>1.20.4</version>
<scope>test</scope>
</dependency>
<dependency>
@@ -289,10 +285,6 @@
<version>5.8.1</version>
<scope>test</scope>
<exclusions>
<exclusion>
<artifactId>junit-platform-engine</artifactId>
<groupId>org.junit.platform</groupId>
</exclusion>
<exclusion>
<artifactId>junit-jupiter-api</artifactId>
<groupId>org.junit.jupiter</groupId>
@@ -317,10 +309,38 @@
<artifactId>junit-jupiter-api</artifactId>
<groupId>org.junit.jupiter</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-launcher</artifactId>
<version>1.8.2</version>
<scope>test</scope>
<exclusions>
<exclusion>
<artifactId>apiguardian-api</artifactId>
<groupId>org.apiguardian</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-engine</artifactId>
<version>1.8.2</version>
<scope>test</scope>
<exclusions>
<exclusion>
<artifactId>opentest4j</artifactId>
<groupId>org.opentest4j</groupId>
</exclusion>
<exclusion>
<artifactId>junit-platform-commons</artifactId>
<groupId>org.junit.platform</groupId>
</exclusion>
<exclusion>
<artifactId>apiguardian-api</artifactId>
<groupId>org.apiguardian</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
@@ -356,7 +376,7 @@
<properties>
<quarkus.platform.version>2.14.0.Final</quarkus.platform.version>
<quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id>
<version.testcontainers>1.19.1</version.testcontainers>
<version.testcontainers>1.20.4</version.testcontainers>
<surefire-plugin.version>3.0.0-M7</surefire-plugin.version>
<version.kafka>3.8.0</version.kafka>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
6 changes: 4 additions & 2 deletions sink-connector-lightweight/docker/config.yml
@@ -29,7 +29,7 @@ database.server.name: "ER54"

# database.include.list An optional list of regular expressions that match database names to be monitored;
# any database name not included in the whitelist will be excluded from monitoring. By default all databases will be monitored.
database.include.list: test
database.include.list: sbtest

# table.include.list An optional list of regular expressions that match fully-qualified table identifiers for tables to be monitored;
table.include.list: ""
@@ -179,6 +179,8 @@ ignore.ddl.regex: "(?i)(ANALYZE PARTITION).*"
#Metrics (Prometheus target), required for Grafana Dashboard
metrics.enable: "true"

connection.pool.max.size: 1000

# Skip schema history capturing, use the following configuration
# to reduce slow startup when replicating dbs with large number of tables
#schema.history.internal.store.only.captured.tables.ddl: "true"
@@ -189,4 +191,4 @@ use.nongraceful.disconnect: "true"
database.keep.alive.interval.ms: "30000" # Send keepalive every 30 seconds
database.connection.reconnect.backoff.ms: "1000"
database.connection.reconnect.backoff.max.ms: "10000"
database.ssl.mode: "disabled"
database.ssl.mode: "disabled"
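The new connection.pool.max.size key above caps the Hikari pool size for the lightweight connector. Below is a hedged sketch of how such a key could be mapped onto HikariConfig; the property-loading shape and the fallback default of 100 are assumptions for illustration, not the connector's actual wiring.

```java
import com.zaxxer.hikari.HikariConfig;

import java.util.Properties;

public final class PoolSizeConfig {

    // Key name taken from the docker/config.yml change in this PR.
    static final String MAX_POOL_SIZE_KEY = "connection.pool.max.size";

    // Apply the configured cap to a HikariConfig; the default of 100 is an assumption.
    static HikariConfig applyPoolSize(Properties props, HikariConfig cfg) {
        int maxPoolSize = Integer.parseInt(props.getProperty(MAX_POOL_SIZE_KEY, "100"));
        cfg.setMaximumPoolSize(maxPoolSize);
        return cfg;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty(MAX_POOL_SIZE_KEY, "1000"); // value set in docker/config.yml above
        HikariConfig cfg = applyPoolSize(props, new HikariConfig());
        System.out.println("maximumPoolSize = " + cfg.getMaximumPoolSize());
    }
}
```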
19 changes: 18 additions & 1 deletion sink-connector-lightweight/pom.xml
@@ -15,7 +15,7 @@
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<version.debezium>3.0.0.Final</version.debezium>
<version.junit>5.9.1</version.junit>
<version.testcontainers>1.19.1</version.testcontainers>
<version.testcontainers>1.20.4</version.testcontainers>
<version.checkstyle.plugin>3.1.1</version.checkstyle.plugin>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
@@ -233,6 +233,11 @@
<!-- <scope>test</scope>-->
</dependency>

<dependency>
<groupId>com.zaxxer</groupId>
<artifactId>HikariCP</artifactId>
<version>6.0.0</version>
</dependency>
<dependency>
<groupId>org.antlr</groupId>
<artifactId>antlr4-runtime</artifactId>
@@ -334,6 +339,18 @@
<scope>test</scope>
</dependency>

<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-launcher</artifactId>
<version>1.8.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-engine</artifactId>
<version>1.8.2</version>
<scope>test</scope>
</dependency>
<dependency> <!-- necessary for Java 9+ -->
<groupId>org.apache.tomcat</groupId>
<artifactId>annotations-api</artifactId>
Changes in DebeziumChangeEventCapture.java:
@@ -2,22 +2,21 @@

import com.altinity.clickhouse.debezium.embedded.common.PropertiesHelper;
import com.altinity.clickhouse.debezium.embedded.config.SinkConnectorLightWeightConfig;
import com.altinity.clickhouse.debezium.embedded.ddl.parser.DDLParserService;
import com.altinity.clickhouse.debezium.embedded.ddl.parser.MySQLDDLParserService;
import com.altinity.clickhouse.debezium.embedded.parser.DebeziumRecordParserService;
import com.altinity.clickhouse.sink.connector.ClickHouseSinkConnectorConfig;
import com.altinity.clickhouse.sink.connector.ClickHouseSinkConnectorConfigVariables;
import com.altinity.clickhouse.sink.connector.common.Metrics;
import com.altinity.clickhouse.sink.connector.db.BaseDbWriter;
import com.altinity.clickhouse.sink.connector.db.DBMetadata;
import com.altinity.clickhouse.sink.connector.db.HikariDbSource;
import com.altinity.clickhouse.sink.connector.db.operations.ClickHouseAlterTable;
import com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchExecutor;
import com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchRunnable;
import com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchWriter;
import com.altinity.clickhouse.sink.connector.executor.DebeziumOffsetManagement;
import com.altinity.clickhouse.sink.connector.model.ClickHouseStruct;
import com.altinity.clickhouse.sink.connector.model.DBCredentials;
import com.clickhouse.jdbc.ClickHouseConnection;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import io.debezium.config.CommonConnectorConfig;
@@ -41,6 +40,7 @@
import org.json.simple.parser.ParseException;

import java.io.IOException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
@@ -89,7 +89,7 @@ public class DebeziumChangeEventCapture {
DebeziumEngine<ChangeEvent<SourceRecord, SourceRecord>> engine;

// Keep one clickhouse connection.
private ClickHouseConnection conn;
private Connection conn;

ClickHouseBatchWriter singleThreadedWriter;

@@ -197,7 +197,8 @@ private String getDatabaseName(SourceRecord sr) {
private BaseDbWriter createWriter(ClickHouseSinkConnectorConfig config, String databaseName) {
DBCredentials dbCredentials = parseDBConfiguration(config);
String jdbcUrl = BaseDbWriter.getConnectionString(dbCredentials.getHostName(), dbCredentials.getPort(), databaseName);
ClickHouseConnection conn = BaseDbWriter.createConnection(jdbcUrl, "Client_1", dbCredentials.getUserName(), dbCredentials.getPassword(), config);
Connection conn = BaseDbWriter.createConnection(jdbcUrl, BaseDbWriter.DATABASE_CLIENT_NAME, dbCredentials.getUserName(), dbCredentials.getPassword(),
databaseName, config);
return new BaseDbWriter(dbCredentials.getHostName(), dbCredentials.getPort(), databaseName, dbCredentials.getUserName(), dbCredentials.getPassword(), config, conn);
}

@@ -270,11 +271,17 @@ private ClickHouseStruct processEveryChangeRecord(Properties props, ChangeEvent<
if (DDL != null && DDL.isEmpty() == false)
{
log.info("***** DDL received, Flush all existing records");
this.executor.shutdown();
this.executor.awaitTermination(60, TimeUnit.SECONDS);
// this.executor.shutdown();
// this.executor.awaitTermination(60, TimeUnit.SECONDS);
// //HikariDbSource.closeDatabaseConnection(BaseDbWriter.SYSTEM_DB);
this.executor.pause();

//this.writer = null;
//this.writer = null;
performDDLOperation(DDL, props, sr, config, recordCommitter, record, lastRecordInBatch);
setupProcessingThread(config);
this.executor.resume();
//HikariDbSource.closeDatabaseConnection(BaseDbWriter.SYSTEM_DB);
//setupProcessingThread(config);
}

} else {
@@ -383,18 +390,19 @@ private void createDatabaseForDebeziumStorage(ClickHouseSinkConnectorConfig conf
DBCredentials dbCredentials = parseDBConfiguration(config);

String jdbcUrl = BaseDbWriter.getConnectionString(dbCredentials.getHostName(), dbCredentials.getPort(),
"system");
ClickHouseConnection conn = BaseDbWriter.createConnection(jdbcUrl, "Client_1", dbCredentials.getUserName(), dbCredentials.getPassword(), config);
BaseDbWriter.SYSTEM_DB);
Connection conn = BaseDbWriter.createConnection(jdbcUrl, BaseDbWriter.DATABASE_CLIENT_NAME,
dbCredentials.getUserName(), dbCredentials.getPassword(), BaseDbWriter.SYSTEM_DB, config);
BaseDbWriter writer = new BaseDbWriter(dbCredentials.getHostName(), dbCredentials.getPort(),
"system", dbCredentials.getUserName(),
BaseDbWriter.SYSTEM_DB, dbCredentials.getUserName(),
dbCredentials.getPassword(), config, conn);

Pair<String, String> tableNameDatabaseName = getDebeziumOffsetStorageDatabaseName(props);
String databaseName = tableNameDatabaseName.getRight();

String createDbQuery = String.format("create database if not exists %s", databaseName);
log.info("CREATING DEBEZIUM STORAGE Database: " + createDbQuery);
writer.executeQuery(createDbQuery);
writer.executeSystemQuery(createDbQuery);

break;
} catch (Exception e) {
@@ -422,13 +430,14 @@ private void createSchemaHistoryTable(ClickHouseSinkConnectorConfig config, Prop
DBCredentials dbCredentials = parseDBConfiguration(config);
String jdbcUrl = BaseDbWriter.getConnectionString(dbCredentials.getHostName(), dbCredentials.getPort(),
"system");
ClickHouseConnection conn = BaseDbWriter.createConnection(jdbcUrl, "Client_1",dbCredentials.getUserName(), dbCredentials.getPassword(), config);
Connection conn = BaseDbWriter.createConnection(jdbcUrl, BaseDbWriter.DATABASE_CLIENT_NAME,
dbCredentials.getUserName(), dbCredentials.getPassword(), BaseDbWriter.SYSTEM_DB, config);
BaseDbWriter writer = new BaseDbWriter(dbCredentials.getHostName(), dbCredentials.getPort(),
"system", dbCredentials.getUserName(),
dbCredentials.getPassword(), config, conn);

try {
writer.executeQuery(createSchemaHistoryTable);
writer.executeSystemQuery(createSchemaHistoryTable);
} catch(Exception e) {
log.error("Error creating schema history table", e);
}
@@ -449,7 +458,8 @@ private void createViewForShowReplicaStatus(ClickHouseSinkConnectorConfig config

String jdbcUrl = BaseDbWriter.getConnectionString(dbCredentials.getHostName(), dbCredentials.getPort(),
"system");
ClickHouseConnection conn = BaseDbWriter.createConnection(jdbcUrl, "Client_1",dbCredentials.getUserName(), dbCredentials.getPassword(), config);
Connection conn = BaseDbWriter.createConnection(jdbcUrl, BaseDbWriter.DATABASE_CLIENT_NAME,
dbCredentials.getUserName(), dbCredentials.getPassword(), BaseDbWriter.SYSTEM_DB, config);
BaseDbWriter writer = new BaseDbWriter(dbCredentials.getHostName(), dbCredentials.getPort(),
"system", dbCredentials.getUserName(),
dbCredentials.getPassword(), config, conn);
Expand All @@ -462,7 +472,7 @@ private void createViewForShowReplicaStatus(ClickHouseSinkConnectorConfig config
// Remove quotes.
formattedView = formattedView.replace("\"", "");
try {
writer.executeQuery(formattedView);
writer.executeSystemQuery(formattedView);
} catch(Exception e) {
log.error("**** Error creating VIEW **** " + formattedView);
}
@@ -539,8 +549,9 @@ public String getDebeziumStorageStatus(ClickHouseSinkConnectorConfig config, Pro
log.error("**** Connection to ClickHouse is not established, re-initiating ****");
String jdbcUrl = BaseDbWriter.getConnectionString(dbCredentials.getHostName(), dbCredentials.getPort(),
databaseName);
ClickHouseConnection conn = BaseDbWriter.createConnection(jdbcUrl, "Client_1",
dbCredentials.getUserName(), dbCredentials.getPassword(), config);
Connection conn = BaseDbWriter.createConnection(jdbcUrl, BaseDbWriter.DATABASE_CLIENT_NAME,
dbCredentials.getUserName(), dbCredentials.getPassword(),
BaseDbWriter.SYSTEM_DB, config);
writer = new BaseDbWriter(dbCredentials.getHostName(), dbCredentials.getPort(),
databaseName, dbCredentials.getUserName(),
dbCredentials.getPassword(), config, conn);
@@ -956,6 +967,7 @@ DBCredentials parseDBConfiguration(ClickHouseSinkConnectorConfig config) {
*/
private void setupProcessingThread(ClickHouseSinkConnectorConfig config) {


if(config.getBoolean(ClickHouseSinkConnectorConfigVariables.SINGLE_THREADED.toString())) {
log.info("********* Running in Single Threaded mode *********");
singleThreadedWriter = new ClickHouseBatchWriter(config, new HashMap());
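In the processEveryChangeRecord() hunk above, a DDL event now pauses the batch executor, applies the DDL, and resumes it, instead of shutting the executor down and rebuilding it (commit 1bdf393). Below is a minimal sketch of a pausable executor in that spirit, following the standard beforeExecute()/Condition pattern from the ThreadPoolExecutor Javadoc; the connector's real ClickHouseBatchExecutor may be implemented differently.

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch only: pause()/resume() gate task execution without discarding the pool
// (and therefore without discarding its pooled ClickHouse connections).
public class PausableScheduledExecutor extends ScheduledThreadPoolExecutor {

    private boolean paused;
    private final ReentrantLock pauseLock = new ReentrantLock();
    private final Condition unpaused = pauseLock.newCondition();

    public PausableScheduledExecutor(int corePoolSize) {
        super(corePoolSize);
    }

    // Block new task executions until resume() is called.
    public void pause() {
        pauseLock.lock();
        try {
            paused = true;
        } finally {
            pauseLock.unlock();
        }
    }

    // Let queued tasks run again.
    public void resume() {
        pauseLock.lock();
        try {
            paused = false;
            unpaused.signalAll();
        } finally {
            pauseLock.unlock();
        }
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        pauseLock.lock();
        try {
            while (paused) {
                unpaused.await();
            }
        } catch (InterruptedException ie) {
            t.interrupt();
        } finally {
            pauseLock.unlock();
        }
    }
}
```

Pausing before the DDL lets in-flight batches finish flushing while no new batch runs start; resuming afterwards avoids tearing down and recreating the thread pool on every DDL statement.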
Changes in the Debezium offset storage helper class:
@@ -51,7 +51,7 @@ public void deleteOffsetStorageRow(String offsetKey,

// String connectorName = config.getString("connector.name");
String debeziumStorageStatusQuery = String.format("delete from %s where offset_key='%s'" , tableName, offsetKey);
writer.executeQuery(debeziumStorageStatusQuery);
writer.executeSystemQuery(debeziumStorageStatusQuery);
}

/**
@@ -67,7 +67,7 @@ public void deleteSchemaHistoryTable(String offsetKey,

String debeziumStorageStatusQuery = String.format("delete from `%s` where JSONExtractRaw(JSONExtractRaw(history_data,'source'), 'server')='%s'" , tableName, offsetKey);
log.info("Deleting schema history table query: " + debeziumStorageStatusQuery);
writer.executeQuery(debeziumStorageStatusQuery);
writer.executeSystemQuery(debeziumStorageStatusQuery);
}
/**
* Function to get the latest timestamp of the record in the table
@@ -81,7 +81,7 @@ public String getDebeziumLatestRecordTimestamp(Properties props, BaseDbWriter wr
JdbcOffsetBackingStoreConfig.PROP_TABLE_NAME.name());

String debeziumLatestRecordTimestampQuery = String.format("select max(record_insert_ts) from %s" , tableName);
return writer.executeQuery(debeziumLatestRecordTimestampQuery);
return writer.executeSystemQuery(debeziumLatestRecordTimestampQuery);
}

public String getDebeziumStorageStatusQuery(
@@ -92,7 +92,7 @@ public String getDebeziumStorageStatusQuery(
String offsetKey = getOffsetKey(props);
// String connectorName = config.getString("connector.name");
String debeziumStorageStatusQuery = String.format("select offset_val from %s where offset_key='%s'" , tableName, offsetKey);
return writer.executeQuery(debeziumStorageStatusQuery);
return writer.executeSystemQuery(debeziumStorageStatusQuery);
}

/**
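The offset-storage helper above now routes its bookkeeping queries through executeSystemQuery() rather than executeQuery(). The implementation of that method is not part of this diff; the sketch below is a hypothetical version that borrows a connection from the system-database pool in try-with-resources and returns the first column of the first row, and the real BaseDbWriter method may have a different signature and behavior.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

// Hypothetical stand-in for BaseDbWriter.executeSystemQuery(), for illustration only.
public final class SystemQueryRunner {

    private final DataSource systemPool; // e.g. the Hikari pool keyed by the system database

    public SystemQueryRunner(DataSource systemPool) {
        this.systemPool = systemPool;
    }

    public String executeSystemQuery(String sql) throws SQLException {
        // try-with-resources returns the connection to the pool instead of closing the socket
        try (Connection conn = systemPool.getConnection();
             Statement stmt = conn.createStatement()) {
            boolean hasResultSet = stmt.execute(sql);
            if (hasResultSet) {
                try (ResultSet rs = stmt.getResultSet()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
            return null; // DDL/DELETE style statements produce no result set
        }
    }
}
```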
Changes in Constants.java:
@@ -50,6 +50,7 @@ public class Constants {


public static final String CREATE_DATABASE = "CREATE DATABASE IF NOT EXISTS %s";
public static final String DROP_DATABASE = "DROP DATABASE IF EXISTS %s";

public static final String DROP_COLUMN = "DROP COLUMN %s";
