4 changes: 2 additions & 2 deletions bom/application/pom.xml
@@ -193,8 +193,8 @@
<log4j2-api.version>2.25.4</log4j2-api.version>
<avro.version>1.12.1</avro.version>
<apicurio-registry.version>3.1.7</apicurio-registry.version>
<testcontainers.version>2.0.3</testcontainers.version> <!-- Make sure to also update docker-java.version to match its needs -->
<docker-java.version>3.7.0</docker-java.version> <!-- must be the version Testcontainers use: https://central.sonatype.com/artifact/org.testcontainers/testcontainers -->
<testcontainers.version>2.0.4</testcontainers.version> <!-- Make sure to also update docker-java.version to match its needs -->
<docker-java.version>3.7.1</docker-java.version> <!-- must be the version Testcontainers use: https://central.sonatype.com/artifact/org.testcontainers/testcontainers -->
<!-- Check the compatibility matrix (https://github.com/opensearch-project/opensearch-testcontainers) before upgrading: -->
<opensearch-testcontainers.version>2.1.3</opensearch-testcontainers.version>
<com.dajudge.kindcontainer>2.0.0</com.dajudge.kindcontainer>
8 changes: 8 additions & 0 deletions docs/src/main/asciidoc/datasource.adoc
@@ -672,6 +672,14 @@ Even with all the tracing infrastructure in place, the datasource tracing is not
quarkus.datasource.jdbc.telemetry=true
----

By default, only SQL statement executions are traced.
Connection acquisition from the datasource (`getConnection()` calls) is not traced.
To also trace connection acquisition, enable it explicitly:
[source,properties]
----
quarkus.datasource.jdbc.telemetry.trace-connection=true
----

=== Narayana transaction manager integration

Integration is automatic if the Narayana JTA extension is also available.
2 changes: 2 additions & 0 deletions docs/src/main/asciidoc/opentelemetry-tracing.adoc
@@ -208,6 +208,8 @@ quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/mydatabase
----
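
JDBC spans are only produced when datasource telemetry is enabled; as a minimal sketch, the flag from the datasource guide looks like this:

[source,properties]
----
quarkus.datasource.jdbc.telemetry=true
----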

For more details, see the xref:datasource.adoc#datasource-tracing[Datasource tracing] documentation.

== Additional configuration
Some use cases will require custom configuration of OpenTelemetry.
These sections will outline what is necessary to properly configure it.
25 changes: 16 additions & 9 deletions docs/src/main/asciidoc/transaction.adoc
@@ -286,7 +286,7 @@ You cannot use `UserTransaction` in a method having a transaction started by a `
====

== Configuring the transaction timeout
You can configure the default transaction timeout, the timeout that applies to all transactions managed by the transaction manager, via the property `quarkus.transaction-manager.default-transaction-timeout`, specified as a duration.
You can configure the default transaction timeout, the timeout that applies to all transactions managed by the transaction manager, via the property <<quarkus-narayana-jta_quarkus-transaction-manager-default-transaction-timeout,`quarkus.transaction-manager.default-transaction-timeout`>>, specified as a duration.

include::{includes}/duration-format-note.adoc[]
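
For example, a minimal sketch (the `120s` value is illustrative, not a recommended default):

[source,properties]
----
quarkus.transaction-manager.default-transaction-timeout=120s
----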

@@ -306,11 +306,12 @@ to roll back the transaction counterparts during recovery.
The node name identifier needs to be unique per transaction manager deployment, and it needs to remain stable across transaction manager restarts.

The node name identifier may be configured via the property `quarkus.transaction-manager.node-name`.
The node name identifier may be configured via the property <<quarkus-narayana-jta_quarkus-transaction-manager-node-name,`quarkus.transaction-manager.node-name`>>.

[NOTE]
====
The node name cannot be longer than 28 bytes.
To automatically shorten names longer than 28 bytes, set `quarkus.transaction-manager.shorten-node-name-if-necessary` to `true`.
To automatically shorten names longer than 28 bytes, set <<quarkus-narayana-jta_quarkus-transaction-manager-shorten-node-name-if-necessary,`quarkus.transaction-manager.shorten-node-name-if-necessary`>> to `true`.

Shortening is implemented by hashing the node name, encoding the hash to Base64 and then truncating the result. As with all hashes, the resulting shortened node name could potentially conflict with another shortened node name, but it is https://github.com/quarkusio/quarkus/issues/30491#issuecomment-1537247764[very unlikely].
====
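
As a sketch, the node name could be pinned per deployment like this (the value `node-1` is illustrative):

[source,properties]
----
quarkus.transaction-manager.node-name=node-1
# Uncomment if your node names can exceed 28 bytes:
# quarkus.transaction-manager.shorten-node-name-if-necessary=true
----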
@@ -383,19 +384,19 @@ While the JDBC object store provides a stable storage, users must still plan how

After you evaluate whether using a database to store transaction logs is right for you, you can configure the JDBC-specific object store through the `quarkus.transaction-manager.object-store._<property>_` properties, where _<property>_ can be:

* `type` (_string_): Configure this property to `jdbc` to enable usage of a Quarkus JDBC datasource for storing transaction logs.
* <<quarkus-narayana-jta_quarkus-transaction-manager-object-store-type,`type`>> (_string_): Configure this property to `jdbc` to enable usage of a Quarkus JDBC datasource for storing transaction logs.
The default value is `file-system`.

* `datasource` (_string_): Specify the name of the datasource for the transaction log storage.
* <<quarkus-narayana-jta_quarkus-transaction-manager-object-store-datasource,`datasource`>> (_string_): Specify the name of the datasource for the transaction log storage.
If no value is provided for the `datasource` property, Quarkus uses the xref:datasource.adoc#configure-datasources[default datasource].

* `create-table` (_boolean_): When set to `true`, the transaction log table gets automatically created if it does not already exist.
* <<quarkus-narayana-jta_quarkus-transaction-manager-object-store-create-table,`create-table`>> (_boolean_): When set to `true`, the transaction log table gets automatically created if it does not already exist.
The default value is `false`.

* `drop-table` (_boolean_): When set to `true`, the tables are dropped on startup if they already exist.
* <<quarkus-narayana-jta_quarkus-transaction-manager-object-store-drop-table,`drop-table`>> (_boolean_): When set to `true`, the tables are dropped on startup if they already exist.
The default value is `false`.

* `table-prefix` (string): Specify the prefix for a related table name.
* <<quarkus-narayana-jta_quarkus-transaction-manager-object-store-table-prefix,`table-prefix`>> (_string_): Specify the prefix for a related table name.
The default value is `quarkus_`.
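
Putting the properties above together, a sketch of a JDBC object store configuration (the datasource name `txlog` is illustrative):

[source,properties]
----
quarkus.transaction-manager.object-store.type=jdbc
quarkus.transaction-manager.object-store.datasource=txlog
quarkus.transaction-manager.object-store.create-table=true
----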

For more configuration information, see the *Narayana JTA - Transaction manager* section of the Quarkus xref:all-config.adoc[All configuration options] reference.
@@ -410,7 +411,7 @@ For more configuration information, see the *Narayana JTA - Transaction manager*
** ActiveMQ Artemis is part of `quarkus-pooled-jms`, and it needs to use `quarkus.pooled-jms.transaction=XA`.

* The transaction recovery service is automatically enabled when XA JDBC datasources are detected (i.e. `quarkus.datasource.jdbc.transactions=XA`).
For other XA resource providers such as `quarkus-pooled-jms`, set `quarkus.transaction-manager.enable-recovery=true` to enable recovery.
For other XA resource providers such as `quarkus-pooled-jms`, set <<quarkus-narayana-jta_quarkus-transaction-manager-enable-recovery,`quarkus.transaction-manager.enable-recovery=true`>> to enable recovery.
You can also set it to `false` to explicitly disable recovery even when XA datasources are present.
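
As a sketch, enabling recovery alongside an XA-capable JMS pool might look like this (assuming `quarkus-pooled-jms` is present):

[source,properties]
----
quarkus.pooled-jms.transaction=XA
quarkus.transaction-manager.enable-recovery=true
----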

[NOTE]
@@ -463,3 +464,9 @@ It's not a mess in Quarkus :)
Resource-level was introduced to support Jakarta Persistence in a non-managed environment.
But Quarkus is both lean and a managed environment, so we can safely always assume we are in JTA mode.
The end result is that the difficulties of running Hibernate ORM + CDI + a transaction manager in Java SE mode are solved by Quarkus.


[[configuration-reference]]
== Configuration Reference for Transactions

include::{generated-dir}/config/quarkus-narayana-jta.adoc[opts=optional, leveloffset=+2]
@@ -1,23 +1,23 @@
package io.quarkus.agroal.runtime;

import java.util.function.Function;

import jakarta.inject.Inject;

import io.agroal.api.AgroalDataSource;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.instrumentation.jdbc.datasource.JdbcTelemetry;
import io.opentelemetry.instrumentation.jdbc.datasource.OpenTelemetryDataSource;

public class AgroalOpenTelemetryWrapper implements Function<AgroalDataSource, AgroalDataSource> {
public class AgroalOpenTelemetryWrapper {

@Inject
OpenTelemetry openTelemetry;

@Override
public AgroalDataSource apply(AgroalDataSource originalDataSource) {
public AgroalDataSource wrap(AgroalDataSource originalDataSource,
DataSourceJdbcRuntimeConfig dataSourceJdbcRuntimeConfig) {
OpenTelemetryDataSource otelDataSource = (OpenTelemetryDataSource) JdbcTelemetry
.create(openTelemetry)
.builder(openTelemetry)
.setDataSourceInstrumenterEnabled(dataSourceJdbcRuntimeConfig.telemetryTraceConnection())
.build()
.wrap(originalDataSource);
return new OpenTelemetryAgroalDataSource(originalDataSource, otelDataSource);
}
@@ -180,4 +180,12 @@ public interface DataSourceJdbcRuntimeConfig {
@ConfigDocDefault("false if quarkus.datasource.jdbc.telemetry=false and true if quarkus.datasource.jdbc.telemetry=true")
Optional<Boolean> telemetry();

/**
* Enable tracing of the connection acquisition from the datasource.
* When enabled, a span is created for each {@code getConnection()} call.
*/
@WithName("telemetry.trace-connection")
@WithDefault("false")
boolean telemetryTraceConnection();

}
@@ -232,7 +232,7 @@ public AgroalDataSource createDataSource(String dataSourceName, boolean otelEnab
otelEnabled) {
// activate OpenTelemetry JDBC instrumentation by wrapping AgroalDatasource
// use an optional CDI bean as we can't reference optional OpenTelemetry classes here
dataSource = agroalOpenTelemetryWrapper.get().apply(dataSource);
dataSource = agroalOpenTelemetryWrapper.get().wrap(dataSource, dataSourceJdbcRuntimeConfig);
}

return dataSource;
22 changes: 22 additions & 0 deletions extensions/avro/deployment/src/test/avro/protocol.avdl
@@ -0,0 +1,22 @@
/**
* An example protocol in Avro IDL
*/
@namespace("org.apache.avro.test")
protocol Simple {
// Import the example file above
import idl "schema.avdl";

/** Errors are records that can be thrown from a method */
error TestError {
string message;
}

string hello(string greeting);
/** Return what was given. Demonstrates the use of backticks to name types/fields/messages/parameters after keywords */
TestRecord echo(TestRecord `record`);
int add(int arg1, int arg2);
bytes echoBytes(bytes data);
void `error`() throws TestError;
// The oneway keyword forces the method to return null.
void ping() oneway;
}
53 changes: 53 additions & 0 deletions extensions/avro/deployment/src/test/avro/protocol.avpr
@@ -0,0 +1,53 @@
{
"protocol": "ProtocolTest",
"namespace": "test",
"types": [
{
"type": "enum",
"name": "ProtocolPrivacy",
"symbols": [
"Public",
"Private"
]
},
{
"type": "record",
"namespace": "test",
"name": "ProtocolUser",
"doc": "User Test Bean",
"fields": [
{
"name": "id",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "createdOn",
"type": [
"null",
"long"
],
"default": null
},
{
"name": "privacy",
"type": [
"null",
"ProtocolPrivacy"
],
"default": null
},
{
"name": "modifiedOn",
"type": {
"type": "long",
"logicalType": "timestamp-millis"
}
}
]
}
]
}
36 changes: 36 additions & 0 deletions extensions/avro/deployment/src/test/avro/schema.avdl
@@ -0,0 +1,36 @@
// Optional default namespace (if absent, the default namespace is the null namespace).
namespace org.apache.avro.test;

// Optional main schema definition; if used, the IDL file is equivalent to a .avsc file.
schema TestRecord;

/** Documentation for the enum type Kind */
@aliases(["org.foo.KindOf"])
enum Kind {
FOO,
BAR, // the bar enum value
BAZ
} = FOO;

// For schema evolution purposes, unmatched values do not throw an error, but are resolved to FOO.
/** MD5 hash; good enough to avoid most collisions, and smaller than (for example) SHA256. */
fixed MD5(16);

record TestRecord {
/** Record name; has no intrinsic order */
string @order("ignore") name;

Kind @order("descending") kind;

MD5 hash;
/*
Note that 'null' is the first union type. Just like .avsc / .avpr files, the default value must be of the first union type.
*/
union{null, MD5}
/** Optional field */
@aliases(["hash"]) nullableHash = null;
// Shorthand syntax; the null in this union is placed based on the default value (or first if there's no default).
MD5? anotherNullableHash = null;

array<long> arrayOfLongs;
}
20 changes: 20 additions & 0 deletions extensions/avro/deployment/src/test/avro/schema.avsc
@@ -0,0 +1,20 @@
{
"type": "record",
"name": "LongList",
"aliases": [
"LinkedLongs"
],
"fields": [
{
"name": "value",
"type": "long"
},
{
"name": "next",
"type": [
"null",
"LongList"
]
}
]
}
@@ -0,0 +1,47 @@
package io.quarkus.avro.deployment;

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

import java.nio.file.Path;

import org.eclipse.microprofile.config.Config;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

import io.smallrye.config.SmallRyeConfigBuilder;

public class CompilationTest {

private static final Config DEFAULT_CONFIG = new SmallRyeConfigBuilder()
.withDefaultValue("avro.codegen.stringType", "String")
.build();

@TempDir
Path outputDir;

@Test
public void testCanCompileIdlSchema() {
compileFile(new AvroIDLCodeGenProvider(), "src/test/avro/schema.avdl");
}

@Test
public void testCanCompileIdlProtocol() {
compileFile(new AvroIDLCodeGenProvider(), "src/test/avro/protocol.avdl");
}

@Test
public void testCanCompileAvscSchema() {
compileFile(new AvroSchemaCodeGenProvider(), "src/test/avro/schema.avsc");
}

@Test
public void testCanCompileAvprProtocol() {
compileFile(new AvroProtocolCodeGenProvider(), "src/test/avro/protocol.avpr");
}

private void compileFile(AvroCodeGenProviderBase provider, String filePath) {
AvroCodeGenProviderBase.AvroOptions options = provider.new AvroOptions(DEFAULT_CONFIG);
Path sourceFile = Path.of(filePath).toAbsolutePath();
assertDoesNotThrow(() -> provider.compileSingleFile(sourceFile, outputDir, options));
}
}
@@ -412,7 +412,7 @@
},
"id": 163,
"panels": [],
"title": "HTTP Edpoints",
"title": "HTTP Endpoints",
"type": "row"
},
{
@@ -412,7 +412,7 @@
},
"id": 163,
"panels": [],
"title": "HTTP Edpoints",
"title": "HTTP Endpoints",
"type": "row"
},
{
@@ -440,7 +440,7 @@
"refId": "A"
}
],
"title": "HTTP Edpoints",
"title": "HTTP Endpoints",
"type": "row"
},
{
@@ -4,6 +4,8 @@
import java.util.function.Consumer;

import io.quarkus.arc.Arc;
import io.quarkus.arc.ArcContainer;
import io.quarkus.arc.InstanceHandle;
import io.quarkus.dev.spi.HotReplacementContext;
import io.quarkus.dev.spi.HotReplacementSetup;
import io.quarkus.qute.Engine;
@@ -18,10 +20,17 @@ public void setupHotDeployment(HotReplacementContext context) {
@Override
public void accept(Set<String> files) {
// Make sure all templates are reloaded
Engine engine = Arc.container().instance(Engine.class).get();
engine.clearTemplates();
TemplateProducer templateProducer = Arc.container().instance(TemplateProducer.class).get();
templateProducer.clearInjectedTemplates();
ArcContainer container = Arc.container();
if (container != null) {
InstanceHandle<Engine> engineHandle = container.instance(Engine.class);
if (engineHandle.isAvailable()) {
engineHandle.get().clearTemplates();
}
InstanceHandle<TemplateProducer> templateProducerHandle = container.instance(TemplateProducer.class);
if (templateProducerHandle.isAvailable()) {
templateProducerHandle.get().clearInjectedTemplates();
}
}
}
});
}