Add ActiveMQ classic monitoring #12109
Conversation
...erver-starter/src/main/resources/ui-initialized-templates/activemq/activemq-destination.json
The UI submodule seems to have been changed unexpectedly?
...er/server-starter/src/main/resources/ui-initialized-templates/activemq/activemq-cluster.json
Another question: what is the status of the service hierarchy for this new ActiveMQ monitoring?
Currently, it is not compatible with building the service hierarchy by using the service name.
That rule should be based on k8s deployment, not VM.
Force-pushed from f5a0c50 to b28d8dd.
Reverted.
The ActiveMQ agent side sets Tags.MQ_BROKER to the raw IP address:port, rather than hostname:port, in K8s:
@wankai123 Do we have an IP-to-service-name resolver to support hierarchy detection? I think that would be useful.
So, is the name of the ActiveMQ service hard-coded only?
@CzyerChen When ActiveMQ is deployed in k8s, the OTEL collector should be able to use sidecar mode to get the service name aligned with the target service's name, and the instance name as the pod name. @weixiang1862 Do you know how to set up the sidecar?
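As a sketch of the sidecar idea above (all names, ports, and endpoints here are assumptions for illustration, not taken from this PR), an OpenTelemetry Collector running as a sidecar could scrape the broker's metrics endpoint inside the same pod and attach the service/instance attributes before exporting:

```yaml
# Hypothetical OTEL Collector sidecar config: scrape a metrics endpoint
# in the same pod, then set service.name to match the k8s Service name.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: activemq
          static_configs:
            - targets: ["localhost:9404"]   # assumed exporter port in the pod
processors:
  resource:
    attributes:
      - key: service.name
        value: activemq-service             # assumed k8s Service name
        action: insert
exporters:
  otlp:
    endpoint: skywalking-oap:11800          # assumed OAP address
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resource]
      exporters: [otlp]
```

Because the sidecar shares the pod, the instance name can be derived from the pod name via the downward API; the exact wiring would depend on the deployment.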
About this, in k8s,
Where is the
Yes.
Data reported by the ActiveMQ plugin (Java agent).
Let's separate this into two ways.
So, the
How could this be in k8s? I think K8s should be DNS-oriented, right? Even with a cluster, the client should use a service address. VM deployment is complex; it is hard for us to make the service hierarchy work out-of-the-box in that deployment.
@CzyerChen How about the latest question?
Normally we use the failover transport to connect to server nodes on the client side. In Master-Slave or Broker-cluster mode, if only one broker node is configured on the client side, failover will not work automatically.
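For reference, the failover transport takes a list of broker URIs; the hostnames below are hypothetical, while `randomize` and `maxReconnectAttempts` are standard failover options:

```
failover:(tcp://broker-a:61616,tcp://broker-b:61616)?randomize=false&maxReconnectAttempts=10
```

If only `tcp://broker-a:61616` were listed, the client would have no second address to retry, which matches the point above.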
Are you saying this is the recommended deployment for ActiveMQ on k8s?
Typically in k8s, two or more deployments could be exposed as one service, and failover could be done internally and natively.
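A minimal sketch of that k8s-native pattern (names, labels, and the assumption that plain Service load-balancing is acceptable for the brokers are all hypothetical): a single Service fronts all broker pods, so clients connect to one DNS name instead of listing every broker:

```yaml
# Hypothetical Service fronting multiple ActiveMQ broker pods;
# clients use activemq.<namespace>.svc instead of per-broker addresses.
apiVersion: v1
kind: Service
metadata:
  name: activemq
spec:
  selector:
    app: activemq        # matches the broker Deployment's pod labels
  ports:
    - name: openwire
      port: 61616
      targetPort: 61616
```

With this, pod-level failover is handled by the Service's endpoint updates rather than by the client's failover URI.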
It needs to list the addresses of all brokers in the cluster now in ActiveMQ Classic (no specific description for K8s deployment). But after introducing For the Classic, there is no
OK, @wankai123, let us skip that.
...ver/server-starter/src/main/resources/ui-initialized-templates/activemq/activemq-broker.json
The others are LGTM.
lgtm
@CzyerChen, hi, I found some expressions did not add
In Master-Slave mode, cluster metrics are only reported from the Master node, but JVM metrics are reported from all nodes. So JVM metrics add
#12063
ActiveMQ single-node, two single-nodes, and shared-file master-slave deployments have been tested.
CHANGES
For ActiveMQ Classic cluster:
For ActiveMQ Classic broker:
For ActiveMQ Classic destination:
Due to the shortcomings of the activemq-5.x agent plugin, it cannot fit the hierarchy definition now.
I will have to fix the agent part later, then push the hierarchy definition.