
Add ActiveMQ classic monitoring #12109

Merged

17 commits merged into apache:master on Apr 18, 2024
Conversation

CzyerChen
Contributor

@CzyerChen CzyerChen commented Apr 15, 2024

#12063

  • Update the documentation to include this new feature.
  • Tests(including UT, IT, E2E) are added to verify the new feature.
    ActiveMQ single-node, two single-nodes, and shared-file master-slave setups have been tested.
  • If it's UI related, attach the screenshots below.
  • Update the CHANGES log.

For ActiveMQ Classic cluster:

[image: ActiveMQ Classic cluster dashboard screenshot]
  • System Load: range in [0, 100].
  • Thread Count: the number of threads currently used by the JVM.
  • Heap Memory: capacity of heap memory.
  • GC: ActiveMQ's memory is managed by Java's garbage collection (GC) process. Compatible with JDK 1.6 through 17 and ActiveMQ Classic (5.10.x ~ 6.1.0).
  • Enqueue/Dequeue/Dispatch/Expired Rate: growth rate of messages in different states.
  • Average/Max Enqueue Time: average/maximum time messages spend in the queue.
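The rate metrics above are typically derived from monotonically increasing broker counters sampled at intervals. A minimal sketch of that computation (the sample values are hypothetical, not taken from this PR):

```java
public class CounterRate {
    // Growth rate per second of a monotonically increasing broker counter,
    // sampled at two points in time. A counter reset (e.g. a broker restart)
    // would appear as a drop, so the delta is clamped to zero in that case.
    static double ratePerSecond(long prev, long curr, long intervalSeconds) {
        if (intervalSeconds <= 0) {
            throw new IllegalArgumentException("interval must be positive");
        }
        return Math.max(curr - prev, 0) / (double) intervalSeconds;
    }

    public static void main(String[] args) {
        // Two samples of an enqueue counter taken 30 s apart (hypothetical values):
        System.out.println(ratePerSecond(1200, 1500, 30)); // 10.0
    }
}
```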

For ActiveMQ Classic broker:

[image: ActiveMQ Classic broker dashboard screenshot]
  • Uptime: how long the node has been running.
  • State: 1 = slave node, 0 = master node.
  • Current Connections: number of current connections.
  • Current Producer/Consumer Count: number of current producers/consumers.
  • Increased Producer/Consumer Count: number of newly added producers/consumers.
  • Enqueue/Dequeue Count: number of enqueued and dequeued messages.
  • Enqueue/Dequeue Rate: rate of enqueue and dequeue.
  • Memory Percent Usage: percentage of memory space used by undelivered messages.
  • Store Percent Usage: percentage of store space used by pending persistent messages.
  • Temp Percent Usage: percentage of temp space used by non-persistent messages.
  • Average/Max Message Size: average/maximum size of messages.
  • Queue Size: number of messages in the queue.

For ActiveMQ Classic destination:

[image: ActiveMQ Classic destination dashboard screenshot]
  • Producer/Consumer Count: number of producers/consumers.
  • Queue Size: number of unacknowledged messages in the queue.
  • Memory Usage: usage of memory.
  • Enqueue/Dequeue/Dispatch/Expired/Inflight Count: number of messages in different states.
  • Average/Max Message Size: average/maximum size of messages.
  • Average/Max Enqueue Time: average/maximum time messages spend in the queue.

Due to limitations of the activemq-5.x agent plugin, it cannot fit the hierarchy definition for now.
I have to fix the agent part later, then push the hierarchy definition.

@wu-sheng wu-sheng added this to the 10.0.0 milestone Apr 16, 2024
@wu-sheng wu-sheng added backend OAP backend related. feature New feature labels Apr 16, 2024
@wu-sheng
Member

The UI submodule seems to have been changed unexpectedly?

@wu-sheng
Member

Another question: what is the status of service hierarchy for this new ActiveMQ monitoring?

@CzyerChen
Contributor Author

> Another question: what is the status of service hierarchy for this new ActiveMQ monitoring?

Now:
virtual_mq service name: 192.168.208.2:61616
agent peer: 192.168.208.2:61616
dashboard service name: activemq.skywalking-showcase.svc.local

Currently, it is not compatible with building the service hierarchy by service name.

@wu-sheng
Member

That rule should be based on k8s deployment, not VM.

@CzyerChen
Contributor Author

> The UI submodule seems to have been changed unexpectedly?

Reverted.

@CzyerChen
Contributor Author

> That rule should be based on k8s deployment, not VM.

The ActiveMQ agent side sets Tags.MQ_BROKER to the raw IP address:port, rather than hostname:port.

In K8S:
ACTIVEMQ-SERVER(ip:port) -> VIRTUAL_MQ( ip:port ): supports service hierarchy
VIRTUAL_MQ( ip:port ) -> ACTIVEMQ cluster monitoring(activemq.skywalking-showcase): cannot support service hierarchy
ACTIVEMQ cluster monitoring(activemq.skywalking-showcase) -> K8S_SERVICE(skywalking-showcase::activemq.skywalking-showcase): supports service hierarchy

@wu-sheng
Member

wu-sheng commented Apr 16, 2024

@wankai123 Do we have an IP-to-service-name resolver to support hierarchy detection? I think that would be useful.

@wu-sheng
Member

> That rule should be based on k8s deployment, not VM.
>
> The ActiveMQ agent side sets Tags.MQ_BROKER to the raw IP address:port, rather than hostname:port.
>
> In K8S:
> ACTIVEMQ-SERVER(ip:port) -> VIRTUAL_MQ( ip:port ): supports service hierarchy
> VIRTUAL_MQ( ip:port ) -> ACTIVEMQ cluster monitoring(activemq.skywalking-showcase): cannot support service hierarchy
> ACTIVEMQ cluster monitoring(activemq.skywalking-showcase) -> K8S_SERVICE(skywalking-showcase::activemq.skywalking-showcase): supports service hierarchy

So, the name of the ActiveMQ service is hard-coded only?

@wu-sheng
Member

@CzyerChen When ActiveMQ is deployed in k8s, the OTEL collector should be able to use sidecar mode to get the service name aligned with the target service's name, and the instance name as the pod name.

@weixiang1862 Do you know how to set up sidecar?
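For reference, a sidecar layout like the one suggested could look roughly like this (a hypothetical sketch: names, image tags, and the ConfigMap are placeholders, not part of this PR):

```yaml
# Hypothetical sidecar layout: names and image tags are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq
spec:
  replicas: 1
  selector:
    matchLabels: { app: activemq }
  template:
    metadata:
      labels: { app: activemq }
    spec:
      containers:
        - name: activemq
          image: apache/activemq-classic:6.1.0
          ports:
            - containerPort: 61616   # broker transport
            - containerPort: 8161    # web console / Jolokia
        - name: otel-collector       # sidecar scraping the broker in the same pod
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/etc/otel/config.yaml"]
          env:
            # Expose the pod name so the collector config can report it
            # as the instance name.
            - name: POD_NAME
              valueFrom: { fieldRef: { fieldPath: metadata.name } }
          volumeMounts:
            - name: otel-config
              mountPath: /etc/otel
      volumes:
        - name: otel-config
          configMap: { name: activemq-otel-config }
```

Since both containers share the pod's network namespace, the collector can scrape the broker via localhost.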

@wu-sheng
Member

> That rule should be based on k8s deployment, not VM.
>
> The ActiveMQ agent side sets Tags.MQ_BROKER to the raw IP address:port, rather than hostname:port.
>
> In K8S:
> ACTIVEMQ-SERVER(ip:port) -> VIRTUAL_MQ( ip:port ): supports service hierarchy
> VIRTUAL_MQ( ip:port ) -> ACTIVEMQ cluster monitoring(activemq.skywalking-showcase): cannot support service hierarchy
> ACTIVEMQ cluster monitoring(activemq.skywalking-showcase) -> K8S_SERVICE(skywalking-showcase::activemq.skywalking-showcase): supports service hierarchy

About this, in k8s:

  • VIRTUAL_MQ( ip:port ) could be the service name; does the ActiveMQ plugin (Java agent) resolve the domain name and get the IP?
  • ACTIVEMQ cluster monitoring(activemq.skywalking-showcase) should be the k8s service name, right?

@wankai123
Member

@CzyerChen

> ACTIVEMQ-SERVER(ip:port) -> VIRTUAL_MQ( ip:port )

Where is the ACTIVEMQ-SERVER(ip:port) data from?

@CzyerChen
Contributor Author

> That rule should be based on k8s deployment, not VM.
>
> The ActiveMQ agent side sets Tags.MQ_BROKER to the raw IP address:port, rather than hostname:port.
>
> About this, in k8s:
>
> • VIRTUAL_MQ( ip:port ) could be the service name; does the ActiveMQ plugin (Java agent) resolve the domain name and get the IP?
> • ACTIVEMQ cluster monitoring(activemq.skywalking-showcase) should be the k8s service name, right?

Yes.

@CzyerChen
Contributor Author

> @CzyerChen
>
> ACTIVEMQ-SERVER(ip:port) -> VIRTUAL_MQ( ip:port )
>
> Where is the ACTIVEMQ-SERVER(ip:port) data from?

Data reported by the ActiveMQ plugin (Java agent).

@wu-sheng
Member

> That rule should be based on k8s deployment, not VM.
>
> The ActiveMQ agent side sets Tags.MQ_BROKER to the raw IP address:port, rather than hostname:port.
>
> About this, in k8s:
>
> • VIRTUAL_MQ( ip:port ) could be the service name; does the ActiveMQ plugin (Java agent) resolve the domain name and get the IP?
> • ACTIVEMQ cluster monitoring(activemq.skywalking-showcase) should be the k8s service name, right?
>
> Yes.

Let's separate this in two ways.

  1. Service hierarchy still works if you are building ActiveMQ server monitoring with K8s service monitoring. The hierarchy between those two isn't affected by the agent use case.
  2. About the agent reporting the service IP: is this a good practice? A virtual MQ should usually be a logical concept; if the MQ is restarted by k8s, the IP could change, but the DNS name would not. Nowadays, k8s deployment is very popular and is the key use case of the hierarchy feature, so we only need to focus on that. How about adding a remote.ip tag for the actual IP, and using peer for the domain name?

@wankai123
Member

> @CzyerChen
>
> ACTIVEMQ-SERVER(ip:port) -> VIRTUAL_MQ( ip:port )
>
> Where is the ACTIVEMQ-SERVER(ip:port) data from?
>
> Data reported by the ActiveMQ plugin (Java agent).

So, the ACTIVEMQ-SERVER should be an instance rather than a service? If not, who does this IP belong to, a k8s service?

@CzyerChen
Contributor Author

Part 2 (the agent side needs to upgrade):

I think we could put the rules here once and for all, as the agent side currently has a totally different release period. The next (9.3) agent release could be 3 months later.

- VIRTUAL_MQ -> ACTIVEMQ may have to consider cluster mode, rather than a single node. If the client connects via failover:(tcp://amq1:61616,tcp://amq2:61617), it may be hard to continue the service hierarchy.

@wu-sheng
Member

> - VIRTUAL_MQ -> ACTIVEMQ may have to consider cluster mode, rather than a single node. If the client connects via failover:(tcp://amq1:61616,tcp://amq2:61617), it may be hard to continue the service hierarchy.

How could this be in k8s? I think k8s should be DNS-oriented, right? Even with a cluster, the client should use a service address like activemq-svr.namespace.cluster.

VM deployment is complex; it is hard for us to make service hierarchy work out-of-the-box in that deployment.

Review thread on docs/en/swip/readme.md (outdated, resolved).
@wu-sheng
Member

@CzyerChen How about the latest question?

@CzyerChen
Contributor Author

> - VIRTUAL_MQ -> ACTIVEMQ may have to consider cluster mode, rather than a single node. If the client connects via failover:(tcp://amq1:61616,tcp://amq2:61617), it may be hard to continue the service hierarchy.
>
> How could this be in k8s? I think k8s should be DNS-oriented, right? Even with a cluster, the client should use a service address like activemq-svr.namespace.cluster.
>
> VM deployment is complex; it is hard for us to make service hierarchy work out-of-the-box in that deployment.

Normally in K8S we use the failover transport to connect to server nodes on the client side, like failover:(tcp://broker1.namespace.cluster:61616,tcp://broker2.namespace.cluster:61616,tcp://broker3.namespace.cluster:61616)?transportOptions.

In Master-Slave or Broker cluster mode, if only one broker node is configured on the client side, failover will not work automatically.
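The failover URI above is just a string assembled from the individual broker endpoints. A minimal client-side sketch of building it (the broker hostnames are the hypothetical ones from the comment; `randomize` is a real failover transport option):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FailoverUri {
    // Builds an ActiveMQ Classic failover transport URI from broker endpoints.
    static String build(List<String> brokers, String options) {
        String inner = brokers.stream()
                .map(b -> "tcp://" + b)
                .collect(Collectors.joining(","));
        String uri = "failover:(" + inner + ")";
        return (options == null || options.isEmpty()) ? uri : uri + "?" + options;
    }

    public static void main(String[] args) {
        String uri = build(
                List.of("broker1.namespace.cluster:61616",
                        "broker2.namespace.cluster:61616",
                        "broker3.namespace.cluster:61616"),
                "randomize=false");
        System.out.println(uri);
    }
}
```

Every broker in the Classic cluster must be listed here, which is exactly why the hierarchy mapping to a single peer address is hard in this mode.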

@wu-sheng
Member

Are you saying this is the recommended deployment for ActiveMQ on k8s?
No unified domain name for its cloud-native deployment by adopting the k8s DNS resolving mechanism?

@wu-sheng
Member

Typically in k8s, two or more deployments can be combined as one service, and failover can be done internally and natively.

@CzyerChen
Contributor Author

> Are you saying this is the recommended deployment for ActiveMQ on k8s? No unified domain name for its cloud-native deployment by adopting the k8s DNS resolving mechanism?

In ActiveMQ Classic, the client currently needs to list the addresses of all brokers in the cluster (there is no specific description for K8S deployment). But after Artemis introduced JGroups auto-discovery, the client no longer needs to list the addresses of all brokers in the cluster; it only needs the logical address of the cluster.

For Classic, there is no nameserver to provide a unified connection portal.

@wu-sheng
Member

OK, @wankai123 Let us skip that.

@wu-sheng wu-sheng requested a review from wankai123 April 18, 2024 02:54
@wankai123
Member

The others are LGTM.

Member

@wu-sheng wu-sheng left a comment

lgtm

@wu-sheng wu-sheng merged commit 7e97826 into apache:master Apr 18, 2024
153 checks passed
@wankai123
Member

@CzyerChen, Hi, I found that some expressions did not add service_instance_id in activemq-cluster.yaml. Is that expected?

@CzyerChen
Contributor Author

> @CzyerChen, Hi, I found that some expressions did not add service_instance_id in activemq-cluster.yaml. Is that expected?

In Master-Slave mode, cluster metrics are only reported from the master node, but JVM metrics are reported from all nodes. So the JVM metrics add service_instance_id, while the cluster metrics do not.
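A hypothetical sketch of that distinction, in MAL-style rules (the metric names and expressions are illustrative, not the actual contents of activemq-cluster.yaml):

```yaml
# Illustrative only: names and expressions are hypothetical.
metricsRules:
  # Cluster metric: in Master-Slave mode only the master reports it,
  # so it is grouped by service alone (no service_instance_id label).
  - name: activemq_cluster_enqueue_rate
    exp: activemq_enqueue_count.sum(['service']).rate('PT1M')
  # JVM metric: every node reports it, so the instance label is kept.
  - name: activemq_jvm_threads_current
    exp: activemq_jvm_threads.sum(['service', 'service_instance_id'])
```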
