
Why does Odyssey return empty metrics? #651

Closed
sheldygg opened this issue Aug 7, 2024 · 20 comments


@sheldygg

sheldygg commented Aug 7, 2024

I built Odyssey with the C Prometheus client library and enabled metrics in the config with these parameters:

promhttp_server_port 3422
log_general_stats_prom yes
log_route_stats_prom yes

When I try to export metrics via the endpoint, I receive empty metrics:

curl 127.0.0.1:3422/metrics
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 100000

# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total gauge
process_cpu_seconds_total 0

# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1378160640

# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 82218460

# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 536

# HELP database_len Total databases count
# TYPE database_len gauge

# HELP server_pool_active Active servers count
# TYPE server_pool_active gauge

# HELP server_pool_idle Idle servers count
# TYPE server_pool_idle gauge

# HELP user_len Total users count
# TYPE user_len gauge

# HELP msg_allocated Messages allocated
# TYPE msg_allocated gauge

# HELP msg_cache_count Messages cached
# TYPE msg_cache_count gauge

# HELP msg_cache_gc_count Messages freed
# TYPE msg_cache_gc_count gauge

# HELP msg_cache_size Messages cache size
# TYPE msg_cache_size gauge

# HELP count_coroutine Coroutines running
# TYPE count_coroutine gauge

# HELP count_coroutine_cache Coroutines cached
# TYPE count_coroutine_cache gauge

# HELP clients_processed Number of processed clients
# TYPE clients_processed gauge

# HELP client_pool_total Total database clients count
# TYPE client_pool_total gauge

# HELP avg_tx_count Average transactions count per second
# TYPE avg_tx_count gauge

# HELP avg_tx_time Average transaction time in usec
# TYPE avg_tx_time gauge

# HELP avg_query_count Average query count per second
# TYPE avg_query_count gauge

# HELP avg_query_time Average query time in usec
# TYPE avg_query_time gauge

# HELP avg_recv_client Average in bytes/sec
# TYPE avg_recv_client gauge

# HELP avg_recv_server Average out bytes/sec
# TYPE avg_recv_server gauge

What's wrong?

@sheldygg
Author

sheldygg commented Aug 8, 2024

Can someone explain?
@x4m Please :)

@sheldygg
Author

@rkhapov @reshke
Sorry for the ping, but this is an important issue.

@sheldygg
Author

Does anyone want to explain what's wrong?
@x4m @reshke @rkhapov @chipitsine @aidekqz @mialinx

@visill
Contributor

visill commented Oct 2, 2024

Try it with the latest version of Odyssey. Maybe you don't have PROMHTTP installed.
#697
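A quick way to confirm both libraries are actually installed (a rough check, not specific to Odyssey; package names and paths may differ on your system):

dpkg -l | grep -E 'libprom|libpromhttp'
ls -l /usr/lib/libprom.so /usr/lib/libpromhttp.so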

@sheldygg
Author

sheldygg commented Oct 2, 2024

I built it with this Dockerfile:

FROM debian:bookworm-slim as builder

WORKDIR /tmp/

RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends \
    curl \
    lsb-release \
    ca-certificates \
    libssl-dev \
    gnupg \
    openssl \
    wget

RUN curl https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
    sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

RUN set -ex \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
       build-essential \
       cmake \
       gcc \
       gdb \
       git \
       libpam-dev \
       libzstd-dev \
       zlib1g-dev \
       valgrind \
       libpq-dev \
       vim \
       postgresql-common \
       postgresql-server-dev-all \
       libmicrohttpd-dev \
    && wget https://github.com/digitalocean/prometheus-client-c/releases/download/v0.1.3/libprom-dev-0.1.3-Linux.deb \
    && dpkg -i libprom-dev-0.1.3-Linux.deb \
    && wget https://github.com/digitalocean/prometheus-client-c/releases/download/v0.1.3/libpromhttp-dev-0.1.3-Linux.deb \
    && dpkg -i libpromhttp-dev-0.1.3-Linux.deb || apt --fix-broken install -y \
    && dpkg -i libpromhttp-dev-0.1.3-Linux.deb \
    && git clone https://github.com/yandex/odyssey.git \
    && cd odyssey \
    && mkdir build \
    && cd build \
    && cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_DEBIAN=1 -DUSE_SCRAM=1 .. \
    && make

And as I can see from the build log, PROMHTTP is found:

#8 89.66 -- Found PROM: /usr/lib/libprom.so  
#8 89.66 -- Found PROM: /usr/lib/libprom.so
#8 89.66 -- Found PROMHTTP: /usr/lib/libpromhttp.so  
#8 89.66 -- Found PROMHTTP: /usr/lib/libpromhttp.so

And the build can be run with the parameters in the config.

But there are warnings about cron and prom:

#8 112.4 /tmp/odyssey/sources/cron.c: In function 'od_cron_stat_cb':
#8 112.4 /tmp/odyssey/sources/cron.c:75:68: warning: passing argument 6 of 'od_logger_write_plain' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
#8 112.4    75 |                                               "stats", NULL, NULL, prom_log);
#8 112.4       |                                                                    ^~~~~~~~
#8 112.4 In file included from /tmp/odyssey/sources/odyssey.h:40,
#8 112.4                  from /tmp/odyssey/sources/cron.c:10:
#8 112.4 /tmp/odyssey/sources/logger.h:58:51: note: expected 'char *' but argument is of type 'const char *'
#8 112.4    58 |                                   void *, void *, char *);
#8 112.4       |                                                   ^~~~~~
#8 112.4 /tmp/odyssey/sources/cron.c:76:30: warning: passing argument 1 of 'free' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
#8 112.4    76 |                         free(prom_log);
#8 112.4       |                              ^~~~~~~~
#8 112.4 In file included from /tmp/odyssey/third_party/kiwi/kiwi.h:10,
#8 112.4                  from /tmp/odyssey/sources/cron.c:8:
#8 112.4 /usr/include/stdlib.h:568:25: note: expected 'void *' but argument is of type 'const char *'
#8 112.4   568 | extern void free (void *__ptr) __THROW;
#8 112.4       |                   ~~~~~~^~~~~
#8 112.4 /tmp/odyssey/sources/cron.c: In function 'od_cron_stat':
#8 112.4 /tmp/odyssey/sources/cron.c:125:33: warning: initialization discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
#8 112.4   125 |                                 od_prom_metrics_get_stat(cron->metrics);
#8 112.4       |                                 ^~~~~~~~~~~~~~~~~~~~~~~~
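To rule out a linkage problem, ldd on the built binary should list both libraries (a sketch; the binary path below is an assumption and may differ in your build tree):

ldd /tmp/odyssey/build/sources/odyssey | grep -E 'libprom|libpromhttp'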

@ramili4

ramili4 commented Oct 7, 2024

Hi! Any updates on that issue? I've built it with a similar dockerfile, but it only lists metrics, LOL.

@sheldygg
Author

sheldygg commented Oct 7, 2024

Hi! Any updates on that issue? I've built it with a similar dockerfile, but it only lists metrics, LOL.

Hi, I don't know what's happening. I built it with this Dockerfile and ran it on Ubuntu 22.04, and I'm now trying on Ubuntu 24.04.

The endpoint returns empty metrics :(

@ramili4

ramili4 commented Oct 7, 2024

How did you get the libprom and libpromhttp libraries?

@sheldygg
Author

sheldygg commented Oct 7, 2024

wget https://github.com/digitalocean/prometheus-client-c/releases/download/v0.1.3/libprom-dev-0.1.3-Linux.deb
sudo dpkg -i libprom-dev-0.1.3-Linux.deb
wget https://github.com/digitalocean/prometheus-client-c/releases/download/v0.1.3/libpromhttp-dev-0.1.3-Linux.deb
sudo dpkg -i libpromhttp-dev-0.1.3-Linux.deb
sudo apt install libmicrohttpd-dev
sudo apt --fix-broken install
sudo dpkg -i libpromhttp-dev-0.1.3-Linux.deb
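After installing, the shared objects should be visible to the dynamic linker (a generic check, assuming the default /usr/lib install location from the .deb packages):

ldconfig -p | grep -E 'libprom|libpromhttp'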

@ramili4

ramili4 commented Oct 8, 2024

I'd suggest stopping firewalld, ufw, or iptables (whatever you have in place). Might be useful for debugging purposes.
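It can also help to confirm the exporter is actually listening and reachable locally (a sketch, assuming the port 3422 from the config above):

ss -ltnp | grep 3422
curl -v 127.0.0.1:3422/metrics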

@sheldygg
Author

sheldygg commented Oct 8, 2024

I disabled ufw and tried to get metrics; still empty.

@ramili4

ramili4 commented Oct 8, 2024

Check if iptables is running.

@sheldygg
Author

sheldygg commented Oct 8, 2024

sheldy@skb-second:~$ sudo ufw status
[sudo] password for sheldy: 
Status: inactive
sheldy@skb-second:~$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N ufw-after-forward
-N ufw-after-input
-N ufw-after-logging-forward
-N ufw-after-logging-input
-N ufw-after-logging-output
-N ufw-after-output
-N ufw-before-forward
-N ufw-before-input
-N ufw-before-logging-forward
-N ufw-before-logging-input
-N ufw-before-logging-output
-N ufw-before-output
-N ufw-reject-forward
-N ufw-reject-input
-N ufw-reject-output
-N ufw-track-forward
-N ufw-track-input
-N ufw-track-output
sheldy@skb-second:~$ curl localhost:3423/metrics
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024

# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total gauge
process_cpu_seconds_total 0

# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1375244288

# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 123230402

# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 65

# HELP database_len Total databases count
# TYPE database_len gauge

# HELP server_pool_active Active servers count
# TYPE server_pool_active gauge

# HELP server_pool_idle Idle servers count
# TYPE server_pool_idle gauge

# HELP user_len Total users count
# TYPE user_len gauge

# HELP msg_allocated Messages allocated
# TYPE msg_allocated gauge

# HELP msg_cache_count Messages cached
# TYPE msg_cache_count gauge

# HELP msg_cache_gc_count Messages freed
# TYPE msg_cache_gc_count gauge

# HELP msg_cache_size Messages cache size
# TYPE msg_cache_size gauge

# HELP count_coroutine Coroutines running
# TYPE count_coroutine gauge

# HELP count_coroutine_cache Coroutines cached
# TYPE count_coroutine_cache gauge

# HELP clients_processed Number of processed clients
# TYPE clients_processed gauge

# HELP client_pool_total Total database clients count
# TYPE client_pool_total gauge

# HELP avg_tx_count Average transactions count per second
# TYPE avg_tx_count gauge

# HELP avg_tx_time Average transaction time in usec
# TYPE avg_tx_time gauge

# HELP avg_query_count Average query count per second
# TYPE avg_query_count gauge

# HELP avg_query_time Average query time in usec
# TYPE avg_query_time gauge

# HELP avg_recv_client Average in bytes/sec
# TYPE avg_recv_client gauge

# HELP avg_recv_server Average out bytes/sec
# TYPE avg_recv_server gauge

@ramili4

ramili4 commented Oct 8, 2024

That's what I got:
metrics_exxample.txt

I simply allowed all traffic on port 7777

@ramili4

ramili4 commented Oct 10, 2024

What is the size of your image? I'm looking for ways to reduce the size of the final image.

@sheldygg
Author

Hi, I created a repository which reproduces my problem: https://github.com/sheldygg/odyssey-metrics-issue
Size of the build: 122 MB

@PashaKirillov

@sheldygg try changing the log_stats value from no to yes in odyssey.conf and wait at least 15 seconds before curling the metrics, in line with stats_interval 15.
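For reference, a minimal sketch of the relevant odyssey.conf settings (directive names taken from this thread; check the example config shipped with Odyssey for defaults):

log_stats yes
stats_interval 15
promhttp_server_port 3422
log_general_stats_prom yes
log_route_stats_prom yes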

@sheldygg
Author

@sheldygg try changing the log_stats value from no to yes in odyssey.conf and wait at least 15 seconds before curling the metrics, in line with stats_interval 15.

Looks like this helped:

curl localhost:3422/metrics
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1048576

# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total gauge
process_cpu_seconds_total 0

# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 2594840576

# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 33417787

# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 67

# HELP database_len Total databases count
# TYPE database_len gauge

# HELP server_pool_active Active servers count
# TYPE server_pool_active gauge

# HELP sever_pool_idle Idle servers count
# TYPE sever_pool_idle gauge

# HELP user_len Total users count
# TYPE user_len gauge

# HELP msg_allocated Messages allocated
# TYPE msg_allocated gauge
msg_allocated{worker="general"} 64
msg_allocated{worker="worker[0]"} 0
msg_allocated{worker="worker[2]"} 0
msg_allocated{worker="worker[4]"} 0
msg_allocated{worker="worker[3]"} 0
msg_allocated{worker="worker[7]"} 0
msg_allocated{worker="worker[1]"} 0
msg_allocated{worker="worker[6]"} 0
msg_allocated{worker="worker[5]"} 0

# HELP msg_cache_count Messages cached
# TYPE msg_cache_count gauge
msg_cache_count{worker="general"} 0
msg_cache_count{worker="worker[0]"} 0
msg_cache_count{worker="worker[2]"} 0
msg_cache_count{worker="worker[4]"} 0
msg_cache_count{worker="worker[3]"} 0
msg_cache_count{worker="worker[7]"} 0
msg_cache_count{worker="worker[1]"} 0
msg_cache_count{worker="worker[6]"} 0
msg_cache_count{worker="worker[5]"} 0

# HELP msg_cache_gc_count Messages freed
# TYPE msg_cache_gc_count gauge
msg_cache_gc_count{worker="general"} 1
msg_cache_gc_count{worker="worker[0]"} 7
msg_cache_gc_count{worker="worker[2]"} 7
msg_cache_gc_count{worker="worker[4]"} 7
msg_cache_gc_count{worker="worker[3]"} 7
msg_cache_gc_count{worker="worker[7]"} 7
msg_cache_gc_count{worker="worker[1]"} 7
msg_cache_gc_count{worker="worker[6]"} 7
msg_cache_gc_count{worker="worker[5]"} 7

# HELP msg_cache_size Messages cache size
# TYPE msg_cache_size gauge
msg_cache_size{worker="general"} 0
msg_cache_size{worker="worker[0]"} 0
msg_cache_size{worker="worker[2]"} 0
msg_cache_size{worker="worker[4]"} 0
msg_cache_size{worker="worker[3]"} 0
msg_cache_size{worker="worker[7]"} 0
msg_cache_size{worker="worker[1]"} 0
msg_cache_size{worker="worker[6]"} 0
msg_cache_size{worker="worker[5]"} 0

# HELP count_coroutine Coroutines running
# TYPE count_coroutine gauge
count_coroutine{worker="general"} 3
count_coroutine{worker="worker[0]"} 1
count_coroutine{worker="worker[2]"} 1
count_coroutine{worker="worker[4]"} 1
count_coroutine{worker="worker[3]"} 1
count_coroutine{worker="worker[7]"} 1
count_coroutine{worker="worker[1]"} 1
count_coroutine{worker="worker[6]"} 1
count_coroutine{worker="worker[5]"} 1

# HELP count_coroutine_cache Coroutines cached
# TYPE count_coroutine_cache gauge
count_coroutine_cache{worker="general"} 1
count_coroutine_cache{worker="worker[0]"} 0
count_coroutine_cache{worker="worker[4]"} 0
count_coroutine_cache{worker="worker[3]"} 0
count_coroutine_cache{worker="worker[7]"} 0
count_coroutine_cache{worker="worker[2]"} 0
count_coroutine_cache{worker="worker[1]"} 0
count_coroutine_cache{worker="worker[6]"} 0
count_coroutine_cache{worker="worker[5]"} 0

# HELP clients_processed Number of processed clients
# TYPE clients_processed gauge
clients_processed{worker="worker[0]"} 0
clients_processed{worker="worker[4]"} 0
clients_processed{worker="worker[3]"} 0
clients_processed{worker="worker[7]"} 0
clients_processed{worker="worker[2]"} 0
clients_processed{worker="worker[1]"} 0
clients_processed{worker="worker[6]"} 0
clients_processed{worker="worker[5]"} 0

# HELP client_pool_total Total database clients count
# TYPE client_pool_total gauge

# HELP avg_tx_count Average transactions count per second
# TYPE avg_tx_count gauge

# HELP avg_tx_time Average transaction time in usec
# TYPE avg_tx_time gauge

# HELP avg_query_count Average query count per second
# TYPE avg_query_count gauge

# HELP avg_query_time Average query time in usec
# TYPE avg_query_time gauge

# HELP avg_recv_client Average in bytes/sec
# TYPE avg_recv_client gauge

# HELP avg_recv_server Average out bytes/sec
# TYPE avg_recv_server gauge

@ramili4

ramili4 commented Oct 23, 2024

Don't you get a segmentation fault when getting metrics?

@sheldygg
Author

Don't you get a segmentation fault when getting metrics?

No, everything is OK.
