save the last active_node in send() request #20

Closed
dberardo-com opened this issue Oct 2, 2024 · 1 comment
Comments

@dberardo-com
When using the Sender in cluster mode, it seems the Python client checks all nodes from the beginning until a match is found on every single send. This is wasteful.

Is it possible to store the latest active_node per cluster, try that one first, and check the others only in case of timeouts?

@aiantsen
Contributor

aiantsen commented May 7, 2025

Hi @dberardo-com, thanks for your report.

Unfortunately, I couldn't reproduce the described problem. I used the following code sample to check the nodes' order:

from zabbix_utils import Sender

sender = Sender(use_config=True)

for _ in range(5):
    print(sender.clusters)
    sender.send_value('host', 'item.key', 'value', 1695713666, 30)

and got the following result:

[[["zabbix.cluster.node1", 10052], ["zabbix.cluster.node2", 10051], ["127.0.0.1", 10051]]]
[[["127.0.0.1", 10051], ["zabbix.cluster.node2", 10051], ["zabbix.cluster.node1", 10052]]]
[[["127.0.0.1", 10051], ["zabbix.cluster.node2", 10051], ["zabbix.cluster.node1", 10052]]]
[[["127.0.0.1", 10051], ["zabbix.cluster.node2", 10051], ["zabbix.cluster.node1", 10052]]]
[[["127.0.0.1", 10051], ["zabbix.cluster.node2", 10051], ["zabbix.cluster.node1", 10052]]]

So, the main point here is that the active_node is stored in the Sender object. Using a single Sender object keeps the latest active_node, so the library connects to that node first.
You probably used several Sender instances, which caused the cluster nodes to be ordered from scratch every time.
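For illustration, the caching behavior described above can be sketched as a minimal fallback loop. This is a hypothetical standalone model (the class and method names are not from zabbix_utils), showing how remembering the last working node changes the try order:

```python
class NodeCache:
    """Remember the last node that accepted a send and try it first."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.active = None  # last node that worked, None until a send succeeds

    def ordered(self):
        # Put the cached active node first; keep the rest in original order.
        if self.active in self.nodes:
            return [self.active] + [n for n in self.nodes if n != self.active]
        return list(self.nodes)

    def send(self, connect):
        # connect(node) -> True on success; fall back through the others.
        for node in self.ordered():
            if connect(node):
                self.active = node
                return node
        raise ConnectionError("no cluster node reachable")


cache = NodeCache(["node1", "node2", "node3"])
# Suppose only node2 is reachable: the first send scans node1, then node2.
cache.send(lambda n: n == "node2")
# Subsequent sends try node2 first, matching the reordered output above.
print(cache.ordered())
```

The key design point, mirrored by Sender, is that the cache lives on the object: creating a new instance per send discards `active` and forces a full scan each time.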

I hope this explanation helps you deal with the problem mentioned. In any case, feel free to create another issue if the problem persists.

@aiantsen aiantsen closed this as completed May 7, 2025