-
Hi @bhver3593,

The problem may not be related to gRPC at all. If possible, try running your test application directly on the CompactRIO controller; that would rule out gRPC and grpc-device entirely.

It sounds like you're using a continuous buffered acquisition, as described in Continuous Acquisition and Generation with Finite Buffer Size. The DAQmx task allocates a circular buffer: the CompactRIO chassis driver writes to this buffer, and your application reads from it. For best performance, the buffer size should be larger than the number of samples per read. This is important because both the reader and writer lock the portion of the buffer they are accessing; if both lock the entire buffer, the reader can't access the buffer while the writer is accessing it, and vice versa. However, if this were the problem, I think you would also see buffer underflow or buffer overwrite errors (unless you have configured the Overwrite Mode). When you call DAQmx Read, how the driver waits for the requested samples to arrive is governed by the read Wait Mode and Sleep Time attributes, which can also contribute to latency.

Note that DAQmx has an alternative to buffered mode that is optimized for latency: Hardware Timed Single Point (HWTSP). This mode disables the circular buffer, but the wait mode and sleep time still apply. HWTSP is commonly used with an explicit wait, performed using the DAQmxWaitForNextSampleClock function.
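If you want to try HWTSP, a minimal sketch using the plain DAQmx C API is below. Error checking is omitted; the channel string and 100 Hz rate are placeholders, and the module/chassis must support HWTSP mode.

```cpp
// Sketch: hardware-timed single-point (HWTSP) AI with an explicit wait,
// using the plain DAQmx C API. "cRIO1Mod1/ai0:7" is a placeholder.
#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle task = 0;
    float64 data[8];
    int32 read = 0;
    bool32 isLate = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "cRIO1Mod1/ai0:7", "",
                             DAQmx_Val_Cfg_Default, -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    // HWTSP disables the circular buffer entirely.
    DAQmxCfgSampClkTiming(task, NULL, 100.0, DAQmx_Val_Rising,
                          DAQmx_Val_HWTimedSinglePoint, 1);
    // Wait mode still applies in HWTSP; WaitForInterrupt trades CPU for
    // latency differently than the default polling/sleeping behavior.
    DAQmxSetReadWaitMode(task, DAQmx_Val_WaitForInterrupt);
    DAQmxStartTask(task);

    for (int i = 0; i < 1000; ++i) {
        // Block until the next hardware sample clock edge.
        DAQmxWaitForNextSampleClock(task, 10.0, &isLate);
        if (isLate)
            printf("missed sample clock at iteration %d\n", i);
        DAQmxReadAnalogF64(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                           data, 8, &read, NULL);
    }

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}
```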
-
These are great steps towards isolating the problem. By testing single-point DIO, you have removed analog input buffering from the picture. All of the DAQ-specific sources of latency I described in my previous reply are no longer possible. Your DIO test is explicitly starting the task, which eliminates another common source of latency (auto-start). By testing the DAQmx C API running directly on the cRIO, you have confirmed that the issue is related to gRPC, NI gRPC Device Server, or networking in general.
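To illustrate why the explicit start matters, here is a minimal sketch (plain DAQmx C API; the channel setup is assumed to have happened elsewhere, and the exact cost of implicit state transitions varies by device):

```cpp
// Sketch: with an explicit DAQmxStartTask, the task stays in the Running
// state across the loop. Without it, each read implicitly starts and
// stops the task, which adds per-call overhead.
#include <NIDAQmx.h>

void readLoop(TaskHandle task)
{
    uInt32 value = 0;
    int32 read = 0;

    DAQmxStartTask(task);  // explicit start: pay the transition cost once
    for (int i = 0; i < 1000; ++i) {
        // With the task already running, each call is just a read.
        DAQmxReadDigitalU32(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                            &value, 1, &read, NULL);
    }
    DAQmxStopTask(task);
}
```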
I'm not sure what you're describing here. Are you testing with a non-gRPC TCP client and server? That would rule out networking in general, and continue to point towards something specific about gRPC or NI gRPC Device Server. Here are some ways to further isolate the problem:
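For example, a plain-TCP round-trip test could look like the sketch below. The echo server, IP address, and port are hypothetical placeholders, not part of grpc-device.

```cpp
// Sketch: plain-TCP round-trip timing against an echo server, to separate
// generic network latency from gRPC.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>

int main()
{
    const int fd = socket(AF_INET, SOCK_STREAM, 0);
    const int one = 1;
    // Disable Nagle's algorithm; small request/response exchanges are
    // exactly the pattern it can delay.
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                        // placeholder port
    inet_pton(AF_INET, "192.168.1.2", &addr.sin_addr);  // placeholder cRIO IP
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0)
        return 1;

    char buf[64] = "ping";
    for (int i = 0; i < 1000; ++i) {
        const auto t0 = std::chrono::steady_clock::now();
        send(fd, buf, sizeof(buf), 0);
        recv(fd, buf, sizeof(buf), 0);
        const long long us = std::chrono::duration_cast<std::chrono::microseconds>(
                                 std::chrono::steady_clock::now() - t0).count();
        if (us > 5000)  // flag round trips slower than 5 ms
            std::printf("iteration %d: %lld us\n", i, us);
        usleep(10000);  // same 10 ms pacing as the gRPC test
    }
    close(fd);
    return 0;
}
```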
Note about support venue: I am a software developer at NI who has worked on both DAQmx and NI gRPC Device Server in the past. I am in touch with the Field Applications Engineer who has been helping you. Please continue to work closely with him so that he can reproduce your problem, escalate it as needed, and get it resolved.
-
Hello, I am experiencing latency issues when sending NI-DAQmx gRPC commands from a computer to a CompactRIO system hosting a gRPC device server over an Ethernet cable. The CompactRIO has been chosen as the hardware for this system and cannot be changed.
I've created a C++ wrapper to interact with the NI 9221 analog input card. During initialization, the wrapper connects to the gRPC server address, configures the AI channels and clock timing, and starts the task. The sample mode is continuous acquisition, with a sample clock rate of 100 samples/second per channel and an internal buffer size of 1. After setup, I can successfully read samples from all 8 AI channels.
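For reference, the equivalent setup through the local DAQmx C API would look roughly like the sketch below. This is not my gRPC wrapper code, just its local equivalent; error handling is omitted and the channel string is a placeholder.

```cpp
// Sketch: the same configuration through the local DAQmx C API.
// "cRIO1Mod1/ai0:7" is a placeholder for the NI 9221 channels.
#include <NIDAQmx.h>

TaskHandle setupTask(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "cRIO1Mod1/ai0:7", "",
                             DAQmx_Val_Cfg_Default, -60.0, 60.0,
                             DAQmx_Val_Volts, NULL);
    // Continuous acquisition at 100 S/s per channel.
    DAQmxCfgSampClkTiming(task, NULL, 100.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1);
    // Request a 1-sample buffer; note that DAQmx may round the actual
    // buffer size up to a driver-defined minimum.
    DAQmxCfgInputBuffer(task, 1);
    DAQmxStartTask(task);
    return task;
}
```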
To measure the latency of reading analog input samples from all 8 channels, I created a simple performance test that calls the gRPC client command ReadAnalogF64 1000 times, collecting 1 sample per channel, with a 10 millisecond sleep between calls. Here are my performance results:
To resolve this issue, I've adjusted the timeout parameter of the NI-DAQmx gRPC read command, as well as other read parameters, but the latency spikes persist without any gRPC or NI-DAQmx error messages. However, updating the set_deadline function of the grpc::ClientContext resulted in a significant increase in latency spikes and a flood of gRPC error messages.
Interestingly, the number of function calls between spikes seems to be related to the sleep duration: testing with different sleep durations suggests that the spikes occur at a consistent rate in time rather than per call. I'm looking for insights into the cause of these latency spikes and potential solutions. The timing loop is sketched below.
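Here is a minimal sketch of that timing loop; the readSample() stub stands in for my gRPC ReadAnalogF64 wrapper, so only the timing scaffold is shown.

```cpp
// Sketch of the performance test's timing loop.
#include <chrono>
#include <cstdio>
#include <thread>

static void readSample()
{
    // Placeholder: invoke the gRPC client's ReadAnalogF64 here
    // (1 sample per channel across all 8 AI channels).
}

int main()
{
    using clock = std::chrono::steady_clock;
    long long worstUs = 0;

    for (int i = 0; i < 1000; ++i) {
        const auto t0 = clock::now();
        readSample();
        const long long us = std::chrono::duration_cast<std::chrono::microseconds>(
                                 clock::now() - t0).count();
        if (us > worstUs) {
            worstUs = us;
            std::printf("iteration %d: new worst-case read %lld us\n", i, worstUs);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // 10 ms pacing
    }
    return 0;
}
```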