I get error unavailable (14): Transport became inactive with a big number (>20) of unary requests #2202
Comments
If the client closed the connection because of a local error condition, we should see that. So the working hypothesis is that the server dropped the connection. It's unclear why that's the case. Could you provide the complete trace logs for when this happens? It might shed some light on this.
@glbrntt Are these logs of any value?
No, that will be a result of the connection being dropped, not the reason it drops.
In my code, I do the following
I noticed that the error appears when I cancel the task on which I execute the request. After removing the task cancellation, I found that the number of executed requests goes up to 1000+ with no problems. Why is this? cc: @glbrntt
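For context, a minimal sketch of this pattern, assuming a table view cell that starts a Task for its two requests and cancels it on reuse; the functions and types below are illustrative stand-ins, not the project's actual generated gRPC code:

```swift
import UIKit

// Stand-ins for the two generated gRPC unary calls used by the real app.
// These names are illustrative assumptions, not the project's actual API.
func fetchDetails(id: String) async throws -> String { "details-\(id)" }
func fetchImage(id: String) async throws -> Data { Data() }

final class ItemCell: UITableViewCell {
  private var loadTask: Task<Void, Never>?

  func configure(with id: String) {
    loadTask = Task {
      do {
        // Two unary calls started concurrently with `async let`, as in the report.
        async let details = fetchDetails(id: id)
        async let image = fetchImage(id: id)
        let d = try await details
        let i = try await image
        _ = (d, i) // ... update the UI ...
      } catch {
        // A cancelled or timed-out RPC surfaces here.
      }
    }
  }

  override func prepareForReuse() {
    super.prepareForReuse()
    // Cancelling the task cancels any in-flight RPCs, which resets their
    // HTTP/2 streams; rapid scrolling produces many resets in a short time.
    loadTask?.cancel()
    loadTask = nil
  }
}
```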
I also found out that if a timeout is triggered in the requests, my channel goes down as well.
It looks like the main problem is that a particular stream is not found in the list of streams, after which the channel is closed. If I load data into a cell and cancel the tasks that are no longer needed, I always catch the error on stream 29.
Why is that happening? cc: @glbrntt
Found related issue: #1421
I tried creating one channel per request, and it did help. However, it's impractical because the loading time becomes incredibly long. I really need a solution that keeps the channel from closing under a large number of requests. I believe this behavior is indeed a bug. I also haven't found any way to reduce the channel's connection time to the server, so the question remains relevant. I also can't use the new version of the library (2.0.0) because it only supports iOS 18, and our project supports iOS 15. cc: @glbrntt
Basically, I understand it like this, and I'm almost sure it's true. We cancel a stream, and it gets added to a list of recently reset streams. If we scroll quickly, this reset list fills up very fast, and streams get removed from it just as quickly. Then packets arrive for old streams/requests, which we try to process. We check whether the corresponding stream exists in that reset list, and when it doesn't, the frame is treated as a connection error and the channel is closed.
Oh, and yeah, this reset-list limit doesn't appear to be configurable.
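A conceptual sketch of that failure mode, under the assumption above; this only models the idea of a bounded list of recently reset streams and is not the actual swift-nio-http2 implementation (the names and the capacity value are illustrative):

```swift
// Conceptual model only; names and sizes are illustrative, not swift-nio-http2's code.
struct ResetStreamTracker {
    private var recentlyReset: [Int] = []   // oldest first
    private let capacity: Int               // illustrative bound

    init(capacity: Int = 32) { self.capacity = capacity }

    mutating func streamWasReset(_ streamID: Int) {
        recentlyReset.append(streamID)
        if recentlyReset.count > capacity {
            // Older reset streams are forgotten once the bound is exceeded.
            recentlyReset.removeFirst(recentlyReset.count - capacity)
        }
    }

    /// Called when a frame arrives for a stream that is not currently active.
    func frameIsAcceptable(for streamID: Int, activeStreams: Set<Int>) -> Bool {
        if activeStreams.contains(streamID) { return true }   // normal delivery
        if recentlyReset.contains(streamID) { return true }   // late frame, safely ignored
        return false  // unknown stream: treated as a connection error, channel closes
    }
}
```

Under rapid cancellation the list evicts entries faster than late frames arrive for them, so the last branch fires and the whole connection is torn down, which would explain why every in-flight call then fails with unavailable (14): Transport became inactive.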
Hey @sergeiromanchuk -- thanks for providing all of this info and good job on diagnosing the issue! I think allowing the max reset streams limit to be configurable and then exposing that config in gRPC is reasonable. I'll open a separate issue in https://github.com/apple/swift-nio-http2 to track doing that. We can track the gRPC changes in this issue. In the meantime you can work around this by not cancelling the streams.
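A minimal sketch of that workaround, assuming the cell keeps a generation counter and simply ignores stale results instead of cancelling the in-flight RPCs; the names here are illustrative:

```swift
import UIKit

final class ItemCell: UITableViewCell {
  private var generation = 0

  // `load` stands in for the generated gRPC unary call; the name is illustrative.
  func configure(with id: String, load: @escaping @Sendable (String) async throws -> String) {
    generation += 1
    let current = generation
    Task { [weak self] in
      // Let the RPC run to completion; no cancellation means no RST_STREAM
      // is sent for its HTTP/2 stream.
      guard let result = try? await load(id) else { return }
      // Hop back to the main actor before touching UIKit.
      await MainActor.run {
        guard let self, self.generation == current else { return } // reused cell: drop stale result
        _ = result // ... update the UI with `result` ...
      }
    }
  }
}
```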
Thanks for the reply @glbrntt. Unfortunately, it's not enough to just not cancel streams; I'll also have to limit the number of calls, because the channel also drops with an error if a request completes with a timeout. I think that is another issue.
Here's the HTTP/2 issue: apple/swift-nio-http2#493
Can you elaborate on this one? An RPC timing out won't close the connection, but an RPC may time out because of a connection timeout.
When I sent a lot of tasks (~1000) without cancelling them, the connection would process them all for about 3 minutes and they would drop with a timeout error. After all the streams were processed, the connection would also close with an error.
Describe the bug
I have a table in which each cell sends two requests to gRPC using `async let`. If I scroll through the table slowly, I almost never encounter any problems. But if I start scrolling the table quickly, every request I send throws the error `unavailable (14): Transport became inactive`.
This usually happens when the number of streams in use exceeds 20.
channel log
[{Default}] connectionUtilizationChanged(id:streamsUsed:streamCapacity:)::ChannelProvider.swift::191:19: [API v2] → ChannelProvider::connectionUtilizationChanged(id:streamsUsed:streamCapacity:). Connection ID: ObjectIdentifier(0x000000011ee58900). Stream capacity: 2147483647. Streams used: 21
channel error
[{Default}] connectionClosed(id:error:)::ChannelProvider.swift::205:19: [API v2] → ChannelProvider::connectionClosed(id:error:). Connection ID: ObjectIdentifier(0x000000011ee58900). Error: Optional(NIOCore.ChannelError.ioOnClosedChannel)
request error
[{Default}] xxx()::XXXService.swift::67:23: [API] → XXXService::xxx(). XXX service completed with result: success(unavailable (14): Transport became inactive)
trace
2025-02-27T09:45:36+0100 trace [CallOptions] : call_state=closed grpc.conn.addr_local=10.179.0.18 grpc.conn.addr_remote=10.229.0.160 grpc_request_id=5BFD6881-842F-42AB-95F1-AC0E322817FE [GRPC] failing buffered writes
SPM:
Protobuf 1.25.2
gRPC 1.65.1
gRPC-Swift 1.21.0
Platform: iOS
Expected behaviour
I expect the channel to handle this number of requests without issue, since, as the logs show, it is only 20-30 requests.
Additional information
Options for each unary call:
Keepalive
GRPCChannelPool configuration:
Request example:
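For reference, a minimal sketch of a pooled channel with keepalive, per-call options, and a unary request, assuming the grpc-swift 1.x API; the host, port, timeout values, and the commented-out client are illustrative assumptions rather than the project's actual configuration:

```swift
import GRPC
import NIOCore

// Builds a pooled channel; the endpoint is illustrative, not the real host or port.
func makeChannel(group: EventLoopGroup) throws -> GRPCChannel {
  try GRPCChannelPool.with(
    target: .host("api.example.com", port: 443),
    transportSecurity: .plaintext,  // plaintext keeps the sketch minimal; the real app uses TLS
    eventLoopGroup: group
  ) { configuration in
    // Keepalive pings so an otherwise idle connection is not silently dropped.
    configuration.keepalive = ClientConnectionKeepalive(
      interval: .seconds(30),
      timeout: .seconds(10)
    )
  }
}

// Per-call options: a deadline for every unary call.
var callOptions = CallOptions()
callOptions.timeLimit = .timeout(.seconds(20))

// A generated async client would then be used roughly like this
// (the service and method names below are hypothetical):
// let client = Item_ItemServiceAsyncClient(channel: channel, defaultCallOptions: callOptions)
// let details = try await client.getDetails(.with { $0.id = "42" })
```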
Generated code: