
Conversation

@markdroth (Contributor) commented Mar 14, 2025

Commit Message: xDS: ext_proc: add GRPC body send mode
Additional Description: Adds a new body send mode for gRPC traffic. Also adds a safe way for the ext_proc server to return OK status without losing data in FULL_DUPLEX_STREAMED and GRPC modes. See grpc/proposal#484 for context.
Risk Level: Low
Testing: N/A
Docs Changes: Included in PR
Release Notes: N/A
Platform Specific Features: N/A


As a reminder, PRs marked as draft will not be automatically assigned reviewers,
or be handled by maintainer-oncall triage.

Please mark your PR as ready when you want it to be reviewed!

🐱

Caused by: #38753 was opened by markdroth.



CC @envoyproxy/api-shepherds: Your approval is needed for changes made to (api/envoy/|docs/root/api-docs/).
envoyproxy/api-shepherds assignee is @abeyad
CC @envoyproxy/api-watchers: FYI only for changes made to (api/envoy/|docs/root/api-docs/).

🐱

Caused by: #38753 was opened by markdroth.



This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!

@github-actions bot added the "stale" label (stalebot believes this issue/PR has not been touched recently) on Apr 13, 2025
@markdroth added the "no stalebot" label (disables stalebot from closing an issue) and removed the "stale" label on Apr 17, 2025
@stevenzzzz (Contributor) commented Sep 10, 2025

Could you clarify what gap this PR is trying to cover?
IIUC, when HTTP body chunks are accepted by Envoy, they are sent as-is to the ext_proc server. If the downstream is a gRPC server, gRPC messages (following gRPC framing) are sent via the HTTP body; by the time they arrive at Envoy, the HTTP data chunks may no longer align with message boundaries, but if Envoy sends them as-is (STREAMED, BUFFERED), the ext_proc server should be able to understand and unpack the data by following the gRPC framing.

I don't quite follow why the unpacking and repacking at Envoy is necessary.

@stevenzzzz (Contributor)

@adisuissa @yanjunxiang-google FYI

@stevenzzzz (Contributor)

/assign @yanjunxiang-google

@markdroth (Contributor, Author)

@stevenzzzz For an explanation of why the GRPC processing mode makes sense, see my pending gRFC for ext_proc support in gRPC:

https://github.com/grpc/proposal/blob/634cfec6f18bf12d09948faa94b69b2b21d9c259/A93-xds-ext-proc.md#payload-handling

// However, if the half-close happens after the last message on the
// stream was already sent, then this field will be true to indicate an
// end-of-stream with *no* message (as opposed to an empty message).
bool end_of_stream_without_message = 3;
@yanjunxiang-google (Contributor) Sep 29, 2025

This makes the protocol very complicated, with Envoy and gRPC doing different things. Can we have a separate API for gRPC?

@markdroth (Contributor, Author)

All this is really doing is indicating the difference between unset and empty for the body field. I would have preferred to change the type of the body field to use a wrapper message similar to google.protobuf.StringValue, but that would be a breaking change.

I don't see why this field specifically adds a lot of complexity. I agree that there is a bit of work that needs to happen to support the GRPC body send mode in general, but this field specifically doesn't add very much complexity on top of that at all.

Note that there is no proposal here for Envoy and gRPC to do different things. This PR is about how to handle gRPC traffic, regardless of which data plane is handling that traffic. We would want Envoy to support this body send mode as well, since Envoy does often handle gRPC traffic.

I think that inventing a completely new ext_proc API for gRPC traffic doesn't make sense. The goal here is to leverage the existing ext_proc API, not build a competitor to it.

Contributor

In gRPC mode, is an empty body with end_of_stream == true distinct from an empty body with end_of_stream_without_message == true? If so, how?

@markdroth (Contributor, Author)

In gRPC, the client may send a half-close without sending a message, or long after the last message was sent. On the wire, that's represented as an HTTP/2 DATA frame that is empty but has the EOS bit set. But note that an empty HTTP/2 DATA frame is not the same as sending an empty gRPC message -- if there was another message sent, even if that message was empty, the empty message would still be framed within the HTTP/2 DATA frame.

In other words, there are 3 possible cases for a DATA frame with the EOS bit set:

  1. DATA frame with size 0 and EOS. This means that the client sent a half-close without sending a message.
  2. DATA frame with size 5 and EOS, where the contents are a gRPC message frame indicating a 0-length message. This means that the client has sent an empty message and EOS.
  3. DATA frame with size 15 and EOS, where the contents are a gRPC message frame indicating a 10-byte message. This means that the client has sent a non-empty message and EOS.

Note that on the wire between gRPC client and server, the difference between cases 1 and 2 is whether the gRPC framing is present. However, in GRPC send mode, we are not including the gRPC framing when we send the message to the ext_proc server; we're sending only the deframed messages. This means that we need a way to differentiate between cases 1 and 2, which is what this field provides.
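
For illustration only, here is a minimal sketch (in Go, with a hypothetical stand-in struct for the generated ext_proc body message; only the body, end_of_stream, and end_of_stream_without_message fields discussed in this thread are modeled) of how a data plane might map those three cases:

package sketch

// HttpBodyMsg is a hypothetical stand-in for the generated ext_proc body
// message; only the three fields discussed above are modeled.
type HttpBodyMsg struct {
    Body                      []byte
    EndOfStream               bool
    EndOfStreamWithoutMessage bool
}

// bodyForEOS builds the final body message for a DATA frame with the EOS bit
// set. msg is the deframed gRPC message carried in that frame; present is
// false when the frame carried no message at all (case 1 above).
func bodyForEOS(msg []byte, present bool) HttpBodyMsg {
    if !present {
        // Case 1: half-close with no message (not the same as an empty message).
        return HttpBodyMsg{EndOfStream: true, EndOfStreamWithoutMessage: true}
    }
    // Cases 2 and 3: a zero-length or non-empty deframed message, plus EOS.
    return HttpBodyMsg{Body: msg, EndOfStream: true}
}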

@KBaichoo (Contributor)

/wait

// is set to true.
envoy.extensions.filters.http.ext_proc.v3.ProcessingMode mode_override = 9;

// [#not-implemented-hide:]
Contributor

The Envoy data plane has this planned and is still working out the details. The behavior might be slightly different from the gRPC client.

@markdroth (Contributor, Author)

Good to know. I think we should handle this the same way for both FULL_DUPLEX_STREAMED and GRPC modes, so if you have a slightly different approach in mind, I'm happy to discuss changes here.

@markdroth (Contributor, Author)

/retest

@stevenzzzz (Contributor)

Hi Mark, can we clarify where the deframing and re-framing of gRPC messages should happen with the new mode? Folks were thinking that's something that should be built into the gRPC client underneath the ext_proc grpc_service.

But IIUC, your idea is that:

  1. the ext_proc filter itself needs to deframe the gRPC messages in its onData call, which may involve
    1. buffering several HTTP body chunks to get the full message, i.e. N onData() calls ==> 1 ext_proc gRPC call; or
    2. splitting one HTTP body chunk into multiple gRPC messages, each with its own ext_proc call, i.e. 1 onData() ==> N ext_proc calls?

If the goal is "In GRPC mode, as each gRPC message arrives, it will be sent to the external processor", one option might be to have a gRPC deframing filter that handles the deframing work, and a re-framing filter at the tail of the HTTP filter chain, such that we can sandwich all the other filters in between; they would need minimal changes and would see a full gRPC message per decodeData/encodeData call?

This gRPC de-framing/re-framing could even be built into the filter manager, such that it's configurable via the HCM and no L7 filter would need to worry about the state-machine bookkeeping anymore?

@markdroth (Contributor, Author)

Hi Mark, can we clarify where the deframing and re-framing of gRPC messages should happen with the new mode? Folks were thinking that's something that should be built into the gRPC client underneath the ext_proc grpc_service.

I don't see how it can possibly work at that layer. By the time that the client sees the data, we've already constructed the ProcessingRequest message, so it's too late to do any of the buffering or deframing.

But IIUC, your idea is that:

  1. the ext_proc filter itself needs to deframe the gRPC messages in its onData call, which may involve

    1. buffering several HTTP body chunks to get the full message, i.e. N onData() calls ==> 1 ext_proc gRPC call; or
    2. splitting one HTTP body chunk into multiple gRPC messages, each with its own ext_proc call, i.e. 1 onData() ==> N ext_proc calls?

Yes, that's correct.

If the goal is "In GRPC mode, as each gRPC message arrives, it will be sent to the external processor", one option might be to have a gRPC deframing filter that handles the deframing work, and a re-framing filter at the tail of the HTTP filter chain, such that we can sandwich all the other filters in between; they would need minimal changes and would see a full gRPC message per decodeData/encodeData call?

Yes, that's an option -- that's one of the ideas that @yanavlasov mentioned when we met earlier today. But I think there are a couple of down-sides to that approach, because it would be configured independently of ext_proc but would directly affect the way that ext_proc actually works. Specifically:

  • When the ext_proc server sees the request, the request_body_mode will not provide any indication that it's seeing the deframed gRPC message instead of the raw DATA frame contents. This means that there's no safe way to transition between the two modes in an ext_proc server.
  • If gRPC treats the gRPC deframing and re-framing filters as no-ops, then an xDS config that includes all three filters will behave the same way in both Envoy and gRPC. However, an xDS config that includes only the ext_proc filter without the gRPC deframing and re-framing filters will behave differently between Envoy and gRPC. This effectively means that from an xDS API perspective, the ext_proc config is under-specified, because it is not enough by itself to determine the actual ext_proc behavior. Speaking as an xDS API shepherd, I don't think that's acceptable.

I think that adding the GRPC body send mode avoids both of those problems.

This gRPC de-framing/re-framing could even be built into the filter manager, such that it's configurable via the HCM and no L7 filter would need to worry about the state-machine bookkeeping anymore?

I think this has the same problems as the gRPC framing and de-framing filter approach, and it introduces the additional problem of an HCM-level configuration that would cause behavior to differ across Envoy and gRPC.

@stevenzzzz (Contributor)

  • When the ext_proc server sees the request, the request_body_mode will not provide any indication that it's seeing the deframed gRPC message instead of the raw DATA frame contents. This means that there's no safe way to transition between the two modes in an ext_proc server.

This can be added in Envoy as a configurable switch, and the state can be latched onto StreamInfo or something else, which could serve as the indicator. I think this covers your ask here: ext_proc sees a whole gRPC message per decodeData/encodeData call in the payload.
This should also make it work in the proxyless gRPC case, where ext_proc only needs to care about sending the RPC message, right?
I don't quite follow the "transition between the two modes in an ext_proc server" part. With de-framing/re-framing done before data enters the filter chain, ext_proc doesn't need "two modes" anymore.

If gRPC treats the gRPC deframing and re-framing filters as no-ops, then an xDS config that includes all three filters will behave the same way in both Envoy and gRPC. However, an xDS config that includes only the ext_proc filter without the gRPC deframing and re-framing filters will behave differently between Envoy and gRPC. This effectively means that from an xDS API perspective, the ext_proc config is under-specified, because it is not enough by itself to determine the actual ext_proc behavior. Speaking as an xDS API shepherd, I don't think that's acceptable.

Could you clarify, preferably with a concrete foo-message example?

I thought the reason we need this change in ext_proc is that Envoy, as an HTTP server/proxy, doesn't understand gRPC framing, so in ext_proc's decodeData the data chunk contains gRPC-framed gRPC messages, and we want the body chunk deframed into concrete gRPC messages to be sent as ProcessingRequest::request_body.

While in gRPC, the message is already a concrete gRPC message that we can send as-is in the ProcessingRequest::request_body field to the external processing server?

@markdroth (Contributor, Author)

I don't quite follow the "transition between the two modes in an ext_proc server" part. With de-framing/re-framing done before data enters the filter chain, ext_proc doesn't need "two modes" anymore.

Let's say that an existing ext_proc server is using FULL_DUPLEX_STREAMED mode and wants to migrate to GRPC mode. The process would be something like this:

  1. Add support for GRPC mode in the ext_proc server. (Note that the ext_proc request already tells the server what body send mode the client is using, so during this migration, the server can tell how it's supposed to handle each ext_proc stream.)
  2. Change the configuration of all data planes to use GRPC mode.
  3. Once all data planes have been reconfigured, it will be safe to remove support for other body send modes from the ext_proc server.

That migration is not possible if the ext_proc server cannot tell from the request whether the body data is raw HTTP/2 DATA frames or deframed gRPC messages. We need the ext_proc request to indicate to the ext_proc server which type of body data is being sent.
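
For concreteness, a rough sketch of step 1 from the server's point of view (Go; the helper names and the way the mode is surfaced to the handler are illustrative assumptions, since the exact request field is not shown in this thread):

package sketch

// bodySendMode is whatever body send mode the ext_proc request advertises for
// this stream; the exact proto field isn't shown here, so it's passed in as a
// plain parameter for illustration.
type bodySendMode int

const (
    modeFullDuplexStreamed bodySendMode = iota
    modeGRPC
)

// handleBodyChunk dispatches one body message during the migration window,
// when the server must handle both modes at once (step 1 above).
func handleBodyChunk(mode bodySendMode, body []byte) {
    switch mode {
    case modeGRPC:
        // body is one deframed gRPC message; there is no gRPC framing to strip.
        processGRPCMessage(body)
    case modeFullDuplexStreamed:
        // body is raw DATA-frame bytes; the server must parse the gRPC framing itself.
        processRawChunk(body)
    }
}

func processGRPCMessage(msg []byte) { /* application-specific */ }
func processRawChunk(chunk []byte)  { /* application-specific */ }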

Could you clarify, preferably with a concrete foo-message example?

Consider the following xDS filter chain:

  1. gRPC deframing filter
  2. ext_proc filter
  3. gRPC re-framing filter

If gRPC treats (1) and (3) as no-ops, then the above xDS configuration will work the same on both gRPC and on Envoy: both data planes will use the deframed gRPC messages in the ext_proc communication. So this configuration works fine.

However, now consider a filter chain that has only the ext_proc filter -- i.e., it does not have (1) or (3). In gRPC, because (1) and (3) are no-ops, the ext_proc filter will still use deframed gRPC messages in the ext_proc communication. However, because (1) and (3) are not no-ops in Envoy, Envoy would send raw HTTP/2 DATA frames in the ext_proc communication. This means that this configuration will result in different behavior in gRPC than in Envoy, which defeats the entire purpose of xDS as a common data plane API.

@stevenzzzz (Contributor)

However, now consider a filter chain that has only the ext_proc filter -- i.e., it does not have (1) or (3). In gRPC, because (1) and (3) are no-ops, the ext_proc filter will still use deframed gRPC messages in the ext_proc communication. However, because (1) and (3) are not no-ops in Envoy, Envoy would send raw HTTP/2 DATA frames in the ext_proc communication. This means that this configuration will result in different behavior in gRPC than in Envoy, which defeats the entire purpose of xDS as a common data plane API.

Are you suggesting that we have to configure the gRPC server and Envoy with the same filter chain? We could probably avoid that.

The proposal I made is to move the deframing into a place like source/common/http/conn_manager_impl.cc?l=1570-1578. It's post-headers, so we already know whether the stream is gRPC or plain HTTP, and we also know the service/cluster settings; we can do the deframing there and plumb the gRPC messages one by one as body chunks into the filter chain.
This way, there is no need for (1) and (3) either.
It makes Envoy a "fully functioning" gRPC proxy. And most importantly, all existing ext_proc filters don't need to care about the aforementioned de-framing/re-framing state bookkeeping.

@markdroth (Contributor, Author)

Are you suggesting that we have to configure the gRPC server and Envoy with the same filter chain? We could probably avoid that.

I'm saying that the xDS API for configuring the ext_proc filter has to have the same semantic meaning to any data plane -- i.e., it needs to work the same whether it's sent to Envoy or gRPC. This is a requirement of the xDS API; it's not something we can just ignore.

@stevenzzzz (Contributor)

I don't see the disconnect here, if the ext_proc filter, with this proposal, sees exactly the same thing -- one gRPC message at a time per decodeData/encodeData() call -- in both the gRPC server and in Envoy.
Can you help me understand the concern here?

@yanavlasov as well.

@markdroth (Contributor, Author)

I don't see the disconnect here, if the ext_proc filter, with this proposal, sees exactly the same thing -- one gRPC message at a time per decodeData/encodeData() call -- in both the gRPC server and in Envoy.
Can you help me understand the concern here?

Okay, let me try saying this a different way.

No matter what approach we take here, there are two things that are required:

  1. There needs to be some configuration knob somewhere that dictates whether gRPC deframing happens.
  2. gRPC needs a way to reject a config where the gRPC deframing is not configured.

(2) is necessary to ensure that the same configuration does not behave differently in different data planes. If gRPC cannot reject the config when gRPC deframing is not configured, then that configuration will produce different results in different data planes (Envoy will send raw HTTP/2 DATA frames, and gRPC will send deframed gRPC messages). This is not acceptable from an xDS API perspective.

In this PR, the knob for (1) is directly in the ext_proc filter config as the body send mode, and if the xDS configuration sets the body send mode to something other than GRPC, gRPC will reject the config. So the only way that a single xDS config can be used for both Envoy and gRPC is if the body send mode is GRPC, and in that mode the behavior will be the same.

In your proposal, the knob for (1) will be somewhere other than the ext_proc filter config. It will either be in a separate set of gRPC deframing and re-framing filters, or it will be a knob in the HCM config itself. In either case, there is no way for gRPC to reject the config if those knobs are not set. We cannot reject a config just because the gRPC deframing and re-framing filters are not present, because those filters aren't needed at all without the ext_proc filter, and they're not required today. And adding a new knob to the HCM config won't help, because that knob doesn't exist today, and if gRPC suddenly started rejecting a config that doesn't have that knob set, we'd break backward compatibility.

Therefore, your proposal inherently means that if the xDS config does not enable gRPC deframing, gRPC will silently accept it and behave differently, rather than being able to reject it. That is not acceptable from an xDS API perspective.
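
To make the NACK behavior concrete, here is a minimal sketch (Go; the type and function names are hypothetical, not part of this PR or of gRPC's actual config code) of the validation a proxyless gRPC data plane would apply under this proposal:

package sketch

import "errors"

// BodySendMode mirrors the relevant ProcessingMode body send mode values.
type BodySendMode int

const (
    BodySendModeStreamed BodySendMode = iota
    BodySendModeFullDuplexStreamed
    BodySendModeGRPC
)

// validateExtProcForGRPC sketches the rejection described above: a gRPC data
// plane accepts the ext_proc config only when the body send mode is GRPC, so
// the same xDS config cannot silently behave differently across data planes.
func validateExtProcForGRPC(requestBodyMode, responseBodyMode BodySendMode) error {
    if requestBodyMode != BodySendModeGRPC || responseBodyMode != BodySendModeGRPC {
        return errors.New("NACK: proxyless gRPC supports only the GRPC body send mode")
    }
    return nil
}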

@stevenzzzz (Contributor)

When you say "xDS configuration sets the body send mode to something other than GRPC, gRPC will reject the config", is that in gRPC (proxyless gRPC), or in Envoy?

I think there is some confusion.
Bear with me. It'd be nice if we could have an example of the use case we are trying to solve with this proposal.

IIUC, your request is to send ext_proc requests to a second external processor while also talking to the normal xDS server; both the external processor and the xDS server are gRPC servers. The external processor speaks the ProcessingRequest/Response protos. The normal xDS server speaks the usual xDS resources/protos.

In Google, this config-path xDS channel in Envoy uses the Google gRPC implementation to connect to the remote external processor; everything is already gRPC, and there is no HTTP server involved. If we inject an ext_proc filter in between (I don't know how), it would see gRPC messages in protobuf form only.
I don't know the situation on the proxyless gRPC side, though; I'm not sure whether there would be a full HTTP server in the gRPC client in order to use the ext_proc filter.

Please let me know what I missed here.

My proposal was purely about the data path: for ext_proc, an HTTP filter, to support deframed gRPC messages as the payload in decodeData()/encodeData() when talking to an external processor that speaks the ProcessingRequest proto, with the payload being concrete, deframed gRPC messages, such that the remote external processor, when processing the protobuf, can treat the body chunk in ProcessingRequest::request_body as gRPC messages directly.

But for the config path, I think there is no such need; the payload will already be gRPC messages.

@markdroth (Contributor, Author)

@stevenzzzz I think we discussed most of this offline. But basically, the issue is that it's fine for there to be configs that work for only one of the two data planes, as long as the other data plane can NACK it. The problem with putting the knob in a place other than the ext_proc filter config is that gRPC would not be able to NACK if the knob is not set, since that would break backward compatibility.


// This message is sent to the external server when the HTTP request and
// response bodies are received.
message HttpBody {
@leonm1 (Contributor) Oct 3, 2025

This message is named HttpBody, but it does not contain any bytes or metadata related to HTTP in GRPC mode. It seems as though we should instead have a message like the following:

message GrpcMessage {
  google.protobuf.Any payload = 1;

  // End-of-stream booleans
}

This would also avoid the pitfall of a GRPC-specific boolean in the HttpBody message.

@markdroth (Contributor, Author)

Even in the new GRPC body send mode, the messages will still be sent as serialized bytes. They can't be sent as Any fields, because that requires encoding the protobuf type, and the ext_proc filter does not know the protobuf type to populate that. (This is true in both Envoy and gRPC.)

As I mentioned in another comment thread, I agree that the end_of_stream_without_message field is a little ugly, but I think the ideal solution would have been to simply use a wrapper type for the body field so that we could tell the difference between empty and unset. I don't think the two cases are really different enough to warrant a different message.

More generally, I do agree that there's a lot of tech debt in this API, and I wish it had been thought out better from the get-go rather than just evolving in place over time. But I don't think that's something we can fix at this point without giving up and creating a whole new API, which would fracture the ecosystem.

@paul-r-gall (Contributor) Oct 7, 2025

Adding an Any means that somewhere the Envoy ext_proc filter's gRPC parsing code needs to be able to build the proto in question, which can only happen if that proto definition is linked into the Envoy binary. Rather than google.protobuf.Any, it should probably just be bytes serialized_message = 1.

@paul-r-gall (Contributor) Oct 7, 2025

Ah, sorry, I responded to Matt without seeing Mark's response. I basically agree with Mark's comment.

}

// The body response message corresponding to FULL_DUPLEX_STREAMED body mode.
// The body response message corresponding to ``FULL_DUPLEX_STREAMED`` or ``GRPC`` body modes.
Contributor

Each field in this message now has a comment above it to indicate that it means semantically different things based on a field contained in a different message.

This is saying to me that it belongs as a different message in a oneof at a higher level.

@markdroth (Contributor, Author)

I don't think they're really all that semantically different. The body field still contains the relevant part of the body, and the end_of_stream field always indicates EOS.

If I created another message here, it would look almost identical to this one, and I think that would cause more confusion than it would provide benefit at this point.

@yanavlasov self-assigned this Oct 7, 2025
@paul-r-gall (Contributor) commented Oct 7, 2025

@markdroth so gRPC framing goes <compression bit><message length (4 bytes)><compressed message> ... (repeat) ..., correct? Should the bytes sent to the Ext Proc server in gRPC mode be <compressed message> or uncompressed serialized message?

@markdroth (Contributor, Author)

@markdroth so gRPC framing goes <compression bit><message length (4 bytes)><compressed message> ... (repeat) ..., correct? Should the bytes sent to the Ext Proc server in gRPC mode be <compressed message> or uncompressed serialized message?

Thanks for bringing this up! I'd actually forgotten that the compression bit was in there.

This unfortunately means that the story is not quite identical between Envoy and gRPC if compression is enabled. In gRPC, the ext_proc filter will be above (further from the network than) the compression/decompression code, which means that ext_proc will always see the uncompressed message -- sent messages won't be compressed until after the ext_proc filter, and received messages will be decompressed before the ext_proc filter. But in Envoy, it may see either compressed or decompressed messages, depending on whether the downstream client or upstream server compressed the message before sending it. So I've added a bit to both the ext_proc request and response to indicate whether the message is compressed. Envoy can set that bit based on the corresponding bit in the gRPC framing header, but gRPC will never set the bit, and gRPC will reject ext_proc responses if the bit is set.

I'm not thrilled with this difference, but I don't think there's a reasonable way to avoid it. The only way to completely avoid this would be to have Envoy decompress messages before sending to the ext_proc server and then recompress the messages that come back. But that would be fairly inefficient, because in the common case where the message is passed through as-is, it's more efficient to send it to the ext_proc server compressed, so that nothing has to re-compress it. Also, the reason that individual messages may not be compressed is that it is insecure to compress sensitive payloads, as it provides a side-channel to determine the unencrypted contents, so even if we wanted Envoy to do this, we would need yet another knob to indicate whether a given message should be re-compressed. I think this would wind up being too much complexity to add to Envoy.

Note that this problem won't come up at all if compression isn't enabled, which is the default in gRPC, so hopefully this won't cause too much confusion.
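
To make the framing concrete: the prefix is one compressed-flag byte followed by a 4-byte big-endian length, so the per-message deframing a data plane would do in GRPC mode looks roughly like the following sketch (Go; buffering of partial frames across DATA chunks is left out):

package sketch

import (
    "encoding/binary"
    "fmt"
)

// deframe extracts one gRPC length-prefixed message from buf.
// Prefix layout: 1 compressed-flag byte, then a 4-byte big-endian length.
// Real code must buffer partial frames across HTTP/2 DATA chunks.
func deframe(buf []byte) (compressed bool, msg, rest []byte, err error) {
    if len(buf) < 5 {
        return false, nil, buf, fmt.Errorf("incomplete frame prefix")
    }
    compressed = buf[0] == 1
    length := binary.BigEndian.Uint32(buf[1:5])
    if uint64(len(buf)-5) < uint64(length) {
        return false, nil, buf, fmt.Errorf("incomplete message body")
    }
    // msg is the (possibly compressed) serialized message; in GRPC send mode it
    // would be what goes into the body field, with the compressed flag surfaced
    // separately as described above.
    return compressed, buf[5 : 5+length], buf[5+length:], nil
}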

@botengyao (Member)

This PR seems ready; oncall-ping @yanavlasov @abeyad @yanjunxiang-google @stevenzzzz @paul-r-gall for any additional reviews.

@abeyad (Contributor) left a comment

/lgtm api

The API changes themselves look good to me, but I defer to @yanavlasov and @yanjunxiang-google for ext_proc API expertise.

@yanavlasov (Contributor) left a comment

/wait-any

// messages sent on the stream.
// In this mode, the client will send body and trailers to the server as
// they arrive.
GRPC = 5;
Contributor

After thinking more and looking at other ext_proc implementations, I think it would be preferable to use a separate field to indicate the content type and leave the body sending mode as is. Proxyless gRPC would then use content_type: GRPC and ..._body_mode: FULL_DUPLEX_STREAMED. Proxyless gRPC does need to implement other modes like BUFFERED or STREAMED.

The reason is that we want to add support for other content types, such as JSON-RPC, and we want to be able to send them in all modes, most importantly in BUFFERED mode, since some ext_proc servers only support that mode and it will delay adoption if we have to force implementations to use FULL_DUPLEX_STREAMED to be able to process other content types.

If we put the GRPC content type in as a send mode, then we have to add a content-type field for JSON-RPC anyway and then have to explain that it works in all cases except GRPC mode, which is not great.

Using content_type: GRPC and ..._body_mode: FULL_DUPLEX_STREAMED will work exactly the same way as the GRPC body send mode, without limiting implementations from using other send modes, either for gRPC or for other protocols we are adding.
