Add OpenAPI task implementation #804
Conversation
I'm gonna add more tests soon
Since this class is already defined here: https://github.com/serverlessworkflow/sdk-java/blob/main/impl/http/src/main/java/io/serverlessworkflow/impl/executors/http/HttpExecutor.java#L207-L218
I think we should make it public and reuse it here.
```java
      WorkflowContext workflowContext, TaskContext taskContext, WorkflowModel input) {
    String operationId = task.getWith().getOperationId();
    URI openAPIEndpoint = targetSupplier.apply(workflowContext, taskContext, input);
    OpenAPIProcessor processor = new OpenAPIProcessor(operationId, openAPIEndpoint);
    OperationDefinition operation = processor.parse();

    OperationPathResolver pathResolver =
        new OperationPathResolver(
            operation.getPath(),
            application,
            task.getWith().getParameters().getAdditionalProperties());
```
I think all this stuff should be moved to the init method (they are always the same for different executions, so they only need to be processed once).
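As a sketch of that init/apply split (illustrative names only, not the PR's actual classes: the `String` fields stand in for the parsed `OperationDefinition` and the path resolver):

```java
// Hypothetical sketch: expensive, context-independent work runs once in
// init(); apply() keeps only the per-invocation work.
final class InitOnceExecutor {
  private String operation; // stand-in for the parsed OperationDefinition
  int parseCalls;           // counts how often the "parse" actually ran

  void init(String schemaSource) {
    // Simulated one-time parse; in the PR this would be processor.parse().
    parseCalls++;
    operation = schemaSource.toUpperCase();
  }

  String apply(String input) {
    // Per-call work only: the parsed operation is reused, never rebuilt.
    return operation + ":" + input;
  }
}
```

The point is that no matter how many times `apply` runs, the parse happens exactly once.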
```java
    HttpCallAdapter httpCallAdapter =
        new HttpCallAdapter()
            .auth(task.getWith().getAuthentication())
            .body(operation.getBody())
            .contentType(operation.getContentType())
            .headers(
                operation.getParameters().stream()
                    .filter(p -> "header".equals(p.getIn()))
                    .collect(Collectors.toUnmodifiableSet()))
            .method(operation.getMethod())
            .query(
                operation.getParameters().stream()
                    .filter(p -> "query".equals(p.getIn()))
                    .collect(Collectors.toUnmodifiableSet()))
            .redirect(task.getWith().isRedirect())
            .target(pathResolver.resolve(workflowContext, taskContext, input))
            .workflowParams(task.getWith().getParameters().getAdditionalProperties());

    WorkflowException workflowException = null;

    for (var server : operation.getServers()) {
      CallHTTP callHTTP = httpCallAdapter.server(server).build();
      HttpExecutor executor = new HttpExecutor();
      executor.init(callHTTP, workflow, application, resourceLoader);
```
Same here: HttpExecutor.init should be called from OpenApiExecutor.init.
The idea is to keep the apply method as minimal as possible.
I’ve significantly improved the class, but unfortunately I can’t do this in all cases because I need (WorkflowContext workflowContext, TaskContext taskContext, WorkflowModel input), which are only available in the apply() method
I do not understand why you cannot process the schema file in the init method if the endpoint URI of the schema is static.
```java
    taskTimeout.withTaskTimeoutDefinition(timeout);
    TimeoutAfter timeoutAfter = new TimeoutAfter();
    timeout.setAfter(timeoutAfter);
    timeoutAfter.withDurationExpression("PT30S");
```
This should be configurable (or deleted)
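One way to make the hard-coded `PT30S` configurable, as a hedged sketch (the idea of an externally supplied ISO-8601 duration string with a fallback default is an assumption, not an existing SDK setting):

```java
import java.time.Duration;

// Illustrative only: resolve a task timeout from an optional configured
// value, falling back to the previous hard-coded PT30S default.
final class TimeoutConfig {
  static final Duration DEFAULT = Duration.parse("PT30S");

  static Duration resolve(String configured) {
    // Null/blank means "not configured": keep the default.
    return (configured == null || configured.isBlank())
        ? DEFAULT
        : Duration.parse(configured);
  }
}
```

The resolved duration would then feed `timeoutAfter.withDurationExpression(...)` instead of the literal.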
```java
      WorkflowContext workflowContext, TaskContext taskContext, WorkflowModel input) {

    OperationDefinitionSupplier operationDefinitionSupplier =
        new OperationDefinitionSupplier(application, task);
```
OperationDefinitionSupplier should be moved to init.
And I think the parsing of the OpenAPI file can be done in OperationDefinitionSupplier (if I interpreted the specification properly, the schema is a static resource at runtime, not a dynamic one), so you parse the schema file only once, not for every call.
The assumption I'm making here is that the URI of the OpenAPI schema is fixed: https://github.com/serverlessworkflow/specification/blob/main/dsl-reference.md#external-resource
The tricky part is authorization (which refers to authorization to access the schema). Authorization might be based on expressions (and expression evaluation requires WorkflowContext, TaskContext and the model), but I think in that case we should assume that the authorization part is also fixed.
I'm pretty reluctant to parse the OpenAPI file every time the function is called. So, if you prefer to reuse the existing auth code (which certainly and properly requires WorkflowContext and TaskContext), then you should cache the result of the parsing so subsequent calls do not retrieve the file again. But since, in the end, caching so that you access the schema only once already assumes that the auth info remains constant, I think it is worth exploring a refactor of the auth code so that it can be executed assuming there are no expressions in the auth strings.
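The caching idea can be sketched with a thread-safe memoizing supplier (hypothetical helper; in the PR the delegate would wrap the actual `OpenAPIProcessor.parse()` call):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Sketch: run the expensive parse at most once and reuse the result on
// every subsequent call. Names are illustrative, not part of the SDK.
final class MemoizedSupplier<T> implements Supplier<T> {
  private final Supplier<T> delegate;
  private final AtomicReference<T> cache = new AtomicReference<>();

  MemoizedSupplier(Supplier<T> delegate) {
    this.delegate = delegate;
  }

  @Override
  public T get() {
    T value = cache.get();
    if (value == null) {
      value = delegate.get();
      // First caller wins; a racing thread reuses the stored result.
      if (!cache.compareAndSet(null, value)) {
        value = cache.get();
      }
    }
    return value;
  }
}
```

As the comment notes, this trades freshness for speed: it bakes in the assumption that both the schema and the auth info are constant for the workflow's lifetime.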
Or we can say that we access the schema every time the function is called in case the schema has changed, but this is dangerous, because the reality is that the user invoking the schema has defined the workflow (the input and output of the call) based on a particular schema version. It is kind of weird for the schema to change after writing the flow; it would be like compiling a Java program against a dynamically changing Java library. I have raised the point in the specs forums.
So, for now, just move OperationDefinitionSupplier to the init, plus anything else that you detect does not depend on the runtime context (aka: does not require expression evaluation). We can always refactor later assuming a static schema URI.
After checking in the CNCF forums, my assumption that the endpoint is static is wrong.
The endpoint indeed might change.
What I feel we should do then is to use HTTP caching through HEAD requests, to avoid parsing the same file again and again when the content behind the endpoint has not changed.
We can do that in a different issue, but I want to be sure we are aligned on this.
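The HEAD-based revalidation could look roughly like this (a sketch under stated assumptions: the `Fetcher` interface stands in for real HEAD/GET round-trips via `java.net.http.HttpClient`, and the server is assumed to emit stable ETags):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: cache the fetched schema per endpoint, keyed by ETag, and only
// re-download when a cheap HEAD request reports a new ETag.
final class SchemaCache {
  interface Fetcher {
    String etag(String uri); // e.g. the ETag header from a HEAD response
    String body(String uri); // full GET; only issued on a cache miss
  }

  private record Entry(String etag, String schema) {}

  private final Map<String, Entry> cache = new HashMap<>();
  private final Fetcher fetcher;
  int fullFetches; // counts how many full downloads actually happened

  SchemaCache(Fetcher fetcher) {
    this.fetcher = fetcher;
  }

  String get(String uri) {
    String etag = fetcher.etag(uri); // cheap HEAD round-trip
    Entry hit = cache.get(uri);
    if (hit != null && hit.etag().equals(etag)) {
      return hit.schema(); // unchanged: skip the download and re-parse
    }
    fullFetches++; // first access, or the schema really changed
    String schema = fetcher.body(uri);
    cache.put(uri, new Entry(etag, schema));
    return schema;
  }
}
```

A conditional GET with `If-None-Match` would collapse the two round-trips into one, but the cache-invalidation logic stays the same.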
Signed-off-by: Dmitrii Tikhomirov <[email protected]>
What this PR does / why we need it:
Fix #477