
Multi-threaded evaluation? #457

Open
derrickburns opened this issue Sep 23, 2020 · 2 comments

Comments

@derrickburns

Since Jsonnet is a pure functional language, there are many ways to optimize execution that are not available to languages with side effects. One idea is to parallelize functions like std.map. Have you considered this?

Alternatively, have you considered adding parallel versions of existing standard library functions, e.g. std.pmap as a parallel version of std.map?

This is not a theoretical request. I have 140+ different Jsonnet objects to materialize. Materializing them serially takes minutes; I have resorted to parallelizing the materialization myself, which brings the time down to tens of seconds. Tools like Tanka don't offer any way to parallelize materialization either. Supporting this in go-jsonnet at the standard library level would be a great performance improvement.
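The external parallelization described above can also be done at the driver level in Go, the host language of go-jsonnet. Below is a minimal worker-pool sketch; `evaluate` is a hypothetical stand-in for a real call into go-jsonnet (e.g. constructing a VM with `jsonnet.MakeVM()` inside each worker, since sharing one VM across goroutines is not safe):

```go
package main

import (
	"fmt"
	"sync"
)

// materializeAll evaluates many independent documents concurrently.
// evaluate is a stand-in for a real go-jsonnet call; in practice each
// worker would own its own VM.
func materializeAll(inputs []string, workers int, evaluate func(string) string) []string {
	out := make([]string, len(inputs))
	jobs := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				out[i] = evaluate(inputs[i]) // results keep input order
			}
		}()
	}
	for i := range inputs {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	return out
}

func main() {
	// Hypothetical stand-in evaluator; a real driver would run jsonnet here.
	quote := func(s string) string { return fmt.Sprintf("%q", s) }
	fmt.Println(materializeAll([]string{"a", "b", "c"}, 2, quote))
}
```

Because the outputs are independent, this scales with the number of workers in the same way the bash approach does, but keeps everything in one process.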

@sbarzowski
Collaborator

sbarzowski commented Sep 23, 2020

Thanks for the suggestion @derrickburns. I agree that there is an opportunity there. The tricky parts are, of course, caching and choosing the parallelization points. The latter is especially important if the cache were not shared between threads at all (each thread would operate on a snapshot taken before the split, or something similar).

Your suggestion to give the user responsibility for choosing the parallelization points makes a lot of sense. Semantically, std.pmap would be identical to std.map, but it could serve as a parallelization hint for the interpreter.

This is definitely something worth exploring.
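The "same semantics, parallel evaluation" idea can be illustrated outside Jsonnet: for a pure function, a parallel map that preserves element order is observably identical to a serial map, which is what would let std.pmap be a mere hint. A minimal Go sketch (`pmap` is a hypothetical name, not a go-jsonnet API):

```go
package main

import (
	"fmt"
	"sync"
)

// pmap applies f to every element concurrently and returns the results in
// the original order. For a pure f this is indistinguishable from a serial
// map, which is the property that makes a std.pmap hint sound.
func pmap(f func(int) int, xs []int) []int {
	out := make([]int, len(xs))
	var wg sync.WaitGroup
	for i, x := range xs {
		wg.Add(1)
		go func(i, x int) {
			defer wg.Done()
			out[i] = f(x)
		}(i, x)
	}
	wg.Wait()
	return out
}

func main() {
	square := func(x int) int { return x * x }
	fmt.Println(pmap(square, []int{1, 2, 3, 4, 5})) // prints [1 4 9 16 25]
}
```

In a real interpreter the hard part is not this loop but what the comment thread points at: sharing (or snapshotting) the lazy-evaluation cache between the workers.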

That said, there is still a lot of room for single-threaded optimization. We have improved performance by orders of magnitude in many cases over the last two years, and we still have a lot of ideas for significant improvements. Admittedly, it was terribly slow before and now it's just slow :-).

@derrickburns
Author

@sbarzowski Thank you for your quick response. I appreciate the work that you are doing to improve performance.

In the interim, I will continue parallelizing the materialization with bash (gasp!).
