Replies: 1 comment 1 reply
Not sure I agree with you on "this would speed things up easily" (both the 'speed' and the 'easy' part 😄), although I'm interested to hear if you have a specific use case in mind. My understanding is that these lower-precision floating point types are great for vectorized (SIMD) operations (i.e. big matrix multiplication), but as the type of an individual stream in csp I don't see much point. Plus, if/when we convert back to a Python float we are back to a 64-bit double anyway. You can propagate numpy arrays through a csp graph like any Python object, so if you are doing heavier ML/linear-algebra based workflows in csp and want to use lower-precision types, you can keep them inside numpy arrays.
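A minimal sketch of that approach, assuming you simply tick float32 numpy arrays through the graph as regular Python objects (the node name `scale` and the input values are illustrative, not part of any csp API for low-precision scalars):

```python
import numpy as np
from datetime import datetime, timedelta

import csp
from csp import ts


@csp.node
def scale(x: ts[np.ndarray], factor: float) -> ts[np.ndarray]:
    # Arithmetic happens on the numpy array and stays in float32;
    # csp just passes the array object along the edge.
    if csp.ticked(x):
        return x * np.float32(factor)


@csp.graph
def my_graph():
    # A float32 array ticking through the graph like any Python object.
    data = csp.const(np.ones(4, dtype=np.float32))
    csp.print("scaled", scale(data, 2.0))


csp.run(my_graph, starttime=datetime(2020, 1, 1), endtime=timedelta(seconds=1))
```

The point is that the precision of the array's dtype is preserved end to end, since csp never unpacks the array into its own scalar types; only individual float/int edges are pinned to 64-bit.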
Hi csp team, I noticed that csp enforces float64 (double) and int64 in the computation core to prioritize numerical stability and consistency. But if we could allow a config to use float32/float16 and int32, which is sufficient for the vast majority of use cases, this would speed things up easily. I know this won't be a trivial task; I just think it's a very natural extension. I want to put the idea forward and see what you think about it.