Conversation

@AngelEzquerra
Contributor
Note: This is a DRAFT because I'd like to reduce the number of functions that are needed. At the moment I had to create many versions of each operator to support all combinations of (Tensor-Tensor / Tensor-Scalar / Scalar-Tensor) x (Real-Real / Real-Complex / Complex-Real).
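To illustrate the combinatorics, here is a minimal sketch of why each operator needs so many overloads. `Vec` is a hypothetical one-field stand-in, not Arraymancer's actual `Tensor` type, and these signatures are an assumption about the shape of the problem rather than the PR's real code:

```nim
import std/complex

# Toy stand-in for a tensor type (hypothetical, NOT Arraymancer's Tensor).
type Vec[T] = object
  data: seq[T]

# Tensor-Tensor with matching element types
# (covers the Real-Real and Complex-Complex cases).
proc `+`[T](a, b: Vec[T]): Vec[T] =
  result.data = newSeq[T](a.data.len)
  for i in 0 ..< a.data.len:
    result.data[i] = a.data[i] + b.data[i]

# Tensor-Scalar, Complex-Real: element-wise Complex[T] + T.
proc `+`[T](a: Vec[Complex[T]], s: T): Vec[Complex[T]] =
  result.data = newSeq[Complex[T]](a.data.len)
  for i in 0 ..< a.data.len:
    result.data[i] = a.data[i] + s

# Scalar-Tensor, Real-Complex: addition commutes, so reuse the overload above.
proc `+`[T](s: T, a: Vec[Complex[T]]): Vec[Complex[T]] =
  a + s

when isMainModule:
  let v = Vec[Complex[float32]](data: @[complex32(1.0'f32, 2.0'f32)])
  assert (v + 0.5'f32).data[0] == complex32(1.5'f32, 2.0'f32)
  assert (0.5'f32 + v).data[0] == complex32(1.5'f32, 2.0'f32)
```

Note that `+` and `*` can reuse the Tensor-Scalar body for the Scalar-Tensor case, but `-` and `/` cannot, since they do not commute, which further multiplies the number of distinct bodies needed.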

This change adds support for (almost) all of the same mixed Complex-Real operations that Nim's `complex` module supports.

This also changes some of the existing mixed ops. In particular, we used to support mixed ops of Complex64 with _any_ kind of number (including integers), but we did not support Complex32-float32 ops. While being able to mix Complex64 with ints (for example) was nice, it was not consistent with Nim's own complex library, and it would have required adding many more operator overloads, so I decided to be consistent at the cost of a little usability in a small number of cases.
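For reference, this is the behaviour of Nim's own `std/complex` that the change aligns with: mixed operations require matching precision (Complex32 with float32, Complex64 with float64), and mixing a complex value with an int is not defined. A small sketch:

```nim
import std/complex

# std/complex supports mixed ops only at matching precision.
let c = complex32(1.0'f32, 2.0'f32)
let x = 0.5'f32

assert c + x == complex32(1.5'f32, 2.0'f32)  # Complex32 + float32
assert x * c == complex32(0.5'f32, 1.0'f32)  # float32 * Complex32

# Mixing Complex64 with an int is not defined by std/complex:
# discard complex64(1.0, 2.0) + 1  # would not compile
```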

@AngelEzquerra force-pushed the more_mixed_complex_tensor_ops branch from 6dd675e to 3147a4f on March 26, 2024 at 09:32.