Implement reference semantics for all Tensors #160
Closed
The current value semantics (copy-on-assignment) are not good enough performance-wise: avoiding copies requires `unsafe` procs all over the place, which is not ergonomic at all.
I tried copy-on-write as well; there is an in-depth discussion in issue #157. COW can be implemented with atomic reference counting or a shared/non-shared boolean, but it has a few problems, detailed there:
- Refcounting / `isShared` troubles when the tensor is wrapped in a container
- Performance predictability: "when will it copy or share?" is harder to answer than with always-copy or always-share
- Workaroundability: since `=` is overloaded, it is non-trivial to opt out of COW; `let a = b.unsafeView` won't work
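The predictability problem can be sketched with a toy Python class (a hypothetical `CowTensor`, not Arraymancer code, assuming the shared-flag variant of COW): whether a write is O(1) or O(n) depends on the sharing history, not on the call site.

```python
class CowTensor:
    """Toy copy-on-write container: copies its buffer on the first
    write after being shared (hypothetical, for illustration only)."""

    def __init__(self, data):
        self.data = list(data)
        self.shared = False  # set when another CowTensor aliases self.data

    def view(self):
        """Cheap O(1) share: both tensors now alias one buffer."""
        other = CowTensor.__new__(CowTensor)
        other.data = self.data
        other.shared = True
        self.shared = True
        return other

    def __setitem__(self, i, value):
        # The hidden cost: this write is O(1) or O(n) depending on
        # whether the buffer was ever shared.
        if self.shared:
            self.data = list(self.data)  # full copy before writing
            self.shared = False
        self.data[i] = value

a = CowTensor([1, 2, 3])
a[0] = 10        # O(1): not shared, writes in place
b = a.view()     # O(1): share the buffer
a[0] = 20        # O(n): silently triggers a copy; b is unaffected
print(a.data, b.data)  # [20, 2, 3] [10, 2, 3]
```

Two syntactically identical assignments (`a[0] = 10` and `a[0] = 20`) have different costs, which is exactly the "when will it copy or share" complaint.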
So Arraymancer will move to reference semantics: Tensor data is shared by default, and copies must be made explicit.
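NumPy, which the issue cites as a model, shows what share-by-default with explicit copies looks like in practice (plain NumPy here, as an analogy, not Arraymancer code):

```python
import numpy as np

t = np.arange(6)
view = t[1:4]      # shares t's buffer: reference semantics by default
view[0] = 100      # the write is visible through t as well
print(t)           # [  0 100   2   3   4   5]

copied = t[1:4].copy()  # the copy is explicit
copied[0] = -1
print(t[1])             # still 100: t is untouched
```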
Benefits:
- `CudaTensor` already has these semantics
- No need to sprinkle `unsafeSlice` all over the place for performance
- All the `unsafe` procs can be removed: much less code to maintain
- Numpy and Julia already work like this
- Most copies are explicit (except `asContiguous` and `reshape`)
- Debugging copy issues will be just `grep clone *.nim`
Disadvantages:
- Sharing is implicit: users might forget to use `clone` and share data by mistake
- Debugging sharing issues will be harder than `grep unsafe *.nim`
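The "forget to clone" pitfall can be reproduced with NumPy (again as an analogy; `normalize` is a hypothetical user function): skipping the explicit copy silently aliases the caller's data.

```python
import numpy as np

def normalize(x):
    """Buggy: mutates its argument, because x aliases the caller's data."""
    x -= x.mean()   # in-place subtraction writes through the shared buffer
    return x

data = np.array([1.0, 2.0, 3.0])
result = normalize(data)       # oops: data is modified too
print(data)                    # [-1.  0.  1.]

data2 = np.array([1.0, 2.0, 3.0])
result2 = normalize(data2.copy())  # explicit copy keeps data2 intact
print(data2)                   # [1. 2. 3.]
```

Nothing at the call site flags the first call as destructive, which is why such bugs can no longer be found with a simple `grep unsafe`.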
In the wild
Numpy and Julia have reference semantics; Matlab and R have copy-on-write.