Labels: high priority, module: dependency bug (problem is not caused by us, but caused by an upstream library we use)
Description
scatter_ reads past the end of the source tensor when the index tensor has more elements than the source, e.g.:
>>> torch.zeros(3,4).scatter_(0,torch.LongTensor([[0,0,0,0],[1,1,1,1],[2,2,2,2]]),torch.arange(1,9).resize_(2,4))
1.0000e+00 2.0000e+00 3.0000e+00 4.0000e+00
5.0000e+00 6.0000e+00 7.0000e+00 8.0000e+00
2.7716e+20 7.2128e+22 4.4653e+30 7.2708e+31
[torch.FloatTensor of size 3x4]
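The last row is garbage because the index tensor requests 12 scatter positions while the source only holds 8 elements. A minimal sketch of that mismatch (illustrative only, not part of the original repro):

import torch

# Same shapes as the repro: 3x4 index tensor, 2x4 source tensor.
index = torch.LongTensor([[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]])
src = torch.arange(1, 9).resize_(2, 4)

print(index.numel())  # 12 scatter positions requested by the index tensor
print(src.numel())    # only 8 source elements backing them -> 4 out-of-bounds reads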
valgrind reports reading off the end of the tensor:
==2959389== Invalid read of size 4
==2959389== at 0x210EE636: THFloatTensor_scatter (in /data/users/gchanan/pytorch6/torch/lib/libTH.so.1)
==2959389== by 0x169365A8: THPFloatTensor_scatter_(_object*, _object*, _object*) (TensorMethods.cpp:3493)
==2959389== by 0x4F00901: _PyCFunction_FastCallDict (methodobject.c:231)
==2959389== by 0x4F85F4B: call_function (ceval.c:4788)
==2959389== by 0x4F88BBC: _PyEval_EvalFrameDefault (ceval.c:3275)
==2959389== by 0x4F844BF: _PyEval_EvalCodeWithName (ceval.c:4119)
==2959389== by 0x4F84942: PyEval_EvalCodeEx (ceval.c:4140)
==2959389== by 0x4F8498A: PyEval_EvalCode (ceval.c:695)
==2959389== by 0x4FB9345: run_mod (pythonrun.c:980)
==2959389== by 0x4FB9345: PyRun_InteractiveOneObject (pythonrun.c:233)
==2959389== by 0x4FB96AD: PyRun_InteractiveLoopFlags (pythonrun.c:112)
==2959389== by 0x4FB97EB: PyRun_AnyFileExFlags (pythonrun.c:74)
==2959389== by 0x4FD3BA8: run_file (main.c:320)
==2959389== by 0x4FD3BA8: Py_Main (main.c:781)
==2959389== Address 0x12f19840 is 0 bytes after a block of size 32 alloc'd
==2959389== at 0x4C26B0F: malloc (in /usr/local/fbcode/gcc-5-glibc-2.23/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2959389== by 0x2106CC79: THAlloc (in /data/users/gchanan/pytorch6/torch/lib/libTH.so.1)
==2959389== by 0x2106E037: THFloatStorage_resize (in /data/users/gchanan/pytorch6/torch/lib/libTH.so.1)
==2959389== by 0x21087194: THFloatTensor_resize4d (in /data/users/gchanan/pytorch6/torch/lib/libTH.so.1)
==2959389== by 0x2110DEF6: THFloatTensor_range (in /data/users/gchanan/pytorch6/torch/lib/libTH.so.1)
==2959389== by 0x16981F53: THPFloatTensor_stateless_arange(_object*, _object*, _object*) (TensorMethods.cpp:3428)
==2959389== by 0x4F00CD8: PyCFunction_Call (methodobject.c:98)
==2959389== by 0x4EA9CE5: PyObject_Call (abstract.c:2246)
==2959389== by 0x167EEAB1: THPUtils_dispatchStateless(_object*, char const*, _object*, _object*) (utils.cpp:154)
==2959389== by 0x167C221F: THPModule_arange(_object*, _object*, _object*) (Module.cpp:300)
==2959389== by 0x4F00901: _PyCFunction_FastCallDict (methodobject.c:231)
==2959389== by 0x4F85F4B: call_function (ceval.c:4788)
==2959389==
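For comparison, a call whose source tensor matches the index tensor's shape stays in bounds. This is a minimal sketch using the current torch.tensor/reshape API rather than the 0.x constructors shown above:

import torch

# Source sized to match the index tensor, so scatter_ never reads past the storage.
index = torch.tensor([[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]])
src = torch.arange(1., 13.).reshape(3, 4)   # 12 elements, same shape as index
out = torch.zeros(3, 4).scatter_(0, index, src)
print(out)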