Fix atomic.wait, get wasi_ctx exit code and thread mgr issues #2024

Merged
merged 4 commits into bytecodealliance:main on Mar 14, 2023

Conversation

@wenyongh (Contributor) commented Mar 13, 2023

  • Remove notify_stale_threads_on_exception and change atomic.wait
    to be interruptible by repeatedly waiting and re-checking every second,
    like the implementation of poll_oneoff in libc-wasi (see the sketch after this list)
  • Wait for all other threads to exit before getting the wasi exit_code,
    to avoid reading an invalid value
  • Inherit the parent thread's suspend_flags when creating a new thread,
    so the terminated flag is not missed for the new thread
  • Fix the wasi-threads test case update_shared_data_and_alloc_heap
  • Add a "Lib wasi-threads enabled" prompt for cmake
  • Fix AOT exception retrieval by using aot_copy_exception instead
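
For context, here is a minimal, self-contained sketch of the "wake up every second and re-check" pattern described in the first bullet, written with plain pthreads rather than WAMR's internal types; the names (wait_ctx, terminate_requested) are illustrative only, not the runtime's actual fields.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

/* Sketch only: wait_ctx and its fields stand in for the runtime's state. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    atomic_bool terminate_requested; /* set when the cluster is terminating */
    atomic_int value;                /* the shared cell being waited on */
} wait_ctx;

/* Returns true if the value changed (a real wake-up), false if the wait
   was interrupted because termination was requested. */
static bool
interruptible_wait(wait_ctx *ctx, int expect)
{
    pthread_mutex_lock(&ctx->lock);
    while (atomic_load(&ctx->value) == expect) {
        if (atomic_load(&ctx->terminate_requested))
            break; /* stop waiting so the thread can unwind and exit */
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += 1; /* block for at most one second, then re-check */
        pthread_cond_timedwait(&ctx->cond, &ctx->lock, &ts);
    }
    bool changed = (atomic_load(&ctx->value) != expect);
    pthread_mutex_unlock(&ctx->lock);
    return changed;
}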

@g0djan (Contributor) commented Mar 13, 2023

Noticed that every run of the test main_proc_exit_wait with these changes takes about 1 second; after the fix we merged, it was not hanging at all and completed immediately. Is it supposed to be like that?

@g0djan (Contributor) commented Mar 13, 2023

Okay I missed that

Remove notify_stale_threads_on_exception and change atomic.wait
to be interruptible by repeatedly waiting and re-checking every second,
like the implementation of poll_oneoff in libc-wasi

What's the advantage?

@g0djan (Contributor) commented Mar 13, 2023

Ran each internal test 100 times, compiled with the classic interpreter, with this change under TSAN; after filtering out the warnings about "LOAD/STORE", all other data races seem to be gone. Great job!

@@ -60,6 +57,11 @@ main(int argc, char **argv)
    assert(count != NULL && "Failed to call calloc");
    assert(pthread_mutex_init(&mutex, NULL) == 0 && "Failed to init mutex");

    for (int i = 0; i < NUM_THREADS; i++) {
        vals[i] = malloc(sizeof(int *));
Contributor:

Suggested change
vals[i] = malloc(sizeof(int *));
vals[i] = malloc(sizeof(int));

Contributor:

The point of this part of the test was to try heap allocation from the spawned thread; that's why I was putting it in __wasi_thread_start_C.

Contributor Author:

The issue is that the malloc function is not only called here, but also called in __wasi_thread_spawn when allocating the aux stack:
https://github.com/bytecodealliance/wasm-micro-runtime/blob/main/core/iwasm/libraries/thread-mgr/thread_manager.c#L147

Two threads may call malloc simultaneously.

If the test case isn't modified, an out-of-bounds memory access exception is often thrown.
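
A minimal sketch of the workaround this implies, assuming the buffers can be allocated in the main thread before any thread is spawned so malloc is never entered concurrently; NUM_THREADS and vals mirror the test's names, the rest is illustrative.

/* Hypothetical, condensed version of the test's setup: everything the
   workers need is allocated up front, before any thread is spawned. */
#include <assert.h>
#include <stdlib.h>

#define NUM_THREADS 3 /* illustrative value */

static int *vals[NUM_THREADS];

static void
allocate_before_spawn(void)
{
    for (int i = 0; i < NUM_THREADS; i++) {
        vals[i] = malloc(sizeof(int)); /* main thread only: no concurrent malloc */
        assert(vals[i] != NULL && "Failed to call malloc");
    }
    /* ...only now spawn the worker threads, which may use vals[i]
       but do not call malloc themselves. */
}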

@eloparco (Contributor)

Would a race condition be possible between

exec_env_tls->module_inst = (WASMModuleInstanceCommon *)module_inst;
and
return exec_env->module_inst;
?

I was updating the tests to avoid using pthread sync primitives (since they seem to cause all the load/store warnings in the sanitizer in classic interpreter mode) and I noticed those. But I didn't finish validating the changes; I'll push a PR once I'm done, and if necessary we can address possible issues separately.

@wenyongh (Contributor Author)

Okay I missed that

Remove notify_stale_threads_on_exception and change atomic.wait
to be interruptible by repeatedly waiting and re-checking every second,
like the implementation of poll_oneoff in libc-wasi

What's the advantage?

Letting the thread actively check the suspend flags should be better than waiting passively; with the latter, there are several potential issues:
(1) Memory allocation may fail in notify_stale_threads_on_exception, and if it fails, the atomically waiting threads won't be notified.
(2) notify_stale_threads_on_exception is called only once, and it is not easy to ensure that no new atomic wait starts after that, e.g., a follow-up atomic.wait opcode or a newly created thread entering atomic.wait; if a new atomic wait does occur, those threads may not be notified.

@wenyongh (Contributor Author) commented Mar 14, 2023

Would a race condition be possible between

exec_env_tls->module_inst = (WASMModuleInstanceCommon *)module_inst;

and

return exec_env->module_inst;

?
I was updating the tests to avoid using pthread sync primitives (since they seem to cause all the load/store warnings in the sanitizer in classic interpreter mode) and I noticed those. But I didn't finish validating the changes; I'll push a PR once I'm done, and if necessary we can address possible issues separately.

I don't think there will be: in the former, exec_env_tls is the parent thread's exec_env, but the cluster->lock is held there, and I don't see when another thread would need to read exec_env->module_inst at that point. In the thread manager, wasm_exec_env_get_module_inst is called in thread_manager_start_routine, allocate_aux_stack and free_aux_stack, but that should not be related to this issue.

@g0djan (Contributor) commented Mar 14, 2023

(2) notify_stale_threads_on_exception is called only once, and it is not easy to ensure that no new atomic wait starts after that, e.g., a follow-up atomic.wait opcode or a newly created thread entering atomic.wait; if a new atomic wait does occur, those threads may not be notified.

That was something @eloparco and I were concerned about, but it was sorted out in #2016.

Let's say there is such a situation:

There are only 3 possibilities, because both critical sections, for the main thread and the 3rd thread, are under the same lock:

  • The 3rd thread enters the section first and the flag is already set, so it won't wait
  • The 3rd thread enters the section first and the flag is not set, so the 3rd thread starts waiting on the cond_var; it therefore releases the lock, the main thread enters the critical section and notifies the cond_var, so the 3rd thread wakes up and won't wait any more
  • The main thread enters the critical section first and notifies the cond_var before the 3rd thread has started waiting on it; the main thread then exits the critical section and the 3rd thread enters it, but the flag is guaranteed to be set already (that happens before notifying) and it is checked before waiting starts, so the 3rd thread won't wait

So a newly created thread won't wait in any case, and the situation is actually not possible. In testing 100+ times it never hung at all after the fix.
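
As a minimal, self-contained illustration of the pattern being described (illustrative names only, not the thread manager's actual fields): the notifier sets the flag and signals under the same lock the waiter uses to re-check the flag before blocking, so an "early" notify cannot be lost.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool flag = false;

/* e.g. the main thread on proc_exit */
void notifier(void)
{
    pthread_mutex_lock(&lock);
    flag = true;                         /* set the flag before notifying */
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

/* e.g. a newly created 3rd thread */
void waiter(void)
{
    pthread_mutex_lock(&lock);
    while (!flag)                        /* flag is re-checked under the lock, */
        pthread_cond_wait(&cond, &lock); /* so a missed notify cannot block us */
    pthread_mutex_unlock(&lock);
}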

@g0djan (Contributor) commented Mar 14, 2023

(1) Memory allocation may fail in notify_stale_threads_on_exception, and if it fails, the atomically waiting threads won't be notified.

Yeah, that would be bad. I suppose it's the only place where we now allocate memory after setting the flag but before notifying. Another way to avoid it would be to allocate the memory in advance.

Looks reasonable to me to merge then, but I would consider returning to using the cond_var and allocating the memory in advance.

@wenyongh (Contributor Author) commented Mar 14, 2023

(2) notify_stale_threads_on_exception is called only once, and it is not easy to ensure that no new atomic wait starts after that, e.g., a follow-up atomic.wait opcode or a newly created thread entering atomic.wait; if a new atomic wait does occur, those threads may not be notified.

That was something @eloparco and I were concerned about, but it was sorted out in #2016.

Let's say there is such a situation:

There are only 3 possibilities, because both critical sections, for the main thread and the 3rd thread, are under the same lock:

  • The 3rd thread enters the section first and the flag is already set, so it won't wait
  • The 3rd thread enters the section first and the flag is not set, so the 3rd thread starts waiting on the cond_var; it therefore releases the lock, the main thread enters the critical section and notifies the cond_var, so the 3rd thread wakes up and won't wait any more
  • The main thread enters the critical section first and notifies the cond_var before the 3rd thread has started waiting on it; the main thread then exits the critical section and the 3rd thread enters it, but the flag is guaranteed to be set already (that happens before notifying) and it is checked before waiting starts, so the 3rd thread won't wait

So a newly created thread won't wait in any case, and the situation is actually not possible. In testing 100+ times it never hung at all after the fix.

OK, sounds reasonable, I was just concerned about that. It is good if this is not an issue. Thanks for the detailed explanation.

@wenyongh (Contributor Author)

(1) Memory allocation may fail in notify_stale_threads_on_exception, and if it fails, the atomically waiting threads won't be notified.

Yeah, that would be bad. I suppose it's the only place where we now allocate memory after setting the flag but before notifying. Another way to avoid it would be to allocate the memory in advance.

Looks reasonable to me to merge then, but I would consider returning to using the cond_var and allocating the memory in advance.

Yes, but we cannot know the number of atomically waiting threads in advance, so we don't know how much memory to pre-allocate. And even if we could, the memory allocation might still fail.

My personal opinion is to use the new approach: first, it keeps the same strategy as the poll_oneoff handling in libc-wasi:
https://github.com/bytecodealliance/wasm-micro-runtime/blob/main/core/iwasm/libraries/libc-wasi/libc_wasi_wrapper.c#L1017-L1048
Second, the algorithm is simpler than before, with less code and no memory allocation.

@g0djan (Contributor) commented Mar 14, 2023

Okay, wait a bit before merging. @hritikgupta thinks he found some problems; I will try to reproduce them.

@hritikgupta (Contributor) commented Mar 14, 2023

Hi @wenyongh, we tested the changes with internal tests, and it looks like execution gets stuck on main_proc_exit_wait, main_proc_exit_sleep and main_trap_sleep when running for, say, >= 300 iterations (sometimes it hangs within a few iterations); we are continuing to test more. cc @g0djan

@g0djan (Contributor) commented Mar 14, 2023

At least the following tests get stuck on this PR if run for enough iterations:

  • main_proc_exit_sleep
  • main_trap_sleep
  • main_proc_exit_wait
  • nonmain_proc_exit_wait

I checked with gdb what's going on, and apparently all these tests get stuck the same way: 4 threads, including the main one, wait indefinitely on a cond_var (I suppose it's the same one for 3 of the threads and another one for the last thread).

Fortunately these tests get stuck the same way on the main branch too, so I believe this bug is not coming from this PR.

Also, nonmain_proc_exit_busy fails almost constantly on the main branch, but it works properly with this PR.

Below I attached the backtrace of the main thread from gdb; the others are stuck on the same line, and only the beginning of the backtrace differs because those threads don't start in main. I saved a core dump (8 GB) that I'm going to dig into later.

__futex_abstimed_wait_common64 (private=344888800, cancel=true, abstime=0x7ffc23c289f0, op=393, expected=0, futex_word=0x564914930cf0) at ./nptl/futex-internal.c:57
57      ./nptl/futex-internal.c: No such file or directory.
(gdb) info threads
  Id   Target Id                                 Frame 
* 1    Thread 0x7f50ae06a880 (LWP 12295) "iwasm" __futex_abstimed_wait_common64 (private=344888800, cancel=true, abstime=0x7ffc23c289f0, op=393, expected=0, 
    futex_word=0x564914930cf0) at ./nptl/futex-internal.c:57
  2    Thread 0x7f50ae061640 (LWP 12296) "iwasm" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7f50ae05ffc0, op=393, expected=0, futex_word=0x7f4ea8000cd0)
    at ./nptl/futex-internal.c:57
  3    Thread 0x7f50ae050640 (LWP 12297) "iwasm" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7f50ae04efc0, op=393, expected=0, futex_word=0x7f4ea0000cd0)
    at ./nptl/futex-internal.c:57
  4    Thread 0x7f50ae03f640 (LWP 12298) "iwasm" __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7f50ae03dfc0, op=393, expected=0, futex_word=0x7f4ea4000cd0)
    at ./nptl/futex-internal.c:57
(gdb) bt
#0  __futex_abstimed_wait_common64 (private=344888800, cancel=true, abstime=0x7ffc23c289f0, op=393, expected=0, futex_word=0x564914930cf0) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (cancel=true, private=344888800, abstime=0x7ffc23c289f0, clockid=1074462730, expected=0, futex_word=0x564914930cf0) at ./nptl/futex-internal.c:87
#2  __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x564914930cf0, expected=expected@entry=0, clockid=clockid@entry=0, 
    abstime=abstime@entry=0x7ffc23c289f0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007f50ae100f1b in __pthread_cond_wait_common (abstime=0x7ffc23c289f0, clockid=0, mutex=0x564914930ca0, cond=0x564914930cc8) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_timedwait64 (cond=0x564914930cc8, mutex=0x564914930ca0, abstime=0x7ffc23c289f0) at ./nptl/pthread_cond_wait.c:652
#5  0x0000564912dfc77c in os_cond_reltimedwait (cond=0x564914930cc8, mutex=0x564914930ca0, useconds=1000000)
    at /workspaces/wasm-micro-runtime/core/shared/platform/common/posix/posix_thread.c:298
#6  0x0000564912de8f18 in wasm_runtime_atomic_wait (module=0x5649148e80a0, address=0x7f4eae000f90, expect=1, timeout=-1, wait64=false)
    at /workspaces/wasm-micro-runtime/core/iwasm/common/wasm_shared_memory.c:448
#7  0x0000564912e132ee in wasm_interp_call_func_bytecode (module=0x5649148e80a0, exec_env=0x5649148e95e0, cur_func=0x5649148e9230, prev_frame=0x5649148ea980)
    at /workspaces/wasm-micro-runtime/core/iwasm/interpreter/wasm_interp_classic.c:3429
#8  0x0000564912e16ce0 in wasm_interp_call_wasm (module_inst=0x5649148e80a0, exec_env=0x5649148e95e0, function=0x5649148e8600, argc=0, argv=0x0)
    at /workspaces/wasm-micro-runtime/core/iwasm/interpreter/wasm_interp_classic.c:4228
#9  0x0000564912dec8d3 in call_wasm_with_hw_bound_check (module_inst=0x5649148e80a0, exec_env=0x5649148e95e0, function=0x5649148e8600, argc=0, argv=0x0)
    at /workspaces/wasm-micro-runtime/core/iwasm/interpreter/wasm_runtime.c:2281
--Type <RET> for more, q to quit, c to continue without paging--
#10 0x0000564912deca05 in wasm_call_function (exec_env=0x5649148e95e0, function=0x5649148e8600, argc=0, argv=0x0)
    at /workspaces/wasm-micro-runtime/core/iwasm/interpreter/wasm_runtime.c:2345
#11 0x0000564912de513d in wasm_runtime_call_wasm (exec_env=0x5649148e95e0, function=0x5649148e8600, argc=0, argv=0x0)
    at /workspaces/wasm-micro-runtime/core/iwasm/common/wasm_runtime_common.c:1927
#12 0x0000564912de2143 in execute_main (module_inst=0x5649148e80a0, argc=1, argv=0x7ffc23c29c58) at /workspaces/wasm-micro-runtime/core/iwasm/common/wasm_application.c:110
#13 0x0000564912de257f in wasm_application_execute_main (module_inst=0x5649148e80a0, argc=1, argv=0x7ffc23c29c58)
    at /workspaces/wasm-micro-runtime/core/iwasm/common/wasm_application.c:210
#14 0x0000564912ddfc87 in app_instance_main (module_inst=0x5649148e80a0) at /workspaces/wasm-micro-runtime/product-mini/platforms/linux/../posix/main.c:103
#15 0x0000564912de11f3 in main (argc=1, argv=0x7ffc23c29c58) at /workspaces/wasm-micro-runtime/product-mini/platforms/linux/../posix/main.c:740

@g0djan (Contributor) commented Mar 14, 2023

@hritikgupta thanks for spotting the problem!
@wenyongh I think the PR is good to go

If anyone would like to investigate the problem, I suggest doing it without this change because it reproduces much faster.

@wenyongh (Contributor Author)

@hritikgupta thanks for spotting the problem! @wenyongh I think the PR is good to go

OK, thanks, let's merge this PR first and then investigate the reported issue later.

@wenyongh merged commit bab2402 into bytecodealliance:main on Mar 14, 2023
@wenyongh (Contributor Author) commented Mar 15, 2023

At least the following tests get stuck on this PR if run for enough iterations:

  • main_proc_exit_sleep
  • main_trap_sleep
  • main_proc_exit_wait
  • nonmain_proc_exit_wait

I checked with gdb what's going on, and apparently all these tests get stuck the same way: 4 threads, including the main one, wait indefinitely on a cond_var (I suppose it's the same one for 3 of the threads and another one for the last thread).

Fortunately these tests get stuck the same way on the main branch too, so I believe this bug is not coming from this PR.

Also, nonmain_proc_exit_busy fails almost constantly on the main branch, but it works properly with this PR.

I roughly checked the issue: all threads got stuck in pthread_barrier_wait, and wasi_proc_exit hadn't been executed yet.

@wenyongh deleted the fix_thread_mgr branch on March 15, 2023 07:22
@eloparco (Contributor) commented Mar 15, 2023

I roughly checked the issue: all threads got stuck in pthread_barrier_wait, and wasi_proc_exit hadn't been executed yet.

Yes, underneath pthread_barrier_wait uses atomic wait/notify operations (https://github.com/WebAssembly/wasi-libc/blob/f2a35a454e8472b63831885daacc7f5fadd46747/libc-top-half/musl/src/thread/pthread_barrier_wait.c#L12), so maybe something is still broken in wasm_shared_memory.c.

Actually, I was seeing the same problem yesterday in #2028 even when implementing barrier_wait manually (using atomic primitives); initially I thought it was a bug in my implementation, but I'm not sure anymore.

@wenyongh (Contributor Author)

I roughly checked the issue: all threads got stuck in pthread_barrier_wait, and wasi_proc_exit hadn't been executed yet.

Yes, underneath pthread_barrier_wait uses atomic wait/notify operations (https://github.com/WebAssembly/wasi-libc/blob/f2a35a454e8472b63831885daacc7f5fadd46747/libc-top-half/musl/src/thread/pthread_barrier_wait.c#L12), so maybe something is still broken in wasm_shared_memory.c.

Actually, I was seeing the same problem yesterday in #2028 even when implementing barrier_wait manually (using atomic primitives); initially I thought it was a bug in my implementation, but I'm not sure anymore.

@eloparco @g0djan @hritikgupta I think I found the root cause of the hang: the opcodes generated by wasi-libc's a_store are not atomic operations. I submitted a PR to wasi-libc to fix it:
WebAssembly/wasi-libc#403
and also submitted a PR to WAMR to fix/refine the code:
#2044

Now the hang no longer seems to occur.
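
To illustrate the class of bug being described, here is a hedged C sketch (not the actual wasi-libc code): a futex/barrier-style protocol only works if the state update is a real atomic store that concurrent atomic.wait waiters can rely on. The names below are illustrative.

#include <stdatomic.h>

static int plain_state;          /* plain variable */
static _Atomic int atomic_state; /* what the wait/notify protocol needs */

void broken_release(void)
{
    /* If an "atomic" store helper lowers to something like this, the engine
       sees an ordinary (non-atomic) store opcode, which can defeat the
       wait/notify protocol. */
    plain_state = 1;
}

void correct_release(void)
{
    /* An explicit atomic store (followed by a notify of the waiters) is the
       behavior the barrier code relies on. */
    atomic_store_explicit(&atomic_state, 1, memory_order_seq_cst);
}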

victoryang00 pushed a commit to victoryang00/wamr-aot-gc-checkpoint-restore that referenced this pull request on May 27, 2024.