This repository was archived by the owner on Feb 25, 2025. It is now read-only.

Conversation

@gaaclarke (Member) commented Apr 26, 2023

This gives ownership of the worker concurrent loop to the ContextVK, which guarantees that the concurrent message loop is destroyed before the ContextVK is destroyed. Without it there are random crashes when tasks on the concurrent message loop execute while, or after, the ContextVK is destroyed.

fixes flutter/flutter#125522
fixes flutter/flutter#125519

Test coverage: Covered by existing tests (see linked issue)
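For illustration, a minimal sketch of the ownership change (type and member names are illustrative, not the exact engine code):

#include <memory>

// Stand-in for fml::ConcurrentMessageLoop; its destructor joins all
// worker threads.
class ConcurrentMessageLoop {
 public:
  ~ConcurrentMessageLoop() { /* joins all worker threads */ }
};

class ContextVK final {
 public:
  ~ContextVK() {
    // Destroy (and thereby join) the worker loop first, before the
    // Vulkan device and the other members are torn down.
    worker_message_loop_.reset();
  }

 private:
  std::unique_ptr<ConcurrentMessageLoop> worker_message_loop_;
};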

Pre-launch Checklist

  • I read the Contributor Guide and followed the process outlined there for submitting PRs.
  • I read the Tree Hygiene wiki page, which explains my responsibilities.
  • I read and followed the Flutter Style Guide and the C++, Objective-C, Java style guides.
  • I listed at least one issue that this PR fixes in the description above.
  • I added new tests to check the change I am making or feature I am adding, or Hixie said the PR is test-exempt. See testing the engine for instructions on writing and running engine tests.
  • I updated/added relevant documentation (doc comments with ///).
  • I signed the CLA.
  • All existing and new tests are passing.

If you need help, consider asking for advice on the #hackers-new channel on Discord.

@gaaclarke gaaclarke force-pushed the fix-concurrent-task-queue-vulkan branch 2 times, most recently from 0e58418 to fe0ab34 on April 26, 2023 21:17
@gaaclarke gaaclarke force-pushed the fix-concurrent-task-queue-vulkan branch from fe0ab34 to b8b9a82 on April 26, 2023 21:30
Comment on lines -57 to -65
// Don't just drop tasks on the floor in case of shutdown.
if (shutdown_) {
  FML_DLOG(WARNING)
      << "Tried to post a task to shutdown concurrent message "
         "loop. The task will be executed on the callers thread.";
  lock.unlock();
  task();
  return;
}
gaaclarke (Member, Author):

Tasks could always be dropped when shutting down a ConcurrentMessageLoop; there was no guarantee that they would all be executed, despite a few places that attempted to guarantee it. Specifically, if more than N tasks were queued up when the loop was deleted, tasks [N+1, ..] would be dropped.

Dropping late tasks matches the behavior elsewhere in the engine (for example, platform channel messages).
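A simplified sketch of the drop-on-shutdown behavior described above (assuming members like shutdown_, tasks_, tasks_mutex_, and tasks_condition_ similar to fml's ConcurrentMessageLoop; this is not the exact engine code):

void ConcurrentMessageLoop::PostTask(const std::function<void()>& task) {
  std::unique_lock lock(tasks_mutex_);
  if (shutdown_) {
    // Drop the late task instead of running it on the caller's thread.
    return;
  }
  tasks_.push(task);
  tasks_condition_.notify_one();
}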

Contributor:

Mainly notes for myself:

  • This is ok because we don't give these workers to Dart. We only use them internally for SKP warmup and for image decoding.

@gaaclarke gaaclarke marked this pull request as ready for review April 26, 2023 21:47
-  loop->PostTask(task);
-  return;
+  {
+    std::scoped_lock lock(weak_loop_mutex_);
gaaclarke (Member, Author):

This adds a small amount of overhead compared to the shared_ptr, but we could share the tasks_mutex_ if we wanted to add extra complexity.

@jason-simmons (Member) left a comment:

I'd like to avoid the complexity of managing the circular reference between the ConcurrentTaskLoop and the ConcurrentTaskRunner and the associated object lifetime issues.

Would it be feasible to have a design where the ContextVK provides an interface to the PipelineLibraryVK for queueing tasks so the PipelineLibraryVK does not need to directly hold a reference to the ConcurrentTaskRunner?

With that, the ContextVK would hold the only reference to the ConcurrentTaskRunner and could safely shut down the ConcurrentTaskRunner before deleting the Vulkan device.

{
  std::scoped_lock lock(weak_loop_mutex_);
  if (weak_loop_) {
    weak_loop_->PostTask(task);
Member:

There may be undefined behavior if a method like ConcurrentMessageLoop::PostTask is called on one thread after the object has begun deletion on a different thread.

At the point where this call happens, another thread may have already dropped the last reference to the ConcurrentTaskLoop, invoked delete, and called into the ConcurrentTaskLoop destructor.

The ConcurrentTaskLoop destructor will then try to acquire the weak_loop_mutex_, so the destructor will not make any further progress before the ConcurrentMessageLoop::PostTask call completes. However, I don't think it's guaranteed to be safe to execute the ConcurrentMessageLoop::PostTask call while the ConcurrentMessageLoop is in that state.
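A sketch of the interleaving being described (steps are illustrative):

// Thread A (task poster)              Thread B (owner)
// ----------------------------------  ----------------------------------
// runner->PostTask(task);             drops the last reference
//   lock weak_loop_mutex_             ~ConcurrentMessageLoop() entered
//   weak_loop_ != nullptr               blocks acquiring weak_loop_mutex_
//   weak_loop_->PostTask(task);         ...
//   unlock weak_loop_mutex_             clears weak_loop_, joins threads
//
// The mutex serializes the two threads, but Thread A is still calling a
// member function on an object whose deletion has already begun.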

gaaclarke (Member, Author):

> There may be undefined behavior if a method like ConcurrentMessageLoop::PostTask is called on one thread after the object has begun deletion on a different thread.

Yep, good observation. For it to be undefined behavior, ConcurrentTaskLoop would need virtual methods and a subclass; the subclass could then already have been deallocated while another thread is executing PostTask. ConcurrentTaskLoop doesn't have virtual methods, but those are easy to add by accident, introducing undefined behavior, so I've made the class final to avoid this.

The order of operations during deallocation is well defined. At the point where we are clearing out the reference, the ConcurrentMessageLoop instance variables are still valid and the memory for `this` is still around. Since the class is final, we know the destructor body is the very first thing that runs during deletion.

Let me know if you think we are still missing something.
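A condensed sketch of the pattern being defended (member names such as task_runner_ and Terminate() are illustrative):

// Because ConcurrentMessageLoop is final, no derived destructor can
// have run before this body; every member, including the back-pointer
// held by the task runner, is still valid here.
ConcurrentMessageLoop::~ConcurrentMessageLoop() {
  {
    std::scoped_lock lock(task_runner_->weak_loop_mutex_);
    task_runner_->weak_loop_ = nullptr;  // No more tasks can be posted.
  }
  Terminate();  // Then join the worker threads.
}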

@gaaclarke (Member, Author):

Thanks, Jason. Responses are inline.

> I'd like to avoid the complexity of managing the circular reference between the ConcurrentTaskLoop and the ConcurrentTaskRunner and the associated object lifetime issues.

That's how the existing design worked. I think the lifetime issues are resolved by making the class final (as noted above). The fact that the relationship is cyclical doesn't bother me because these classes are friends, live in the same file, and can't be created without one another, so the tight coupling is well managed. We could make ConcurrentTaskRunner hold an interface instead of a direct reference to a ConcurrentMessageLoop, but that would actually be more complicated without benefit.

> Would it be feasible to have a design where the ContextVK provides an interface to the PipelineLibraryVK for queueing tasks so the PipelineLibraryVK does not need to directly hold a reference to the ConcurrentTaskRunner?
>
> With that, the ContextVK would hold the only reference to the ConcurrentTaskRunner and could safely shut down the ConcurrentTaskRunner before deleting the Vulkan device.

That wouldn't work, unfortunately, because the ContextVK cannot safely shut down a ConcurrentTaskRunner. The only safe way to shut down a ConcurrentTaskRunner is to shut down the ConcurrentMessageLoop. The thread deleting the ContextVK needs to synchronize on shutting down the ConcurrentMessageLoop (and all of its threads) before it can delete the ContextVK. That's why ownership of the ConcurrentMessageLoop was given to the ContextVK.

@jason-simmons (Member):

Sorry - got confused between ConcurrentTaskRunner and ConcurrentMessageLoop.

As you noted, ConcurrentMessageLoop contains the mechanism for shutting down the worker threads. So to use the current implementation of ConcurrentMessageLoop, the ContextVK would need to hold the reference to the ConcurrentMessageLoop and shut down the worker threads before deleting the Vulkan device.

Embedders that create a ContextVK would need to know that ContextVK shuts down its ConcurrentMessageLoop and that the ConcurrentMessageLoop given to the ContextVK should not be used for any other purpose. However, I think that rule will be easier to understand and maintain versus understanding the new object relationships and lifetimes in this PR.

Some other potential ideas:

  • The ContextVK could create the ConcurrentMessageLoop internally instead of receiving it from the embedder.
  • The vk::Device could be wrapped in a shared object. Tasks queued by classes like PipelineLibraryVK could be given a reference to the device. (This assumes that it is safe for a task to continue using its reference to the vk::Device beyond the lifetime of the ContextVK.)

cc @chinmaygarde @dnfield

@gaaclarke (Member, Author):

> Embedders that create a ContextVK would need to know that ContextVK shuts down its ConcurrentMessageLoop and that the ConcurrentMessageLoop given to the ContextVK should not be used for any other purpose.

Remember that the ConcurrentMessageLoop is a unique_ptr, so it can't be used after it's given to the ContextVK.
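A tiny self-contained example of why the unique_ptr hand-off is statically enforceable (toy types, not the engine API):

#include <memory>
#include <utility>

struct Loop {};

struct Context {
  std::unique_ptr<Loop> loop_;
};

int main() {
  auto loop = std::make_unique<Loop>();
  Context context;
  context.loop_ = std::move(loop);
  // `loop` is now null: the embedder has no remaining handle to post
  // with, which a shared_ptr hand-off would not guarantee.
  return loop == nullptr ? 0 : 1;
}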

> The ContextVK could create the ConcurrentMessageLoop internally instead of receiving it from the embedder.

I thought about that. The only reason to pass the ConcurrentMessageLoop in is for testing or to control how many workers it has. It is never used for testing, and we could surface the worker-count parameter on the ContextVK constructor if we wanted. I just erred on the side of keeping the design similar to what was already there.

@chinmaygarde chinmaygarde changed the title [Impeller] Gives ownership of the worker concurrent loop to the context [Impeller] Gives ownership of the worker concurrent loop to the context. Apr 28, 2023
@flutter-dashboard:

It looks like this pull request may not have tests. Please make sure to add tests before merging. If you need an exemption to this rule, contact Hixie on the #hackers channel in Chat (don't just cc him here, he won't see it! He's on Discord!).

If you are not sure if you need tests, consider this rule of thumb: the purpose of a test is to make sure someone doesn't accidentally revert the fix. Ask yourself, is there anything in your PR that you feel it is important we not accidentally revert back to how it was before your fix?

Reviewers: Read the Tree Hygiene page and make sure this patch meets those guidelines before LGTMing.

@dnfield (Contributor) left a comment:

> Remember that the ConcurrentMessageLoop is a unique_ptr, so it can't be used after it's given to the ContextVK.

This is true as long as ContextVK takes a unique_ptr. There are other implementations in the engine using it as a shared_ptr, specifically dart_vm.h/cc. Why aren't we changing that to also use a unique_ptr?


-  std::weak_ptr<ConcurrentMessageLoop> weak_loop_;
+  // Raw pointer that is cleared out in ~ConcurrentMessageLoop.
+  ConcurrentMessageLoop* weak_loop_;
Contributor:

Consider adding a macro like IPLR_GUARDED_BY to fml/macros.h and annotating this.

Contributor:

(and defining the mutex first)

gaaclarke (Member, Author) commented May 1, 2023:

IPLR_GUARDED_BY is Impeller-only; it's in //impeller/base/thread_safety.h

Contributor:

Yes, which is why I'm saying to add something like FML_GUARDED_BY to //fml/macros.h
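Such a macro is typically a thin wrapper over clang's thread-safety annotations. A sketch of what an FML_GUARDED_BY could look like (hypothetical, since the macro did not exist in fml at the time; the analysis also requires building with -Wthread-safety):

#include <mutex>

#if defined(__clang__)
#define FML_GUARDED_BY(x) __attribute__((guarded_by(x)))
#else
#define FML_GUARDED_BY(x)
#endif

class ConcurrentMessageLoop;

class ConcurrentTaskRunner {
 private:
  // The mutex is declared first, then the member it guards.
  std::mutex weak_loop_mutex_;
  ConcurrentMessageLoop* weak_loop_ FML_GUARDED_BY(weak_loop_mutex_) = nullptr;
};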

gaaclarke (Member, Author):

Done

gaaclarke (Member, Author):

Sorry, now it's done (I had added it but forgot to commit the change where it is used).

Comment on lines 114 to 115
// Shut down the worker message loop before the context starts getting torn
// down.
Contributor:

I'm not sure how to read this comment correctly.

The context is currently getting torn down, right?

The extra reset here is guarding us against what? If someone adds another ivar after worker_message_loop_ that could somehow ref it?

gaaclarke (Member, Author):

I've reworded the comment; let me know if that is clearer.

> The context is currently getting torn down, right?

Right, but once we hit the closing bracket the ivars will start getting deleted in an undefined order (or at least a non-obvious order). We want the threads all joined before we start doing that.

> The extra reset here is guarding us against what?

It guards us against an unclear order of operations and makes destruction more deterministic. This, for example, makes sure that the shader_library_ is destructed on the raster thread instead of on some worker thread.
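The "non-obvious order" is C++'s rule that members are destroyed in reverse order of declaration after the destructor body finishes. A minimal illustration:

struct A {};
struct B {};
struct C {};

struct Example {
  A a_;  // destroyed last
  B b_;  // destroyed second
  C c_;  // destroyed first
};  // The ~Example() body runs first, then c_, b_, a_ in that order.

Explicitly resetting worker_message_loop_ in the destructor body pins the teardown to a known point regardless of where the member is declared.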

@gaaclarke (Member, Author):

> This is true as long as ContextVK takes a unique_ptr. There are other implementations in the engine using it as a shared_ptr, specifically dart_vm.h/cc. Why aren't we changing that to also use a unique_ptr?

Using it as a shared_ptr doesn't guarantee there is a problem. For example, if every task on the ConcurrentMessageLoop maintains a strong reference to everything it needs, it will not crash. The problem is that you can't enforce that statically; making it a unique_ptr lets us determine at compile time what is actually happening and when.

The linked issue had clear reproduction steps and is demonstrably broken, so that is what is addressed here, all while trying to make sure the changes make sense from ConcurrentMessageLoop's perspective. I didn't reason about dart_vm's usage since there is no filed issue that I know of; if there were a problem I'd expect it to manifest in its tests.

@gaaclarke gaaclarke requested a review from dnfield May 1, 2023 20:27
@dnfield (Contributor) left a comment:

LGTM with the nit about a new macro for FML.

Would appreciate a review from @chinmaygarde

@jason-simmons (Member):

@chinmaygarde @dnfield I'd like to get consensus on the general design for how to avoid races between Impeller Vulkan background tasks and the lifetime of the Vulkan device.

In particular I'm still concerned about the potential risk of calling a method on weak_loop_ if another thread has already started a delete of the ConcurrentMessageLoop and entered its destructor.

Of the options discussed here I think the simplest would be having the ContextVK create its ConcurrentMessageLoop internally. The ContextVK would then be responsible for ordering the shutdown of the ConcurrentMessageLoop thread pool before deleting the vk::Device.

@gaaclarke gaaclarke force-pushed the fix-concurrent-task-queue-vulkan branch from 993b513 to 09d6627 on May 1, 2023 20:47
@dnfield (Contributor) commented May 1, 2023:

> In particular I'm still concerned about the potential risk of calling a method on weak_loop_ if another thread has already started a delete of the ConcurrentMessageLoop and entered its destructor.

Won't that be prevented by the mutex usage?

> Of the options discussed here I think the simplest would be having the ContextVK create its ConcurrentMessageLoop internally. The ContextVK would then be responsible for ordering the shutdown of the ConcurrentMessageLoop thread pool before deleting the vk::Device.

I can't think of a good reason not to do this either though.

@jason-simmons (Member):

The weak_loop_mutex_ will ensure that the ConcurrentMessageLoop destructor does not make any further progress before the other thread's call to PostTask completes. But I'm still unsure whether the PostTask call is guaranteed to be safe if deletion has already started.

@gaaclarke (Member, Author) commented May 1, 2023:

> In particular I'm still concerned about the potential risk of calling a method on weak_loop_ if another thread has already started a delete of the ConcurrentMessageLoop and entered its destructor.

Can you explain more about how this is a risk? Maybe it's not a risk, or maybe you're seeing something I don't. You agree that calling a non-virtual method on this in a destructor is safe, right? We are doing that in ~ConcurrentMessageLoop(). Isn't that equivalent to a non-virtual method being called on this from a different thread while protected by a mutex?

@gaaclarke (Member, Author):

> Of the options discussed here I think the simplest would be having the ContextVK create its ConcurrentMessageLoop internally. The ContextVK would then be responsible for ordering the shutdown of the ConcurrentMessageLoop thread pool before deleting the vk::Device.
>
> I can't think of a good reason not to do this either though.

That's what we have, except we use a std::move instead of letting the context create it.

Is the idea that if we shut things down in the correct order we wouldn't need to clear out ConcurrentTaskRunner::weak_loop_ (i.e., delete the ConcurrentTaskRunner before the ConcurrentMessageLoop)? We can't statically assert that the ConcurrentTaskRunner is deleted before the ConcurrentMessageLoop without migrating ConcurrentTaskRunner to a unique_ptr too. That's possible, but I'd like the rationale for it to be well established first since it's a large refactor =)

@jason-simmons (Member):

Calling a non-virtual method within a destructor is safe. But in this case thread A is calling the non-virtual method while thread B has invoked the delete operator and called into the destructor. I don't know if there is any explicit guarantee that thread A's concurrent usage of the object is safe.

The current design of ConcurrentMessageLoop avoids this kind of issue because all access is done through shared_ptr and no object needs to hold a raw pointer.

If ContextVK creates the ConcurrentMessageLoop itself, then it can keep the shared_ptr<ConcurrentMessageLoop> private and ensure that nobody else holds a reference to it.

There may also need to be a different interface between PipelineLibraryVK and ContextVK in order to coordinate the object lifetimes. For example, PipelineLibraryVK could hold a weak_ptr to ContextVK and use that to call a ContextVK method that queues the task and hands the vk::Device to the task.

With something like this, the PipelineLibraryVK could avoid holding a copy of a vk::Device that could become invalidated. It also would not need to hold a reference to a ConcurrentTaskRunner - instead, all use of the device and the concurrent message loop would be centralized within ContextVK.
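A sketch of the suggested shape (the QueueDeviceTask method and the signatures are hypothetical):

// PipelineLibraryVK holds only a weak_ptr to the context; the context
// owns the loop and hands the device to each task it queues.
class PipelineLibraryVK {
 public:
  void SchedulePipelineBuild() {
    if (auto context = context_.lock()) {
      context->QueueDeviceTask([](const vk::Device& device) {
        // Build the pipeline with `device`. This never runs after the
        // context has shut down its message loop.
      });
    }
  }

 private:
  std::weak_ptr<ContextVK> context_;
};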

@gaaclarke (Member, Author):

Having synchronization on deleting the ConcurrentMessageLoop in ContextVK also lets us clean up our CommandPoolVK instances: #41651 (fixes issue flutter/flutter#125571)

@gaaclarke (Member, Author) commented May 1, 2023:

Dan, Jason, and I had a chat offline about this, and we decided we'd really like to get @chinmaygarde's opinion. This PR gives us the ability to shut down the ConcurrentMessageLoop before we shut down the device_ in the ContextVK. But since anyone can query the ContextVK for the device_, is that even sufficient?

It is demonstrably sufficient in solving the flake linked in the issue, and the CommandPoolVK leak, but we'd like to make sure there isn't more work that needs to be done to manage the lifetime of the device.

I would really like to see a deterministic ~ContextVK that makes sure everything is cleaned up synchronously; that's what this PR attempts to do. The question was raised, though, whether that is properly happening here or whether we need to switch to more shared references to make sure lifetimes are long enough to avoid crashes, specifically a shared reference to the VkDevice. My concern is that that is harder to maintain and a dying ContextVK may interfere with a new ContextVK.

cc @jason-simmons @dnfield (let me know if I missed anything)

@dnfield (Contributor) commented May 2, 2023:

I tried this patch:

diff --git a/impeller/renderer/backend/vulkan/context_vk.cc b/impeller/renderer/backend/vulkan/context_vk.cc
index d41ccd0a5f..2d175d6489 100644
--- a/impeller/renderer/backend/vulkan/context_vk.cc
+++ b/impeller/renderer/backend/vulkan/context_vk.cc
@@ -108,7 +108,7 @@ ContextVK::ContextVK() : hash_(CalculateHash(this)) {}
 
 ContextVK::~ContextVK() {
   if (device_) {
-    [[maybe_unused]] auto result = device_->waitIdle();
+    [[maybe_unused]] auto result = device_.waitIdle();
   }
   CommandPoolVK::ClearAllPools(this);
 }
@@ -273,7 +273,7 @@ void ContextVK::Setup(Settings settings) {
   device_info.setPEnabledFeatures(&required_features.value());
   // Device layers are deprecated and ignored.
 
-  auto device = physical_device->createDeviceUnique(device_info);
+  auto device = physical_device->createDevice(device_info);
   if (device.result != vk::Result::eSuccess) {
     VALIDATION_LOG << "Could not create logical device.";
     return;
@@ -291,7 +291,7 @@ void ContextVK::Setup(Settings settings) {
       weak_from_this(),                  //
       application_info.apiVersion,       //
       physical_device.value(),           //
-      device.value.get(),                //
+      device.value,                      //
       instance.value.get(),              //
       dispatcher.vkGetInstanceProcAddr,  //
       dispatcher.vkGetDeviceProcAddr     //
@@ -306,7 +306,7 @@ void ContextVK::Setup(Settings settings) {
   /// Setup the pipeline library.
   ///
   auto pipeline_library = std::shared_ptr<PipelineLibraryVK>(
-      new PipelineLibraryVK(device.value.get(),                   //
+      new PipelineLibraryVK(device.value,                         //
                             caps,                                 //
                             std::move(settings.cache_directory),  //
                             settings.worker_task_runner           //
@@ -317,11 +317,11 @@ void ContextVK::Setup(Settings settings) {
     return;
   }
 
-  auto sampler_library = std::shared_ptr<SamplerLibraryVK>(
-      new SamplerLibraryVK(device.value.get()));
+  auto sampler_library =
+      std::shared_ptr<SamplerLibraryVK>(new SamplerLibraryVK(device.value));
 
   auto shader_library = std::shared_ptr<ShaderLibraryVK>(
-      new ShaderLibraryVK(device.value.get(),              //
+      new ShaderLibraryVK(device.value,                    //
                           settings.shader_libraries_data)  //
   );
 
@@ -334,7 +334,7 @@ void ContextVK::Setup(Settings settings) {
   /// Create the fence waiter.
   ///
   auto fence_waiter =
-      std::shared_ptr<FenceWaiterVK>(new FenceWaiterVK(device.value.get()));
+      std::shared_ptr<FenceWaiterVK>(new FenceWaiterVK(device.value));
   if (!fence_waiter->IsValid()) {
     VALIDATION_LOG << "Could not create fence waiter.";
     return;
@@ -343,7 +343,7 @@ void ContextVK::Setup(Settings settings) {
   //----------------------------------------------------------------------------
   /// Fetch the queues.
   ///
-  QueuesVK queues(device.value.get(),      //
+  QueuesVK queues(device.value,            //
                   graphics_queue.value(),  //
                   compute_queue.value(),   //
                   transfer_queue.value()   //
@@ -363,7 +363,7 @@ void ContextVK::Setup(Settings settings) {
   instance_ = std::move(instance.value);
   debug_report_ = std::move(debug_report);
   physical_device_ = physical_device.value();
-  device_ = std::move(device.value);
+  device_ = device.value;
   allocator_ = std::move(allocator);
   shader_library_ = std::move(shader_library);
   sampler_library_ = std::move(sampler_library);
@@ -378,7 +378,7 @@ void ContextVK::Setup(Settings settings) {
   /// Label all the relevant objects. This happens after setup so that the debug
   /// messengers have had a chance to be setup.
   ///
-  SetDebugName(device_.get(), device_.get(), "ImpellerDevice");
+  SetDebugName(device_, device_, "ImpellerDevice");
 }
 
 // |Context|
@@ -422,7 +422,7 @@ vk::Instance ContextVK::GetInstance() const {
 }
 
 vk::Device ContextVK::GetDevice() const {
-  return *device_;
+  return device_;
 }
 
 std::unique_ptr<Surface> ContextVK::AcquireNextSurface() {
@@ -489,7 +489,7 @@ std::unique_ptr<CommandEncoderVK> ContextVK::CreateGraphicsCommandEncoder()
     return nullptr;
   }
   auto encoder = std::unique_ptr<CommandEncoderVK>(new CommandEncoderVK(
-      *device_,                //
+      device_,                 //
       queues_.graphics_queue,  //
       tls_pool,                //
       fence_waiter_            //
diff --git a/impeller/renderer/backend/vulkan/context_vk.h b/impeller/renderer/backend/vulkan/context_vk.h
index cc1376649b..e36a689894 100644
--- a/impeller/renderer/backend/vulkan/context_vk.h
+++ b/impeller/renderer/backend/vulkan/context_vk.h
@@ -77,7 +77,7 @@ class ContextVK final : public Context, public BackendCast<ContextVK, Context> {
 
   template <typename T>
   bool SetDebugName(T handle, std::string_view label) const {
-    return SetDebugName(*device_, handle, label);
+    return SetDebugName(device_, handle, label);
   }
 
   template <typename T>
@@ -126,7 +126,7 @@ class ContextVK final : public Context, public BackendCast<ContextVK, Context> {
   vk::UniqueInstance instance_;
   std::unique_ptr<DebugReportVK> debug_report_;
   vk::PhysicalDevice physical_device_;
-  vk::UniqueDevice device_;
+  vk::Device device_;
   std::shared_ptr<Allocator> allocator_;
   std::shared_ptr<ShaderLibraryVK> shader_library_;
   std::shared_ptr<SamplerLibraryVK> sampler_library_;

That was applied on top of main, and I am still able to easily reproduce a segfault when running ./out/host_debug_unopt_arm64/impeller_unittests --gtest_filter="*/Vulkan"

@dnfield (Contributor) commented May 2, 2023:

(The theory with that patch was that maybe we shouldn't use vk::UniqueDevice and instead just use vk::Device, in the hope that Vulkan would help keep things alive in a shared_ptr-like way, but it doesn't seem to work.)

@dnfield (Contributor) commented May 2, 2023:

Here's a backtrace with that change:

* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
  * frame #0: 0x0000000122c46658 libMoltenVK.dylib`MVKCommandPool::freeCommandBuffers(unsigned int, VkCommandBuffer_T* const*) + 272
    frame #1: 0x0000000122c1f484 libMoltenVK.dylib`vkFreeCommandBuffers + 72
    frame #2: 0x0000000102a04d88 impeller_unittests`void impeller::vk::PoolFree<impeller::vk::Device, impeller::vk::CommandPool, impeller::vk::DispatchLoaderDynamic>::destroy<impeller::vk::CommandBuffer>(impeller::vk::CommandBuffer) [inlined] void impeller::vk::Device::free<impeller::vk::DispatchLoaderDynamic>(this=0x000000010e5c5570, commandPool=(m_commandPool = 0x0000000110035200), commandBuffers=0x000000016fdfcab8, d=0x00000001043d40a0) const at vulkan_funcs.hpp:4217:5
    frame #3: 0x0000000102a04ca8 impeller_unittests`void impeller::vk::PoolFree<impeller::vk::Device, impeller::vk::CommandPool, impeller::vk::DispatchLoaderDynamic>::destroy<impeller::vk::CommandBuffer>(this=0x000000010e5c5570, t=(m_commandBuffer = 0x000000010e724eb8)) at vulkan.hpp:6140:7
    frame #4: 0x0000000102a0506c impeller_unittests`impeller::vk::UniqueHandle<impeller::vk::CommandBuffer, impeller::vk::DispatchLoaderDynamic>::~UniqueHandle(this=0x000000010e5c5570) at vulkan.hpp:1202:15
    frame #5: 0x0000000102a03710 impeller_unittests`impeller::vk::UniqueHandle<impeller::vk::CommandBuffer, impeller::vk::DispatchLoaderDynamic>::~UniqueHandle(this=0x000000010e5c5570) at vulkan.hpp:1199:5
    frame #6: 0x0000000102a1356c impeller_unittests`impeller::SharedObjectVKT<impeller::vk::CommandBuffer>::~SharedObjectVKT(this=0x000000010e5c5568) at shared_object_vk.h:20:7
    frame #7: 0x0000000102a13374 impeller_unittests`impeller::SharedObjectVKT<impeller::vk::CommandBuffer>::~SharedObjectVKT(this=0x000000010e5c5568) at shared_object_vk.h:20:7
    frame #8: 0x0000000102a1305c impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_ptr_emplace<impeller::SharedObjectVKT<impeller::vk::CommandBuffer>, std::_LIBCPP_ABI_NAMESPACE::allocator<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> > >::__on_zero_shared(this=0x000000010e5c5550) at shared_ptr.h:311:24
    frame #9: 0x00000001000290c4 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_count::__release_shared[abi:v15000](this=0x000000010e5c5550) at shared_ptr.h:174:9
    frame #10: 0x0000000100029030 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_weak_count::__release_shared[abi:v15000](this=0x000000010e5c5550) at shared_ptr.h:215:27
    frame #11: 0x0000000102a234a4 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> >::~shared_ptr[abi:v15000](this=0x000000010e5ae2b0) at shared_ptr.h:702:23
    frame #12: 0x0000000102a116c0 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> >::~shared_ptr[abi:v15000](this=0x000000010e5ae2b0) at shared_ptr.h:700:5
    frame #13: 0x0000000102a20618 impeller_unittests`void std::_LIBCPP_ABI_NAMESPACE::allocator_traits<std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::__tree_node<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> >, void*> > >::destroy[abi:v15000]<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> >, void, void>((null)=0x000000010e721548, __p=0x000000010e5ae2b0) at allocator_traits.h:319:15
    frame #14: 0x0000000102a204e0 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__tree<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> >, std::_LIBCPP_ABI_NAMESPACE::less<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> > >, std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> > > >::destroy(this=0x000000010e721540, __nd=0x000000010e5ae290) at __tree:1799:9
    frame #15: 0x0000000102a24ff0 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__tree<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> >, std::_LIBCPP_ABI_NAMESPACE::less<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> > >, std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> > > >::clear(this=0x000000010e721540) at __tree:1836:5
    frame #16: 0x0000000102a1171c impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::set<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> >, std::_LIBCPP_ABI_NAMESPACE::less<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> > >, std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::CommandBuffer> > > >::clear[abi:v15000](this=0x000000010e721540) at set:766:37
    frame #17: 0x0000000102a10cd0 impeller_unittests`impeller::CommandPoolVK::GarbageCollectBuffersIfAble(this=0x000000010e7214d0) at command_pool_vk.cc:134:23
    frame #18: 0x0000000102a11540 impeller_unittests`impeller::CommandPoolVK::CollectGraphicsCommandBuffer(this=0x000000010e7214d0, buffer=impeller::vk::UniqueCommandBuffer @ 0x000000016fdfcff0) at command_pool_vk.cc:127:3
    frame #19: 0x0000000102a05318 impeller_unittests`impeller::TrackedObjectsVK::~TrackedObjectsVK(this=0x000000010e723b68) at command_encoder_vk.cc:42:11
    frame #20: 0x0000000102a05214 impeller_unittests`impeller::TrackedObjectsVK::~TrackedObjectsVK(this=0x000000010e723b68) at command_encoder_vk.cc:32:23
    frame #21: 0x0000000102a02f70 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_ptr_emplace<impeller::TrackedObjectsVK, std::_LIBCPP_ABI_NAMESPACE::allocator<impeller::TrackedObjectsVK> >::__on_zero_shared(this=0x000000010e723b50) at shared_ptr.h:311:24
    frame #22: 0x00000001000290c4 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_count::__release_shared[abi:v15000](this=0x000000010e723b50) at shared_ptr.h:174:9
    frame #23: 0x0000000100029030 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_weak_count::__release_shared[abi:v15000](this=0x000000010e723b50) at shared_ptr.h:215:27
    frame #24: 0x0000000102a062cc impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::TrackedObjectsVK>::~shared_ptr[abi:v15000](this=0x000000010e723cd8) at shared_ptr.h:702:23
    frame #25: 0x00000001029f8aec impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::TrackedObjectsVK>::~shared_ptr[abi:v15000](this=0x000000010e723cd8) at shared_ptr.h:700:5
    frame #26: 0x00000001029fb2fc impeller_unittests`impeller::CommandEncoderVK::Submit(this=0x000000010e723cd8)::$_1::~$_1() at command_encoder_vk.cc:153:25
    frame #27: 0x00000001029f9a2c impeller_unittests`impeller::CommandEncoderVK::Submit(this=0x000000010e723cd8)::$_1::~$_1() at command_encoder_vk.cc:153:25
    frame #28: 0x0000000102a0cd38 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__compressed_pair_elem<impeller::CommandEncoderVK::Submit()::$_1, 0, false>::~__compressed_pair_elem(this=0x000000010e723cd8) at compressed_pair.h:30:8
    frame #29: 0x0000000102a0ccd8 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__compressed_pair<impeller::CommandEncoderVK::Submit()::$_1, std::_LIBCPP_ABI_NAMESPACE::allocator<impeller::CommandEncoderVK::Submit()::$_1> >::~__compressed_pair(this=0x000000010e723cd8) at compressed_pair.h:83:7
    frame #30: 0x0000000102a0cc78 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__compressed_pair<impeller::CommandEncoderVK::Submit()::$_1, std::_LIBCPP_ABI_NAMESPACE::allocator<impeller::CommandEncoderVK::Submit()::$_1> >::~__compressed_pair(this=0x000000010e723cd8) at compressed_pair.h:83:7
    frame #31: 0x0000000102a0e984 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__function::__alloc_func<impeller::CommandEncoderVK::Submit()::$_1, std::_LIBCPP_ABI_NAMESPACE::allocator<impeller::CommandEncoderVK::Submit()::$_1>, void ()>::destroy[abi:v15000](this=0x000000010e723cd8) at function.h:204:38
    frame #32: 0x0000000102a0c05c impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__function::__func<impeller::CommandEncoderVK::Submit()::$_1, std::_LIBCPP_ABI_NAMESPACE::allocator<impeller::CommandEncoderVK::Submit()::$_1>, void ()>::destroy(this=0x000000010e723cd0) at function.h:341:10
    frame #33: 0x0000000100388a44 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__function::__value_func<void ()>::~__value_func[abi:v15000](this=0x000000010e723cd0) at function.h:471:19
    frame #34: 0x00000001003889b8 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__function::__value_func<void ()>::~__value_func[abi:v15000](this=0x000000010e723cd0) at function.h:469:5
    frame #35: 0x0000000100388958 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::function<void ()>::~function(this=0x000000010e723cd0) at function.h:1174:43
    frame #36: 0x0000000100384078 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::function<void ()>::~function(this=0x000000010e723cd0) at function.h:1174:42
    frame #37: 0x0000000102a65008 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::pair<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > const, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >::~pair(this=0x000000010e723cc0) at pair.h:40:29
    frame #38: 0x0000000102a64fa4 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::pair<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > const, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >::~pair(this=0x000000010e723cc0) at pair.h:40:29
    frame #39: 0x0000000102a64d68 impeller_unittests`void std::_LIBCPP_ABI_NAMESPACE::allocator_traits<std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::__hash_node<std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, void*> > >::destroy[abi:v15000]<std::_LIBCPP_ABI_NAMESPACE::pair<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > const, std::_LIBCPP_ABI_NAMESPACE::function<void ((null)=0x000000010e66ed50, __p=0x000000010e723cc0)> >, void, void>(std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::__hash_node<std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, void*> >&, std::_LIBCPP_ABI_NAMESPACE::pair<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > const, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >*) at allocator_traits.h:319:15
    frame #40: 0x0000000102a64b1c impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__hash_table<std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::__unordered_map_hasher<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::hash<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::equal_to<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, true>, std::_LIBCPP_ABI_NAMESPACE::__unordered_map_equal<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::equal_to<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::hash<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, true>, std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> > > >::__deallocate_node(this=0x000000010e66ed40, __np=0x000000010e723cb0)> >, void*>*>*) at __hash_table:1523:9
    frame #41: 0x0000000102a64a58 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__hash_table<std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::__unordered_map_hasher<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::hash<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::equal_to<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, true>, std::_LIBCPP_ABI_NAMESPACE::__unordered_map_equal<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::equal_to<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::hash<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, true>, std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> > > >::~__hash_table(this=0x000000010e66ed40) at __hash_table:1464:5
    frame #42: 0x0000000102a649e4 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__hash_table<std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::__unordered_map_hasher<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::hash<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::equal_to<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, true>, std::_LIBCPP_ABI_NAMESPACE::__unordered_map_equal<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> >, std::_LIBCPP_ABI_NAMESPACE::equal_to<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::hash<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, true>, std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::__hash_value_type<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()> > > >::~__hash_table(this=0x000000010e66ed40) at __hash_table:1456:1
    frame #43: 0x0000000102a64984 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::unordered_map<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()>, std::_LIBCPP_ABI_NAMESPACE::hash<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::equal_to<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::pair<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > const, std::_LIBCPP_ABI_NAMESPACE::function<void ()> > > >::~unordered_map[abi:v15000](this=0x000000010e66ed40) at unordered_map:1153:5
    frame #44: 0x0000000102a60244 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::unordered_map<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> >, std::_LIBCPP_ABI_NAMESPACE::function<void ()>, std::_LIBCPP_ABI_NAMESPACE::hash<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::equal_to<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > >, std::_LIBCPP_ABI_NAMESPACE::allocator<std::_LIBCPP_ABI_NAMESPACE::pair<std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::SharedObjectVKT<impeller::vk::Fence> > const, std::_LIBCPP_ABI_NAMESPACE::function<void ()> > > >::~unordered_map[abi:v15000](this=0x000000010e66ed40) at unordered_map:1151:22
    frame #45: 0x0000000102a600c0 impeller_unittests`impeller::FenceWaiterVK::~FenceWaiterVK(this=0x000000010e66ecc0) at fence_waiter_vk.cc:24:1
    frame #46: 0x0000000102a602a4 impeller_unittests`impeller::FenceWaiterVK::~FenceWaiterVK(this=0x000000010e66ecc0) at fence_waiter_vk.cc:19:33
    frame #47: 0x0000000102a46668 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::default_delete<impeller::FenceWaiterVK>::operator(this=0x000000010e66fc78, __ptr=0x000000010e66ecc0)[abi:v15000](impeller::FenceWaiterVK*) const at unique_ptr.h:48:5
    frame #48: 0x0000000102a46214 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_ptr_pointer<impeller::FenceWaiterVK*, std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::FenceWaiterVK>::__shared_ptr_default_delete<impeller::FenceWaiterVK, impeller::FenceWaiterVK>, std::_LIBCPP_ABI_NAMESPACE::allocator<impeller::FenceWaiterVK> >::__on_zero_shared(this=0x000000010e66fc60) at shared_ptr.h:263:5
    frame #49: 0x00000001000290c4 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_count::__release_shared[abi:v15000](this=0x000000010e66fc60) at shared_ptr.h:174:9
    frame #50: 0x0000000100029030 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_weak_count::__release_shared[abi:v15000](this=0x000000010e66fc60) at shared_ptr.h:215:27
    frame #51: 0x00000001004806f8 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::FenceWaiterVK>::~shared_ptr[abi:v15000](this=0x00000001226844f8) at shared_ptr.h:702:23
    frame #52: 0x000000010047e91c impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::FenceWaiterVK>::~shared_ptr[abi:v15000](this=0x00000001226844f8) at shared_ptr.h:700:5
    frame #53: 0x0000000102a278d8 impeller_unittests`impeller::ContextVK::~ContextVK(this=0x0000000122684420) at context_vk.cc:114:1
    frame #54: 0x0000000102a27c58 impeller_unittests`impeller::ContextVK::~ContextVK(this=0x0000000122684420) at context_vk.cc:109:25
    frame #55: 0x0000000102a3be10 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::default_delete<impeller::ContextVK>::operator(this=0x0000000122683b18, __ptr=0x0000000122684420)[abi:v15000](impeller::ContextVK*) const at unique_ptr.h:48:5
    frame #56: 0x0000000102a3b9bc impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_ptr_pointer<impeller::ContextVK*, std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::ContextVK>::__shared_ptr_default_delete<impeller::ContextVK, impeller::ContextVK>, std::_LIBCPP_ABI_NAMESPACE::allocator<impeller::ContextVK> >::__on_zero_shared(this=0x0000000122683b00) at shared_ptr.h:263:5
    frame #57: 0x00000001000290c4 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_count::__release_shared[abi:v15000](this=0x0000000122683b00) at shared_ptr.h:174:9
    frame #58: 0x0000000100029030 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::__shared_weak_count::__release_shared[abi:v15000](this=0x0000000122683b00) at shared_ptr.h:215:27
    frame #59: 0x00000001000f7a48 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::Context>::~shared_ptr[abi:v15000](this=0x000000010e5c8508) at shared_ptr.h:702:23
    frame #60: 0x00000001000f58d0 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::shared_ptr<impeller::Context>::~shared_ptr[abi:v15000](this=0x000000010e5c8508) at shared_ptr.h:700:5
    frame #61: 0x00000001003a6068 impeller_unittests`impeller::PlaygroundImplVK::~PlaygroundImplVK(this=0x000000010e5c84d0) at playground_impl_vk.cc:118:37
    frame #62: 0x00000001003a613c impeller_unittests`impeller::PlaygroundImplVK::~PlaygroundImplVK(this=0x000000010e5c84d0) at playground_impl_vk.cc:118:37
    frame #63: 0x00000001003a619c impeller_unittests`impeller::PlaygroundImplVK::~PlaygroundImplVK(this=0x000000010e5c84d0) at playground_impl_vk.cc:118:37
    frame #64: 0x000000010038a658 impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::default_delete<impeller::PlaygroundImpl>::operator(this=0x000000010e5b6828, __ptr=0x000000010e5c84d0)[abi:v15000](impeller::PlaygroundImpl*) const at unique_ptr.h:48:5
    frame #65: 0x00000001003834ec impeller_unittests`std::_LIBCPP_ABI_NAMESPACE::unique_ptr<impeller::PlaygroundImpl, std::_LIBCPP_ABI_NAMESPACE::default_delete<impeller::PlaygroundImpl> >::reset[abi:v15000](this=0x000000010e5b6828, __p=0x0000000000000000) at unique_ptr.h:305:7
    frame #66: 0x0000000100383320 impeller_unittests`impeller::Playground::TeardownWindow(this=0x000000010e5b67f0) at playground.cc:144:9
    frame #67: 0x000000010277bf68 impeller_unittests`impeller::PlaygroundTest::TearDown(this=0x000000010e5b67f0) at playground_test.cc:40:3
    frame #68: 0x0000000103230984 impeller_unittests`void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(object=0x000000010e5b67f0, method=18 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00, location="TearDown()")(), char const*) at gtest.cc:2631:10
    frame #69: 0x0000000103203bfc impeller_unittests`void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(object=0x000000010e5b67f0, method=18 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00, location="TearDown()")(), char const*) at gtest.cc:2686:12
    frame #70: 0x0000000103203b58 impeller_unittests`testing::Test::Run(this=0x000000010e5b67f0) at gtest.cc:2714:3
    frame #71: 0x0000000103204940 impeller_unittests`testing::TestInfo::Run(this=0x000000010e340070) at gtest.cc:2885:11
    frame #72: 0x0000000103205f80 impeller_unittests`testing::TestSuite::Run(this=0x000000010e33c8a0) at gtest.cc:3044:30
    frame #73: 0x00000001032129ac impeller_unittests`testing::internal::UnitTestImpl::RunAllTests(this=0x000000010e3118d0) at gtest.cc:5913:44
    frame #74: 0x0000000103238014 impeller_unittests`bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(object=0x000000010e3118d0, method=94 25 21 03 01 00 00 00 00 00 00 00 00 00 00 00, location="auxiliary test code (environments or event listeners)")(), char const*) at gtest.cc:2631:10
    frame #75: 0x00000001032124dc impeller_unittests`bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(object=0x000000010e3118d0, method=94 25 21 03 01 00 00 00 00 00 00 00 00 00 00 00, location="auxiliary test code (environments or event listeners)")(), char const*) at gtest.cc:2686:12
    frame #76: 0x0000000103212348 impeller_unittests`testing::UnitTest::Run(this=0x0000000104a04cd8) at gtest.cc:5482:10
    frame #77: 0x0000000100bca160 impeller_unittests`RUN_ALL_TESTS() at gtest.h:2497:46
    frame #78: 0x0000000100bc9ffc impeller_unittests`main(argc=1, argv=0x000000016fdfe6c0) at run_all_unittests.cc:64:12
    frame #79: 0x0000000193457f28 dyld`start + 2236

@gaaclarke (Member, Author):

> (The theory with that patch was that maybe we shouldn't use vk::UniqueDevice and instead just use vk::Device, in the hope that Vulkan would help keep things alive in a shared_ptr-like way, but it doesn't seem to work.)

Ah, yeah. Looking at the source code for vk::Device, it just has a pointer to a VkDevice.
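Roughly what vulkan.hpp generates, heavily simplified; vk::Device is a trivially copyable value wrapper around the raw handle, with no ownership or reference counting:

namespace vk {
class Device {
 public:
  Device() = default;
  explicit Device(VkDevice device) : m_device(device) {}
  // Member functions dispatch through the raw handle. Copying a
  // vk::Device copies a pointer; it does not extend any lifetime.
 private:
  VkDevice m_device = VK_NULL_HANDLE;
};
}  // namespace vk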

FWIW, I see how the device reference is managed as a separate issue from this PR. Even if we did migrate the device to a shared pointer, we'd still want to synchronize on the worker threads to avoid odd behaviors and to give us the ability to clean up the thread_local global variable.

For example, something like:

ContextVK::~ContextVK() {
  device_.reset();
  worker_message_loop_.reset();
  // Cleans up the thread_local global variable.
  CommandPoolVK::ClearAllPools(this);
  // Assert that there are no lingering references to the device.
  FML_DCHECK(!weak_device_.lock());
}

Adopting this PR doesn't stop us from changing how device references are tracked.

@gaaclarke (Member, Author):

We had another offline meeting with Chinmay, and he made the point that having a shared_ptr to the concurrent message loop and passing it into the ContextVK was intentional, so that it could be shared across the whole process to limit the number of threads. This PR doesn't allow that. If we keep sharing the loop, there needs to be a shutoff mechanism so that tasks queued for execution after the ContextVK is deleted have a way to be aborted.

Chinmay was also asked if it would be safe to delete the VkDevice from an arbitrary thread and he believed so.

It sounds like the easiest thing to do will be to eliminate all instance-variable storage of the VkDevice and instead store a weak_ptr<ContextVK> which is locked and queried for the device when needed. This keeps things like the instance alive too (it's not just the device that we need to keep alive).
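A sketch of that pattern (capture and names are illustrative; the diff above already uses weak_from_this()):

// Queue work that re-acquires the context (and with it the device,
// instance, etc.) only at execution time.
worker_task_runner->PostTask([weak_context = context->weak_from_this()] {
  auto context = weak_context.lock();
  if (!context) {
    return;  // The ContextVK is gone; abort instead of touching device_.
  }
  vk::Device device = context->GetDevice();
  // ... use `device` while the strong reference keeps the context alive.
});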

I'm uncertain whether this approach will easily allow us to clean up our cached CommandPoolVKs, but Chinmay was of the opinion that we could probably switch back to creating them on demand. I made the point that maybe we could cache them by passing them through the stack instead of relying on a thread_local map.

cc @jason-simmons @dnfield @chinmaygarde

auto-submit bot pushed a commit that referenced this pull request May 9, 2023
…41748)

An alternative to #41527

This introduces a new protocol called `DeviceHolder` whose only purpose is to return a pointer to a device. That way people can hold weak_ptrs to the `DeviceHolder` and know to abort further work once the device is deleted (like loading pipelines on a background thread). This PR also changes the return type of ContextVK::GetDevice from a reference to a pointer, in the hope of better communicating what is happening and discouraging copying of the vk::Device.

I would actually recommend we stop holding the vk::Device in instance variables, since it isn't clear that it is just a raw pointer. That's sort of how we got into this problem.

fixes flutter/flutter#125522
fixes flutter/flutter#125519
~~fixes flutter/flutter#125571

Testing: Fixes flake in existing tests.
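A simplified sketch of the DeviceHolder protocol the commit describes (see #41748 for the real definition):

class DeviceHolder {
 public:
  virtual ~DeviceHolder() = default;
  // Only valid while the holder is alive.
  virtual const vk::Device& GetDevice() const = 0;
};

Consumers hold a std::weak_ptr<DeviceHolder>; if lock() fails, the device is gone and the queued work (for example, background pipeline creation) aborts.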

@chinmaygarde (Member):

I believe this is obsoleted by #41748. Closing.
