{"id":973,"date":"2013-04-04T08:54:00","date_gmt":"2013-04-04T08:54:00","guid":{"rendered":"https:\/\/blogs.msdn.microsoft.com\/dotnet\/2013\/04\/04\/net-memory-allocation-profiling-with-visual-studio-2012\/"},"modified":"2021-10-04T12:36:04","modified_gmt":"2021-10-04T19:36:04","slug":"net-memory-allocation-profiling-with-visual-studio-2012","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/dotnet\/net-memory-allocation-profiling-with-visual-studio-2012\/","title":{"rendered":".NET Memory Allocation Profiling with Visual Studio 2012"},"content":{"rendered":"<blockquote>\n<p><em>This post was written by Stephen Toub, a frequent contributor to the <a href=\"http:\/\/blogs.msdn.com\/b\/pfxteam\">Parallel Programming in .NET blog<\/a>. He shows us how Visual Studio 2012 and an attention to detail can help you discover unnecessary allocations in your app that can prevent it from achieving higher performance. <\/em><\/p>\n<\/blockquote>\n<p>Visual Studio 2012 has a wealth of valuable functionality, so much so that I periodically hear developers that already use Visual Studio asking for a feature the IDE already has and that they\u2019ve just never discovered. Other times, I hear developers asking about a specific feature, thinking it\u2019s meant for one purpose, not realizing it\u2019s really intended for another.<\/p>\n<p>Both of these cases apply to Visual Studio\u2019s <a href=\"http:\/\/msdn.microsoft.com\/library\/dd264966.aspx\">.NET memory allocation profiler<\/a>. Many developers that could benefit from it don\u2019t know it exists, and other developers have an incorrect expectation for its purpose. 
This is unfortunate, as the feature can provide a lot of value in particular scenarios; many developers will benefit from understanding, first, that it exists and, second, the scenarios it\u2019s intended for.<\/p>\n<h3>Why memory profiling?<\/h3>\n<p>When it comes to .NET and memory analysis, there are two primary reasons one would want to use a diagnostics tool:<\/p>\n<ol>\n<li><b>To discover memory leaks. <\/b>Leaks on a garbage-collecting runtime like the CLR manifest differently than do leaks in a non-garbage-collected environment, such as in code written in C\/C++. A leak in the latter typically occurs due to the developer not manually freeing some memory that was previously allocated. In a garbage collected environment, however, manually freeing memory isn\u2019t required, as that\u2019s the duty of the <a href=\"http:\/\/msdn.microsoft.com\/library\/0xy59wtx.aspx\">garbage collector<\/a> (GC). However, the GC can only release memory that is provably no longer in use, meaning memory to which no rooted references remain. Leaks in .NET code thus manifest when some memory that should have been collected is incorrectly still rooted, e.g. when a reference to the object is held by an event handler registered with a static event. A good memory analysis tool can help you find such leaks, for example by allowing you to take snapshots of the process at two different points and then comparing those snapshots to see which objects survived to the second point, and more importantly, why.<\/li>\n<li><b>To discover unnecessary allocations.<\/b> In .NET, allocation is often quite cheap. This cost is deceptive, however, as there are more costs later when the GC needs to clean up. The more memory that gets allocated, the more frequently the GC will need to run, and typically the more objects that survive collections, the more work the GC needs to do when it runs to determine which objects are no longer reachable. 
Thus, the more allocations a program does, the higher the GC costs will be. These GC costs are often negligible to the program\u2019s performance profile, but for certain kinds of apps, especially those on servers that require high-throughput operation, these costs can add up quickly and make a noticeable impact to the performance of the app. As such, a good memory analysis tool might help you to understand all of the allocation being done by the program, in order to help spot allocations you can potentially avoid.<\/li>\n<\/ol>\n<p>The .NET memory profiler included in Visual Studio 2012 (Professional and higher versions) was designed primarily to address the latter case of helping to discover unnecessary allocations, and it\u2019s quite useful towards that goal, as the rest of this post will explore. The tool is not tuned for the former case of finding and fixing memory leaks, though this is an area the Visual Studio diagnostics team is looking to address in depth for the future (you can see <a href=\"http:\/\/msdn.microsoft.com\/library\/windows\/apps\/jj819176.aspx\">such an experience for JavaScript<\/a> that was added to Visual Studio as part of <a href=\"http:\/\/blogs.msdn.com\/b\/somasegar\/archive\/2012\/11\/26\/visual-studio-2012-update-1-now-available.aspx\">VS2012.1<\/a>). While the tool today does have an <a href=\"http:\/\/msdn.microsoft.com\/library\/dd264934.aspx\">advanced option to track when objects are collected<\/a>, it doesn\u2019t help you to understand why objects weren\u2019t collected or why they were held onto longer than was expected.<\/p>\n<p>There are also other useful tools in this space. 
The downloadable <a href=\"http:\/\/www.microsoft.com\/download\/details.aspx?id=28567\">PerfView<\/a> <a href=\"http:\/\/blogs.msdn.com\/b\/dotnet\/archive\/2012\/10\/09\/improving-your-app-s-performance-with-perfview.aspx\">tool<\/a> doesn\u2019t provide as user-friendly an interface as does the .NET memory profiler in Visual Studio 2012, but it is a very powerful tool that supports both tasks of finding memory leaks and discovering unnecessary allocations. It also supports profiling Windows Store apps, which the .NET memory allocation profiler in Visual Studio 2012 does not support as of the writing of this post.<\/p>\n<h3>Example to be optimized<\/h3>\n<p>To better understand the memory profiler\u2019s role and how it can help, let\u2019s walk through an example. We\u2019ll start with the core method that we\u2019ll be looking to optimize (in a real-world case, you\u2019d likely be analyzing your whole application and narrowing in on the particular offending areas, but for the purpose of this example, we\u2019ll keep this constrained):<\/p>\n<div id=\"codeSnippetWrapper\">\n<blockquote>\n<pre class=\"csharpcode\"><span class=\"kwrd\">public<\/span> <span class=\"kwrd\">static<\/span> async Task&lt;T&gt; WithCancellation1&lt;T&gt;(<span class=\"kwrd\">this<\/span> Task&lt;T&gt; task, CancellationToken cancellationToken)\n{\n    var tcs = <span class=\"kwrd\">new<\/span> TaskCompletionSource&lt;<span class=\"kwrd\">bool<\/span>&gt;();\n    <span class=\"kwrd\">using<\/span> (cancellationToken.Register(() =&gt; tcs.TrySetResult(<span class=\"kwrd\">true<\/span>)))\n        <span class=\"kwrd\">if<\/span> (task != await Task.WhenAny(task, tcs.Task))\n            <span class=\"kwrd\">throw<\/span> <span class=\"kwrd\">new<\/span> OperationCanceledException(cancellationToken);\n    <span class=\"kwrd\">return<\/span> await task;\n}<\/pre>\n<\/blockquote><\/div>\n<p>The purpose of this small method is to enable code to await a task in a cancelable manner, meaning 
that regardless of whether the task has completed, the developer wants to be able to stop waiting for it. Instead of writing code like:<\/p>\n<blockquote>\n<pre class=\"csharpcode\">T result = await someTask;<\/pre>\n<\/blockquote>\n<p>the developer would write:<\/p>\n<blockquote>\n<pre class=\"csharpcode\">T result = await someTask.WithCancellation1(token);<\/pre>\n<\/blockquote>\n<p>and if cancellation is requested on the relevant CancellationToken before the task completes, an OperationCanceledException will be thrown. This is achieved in WithCancellation1 by wrapping the original task in an async method. The method creates a second task that will complete when cancellation is requested (by Registering a call to TrySetResult with the CancellationToken), and then uses Task.WhenAny to wait for either the original task or the cancellation task to complete. As soon as either does, the async method completes, either by throwing a cancellation exception if the cancellation task completed first, or by propagating the outcome of the original task by awaiting it. 
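To see the behavior from the caller\u2019s side, here is a minimal, self-contained sketch. The WithCancellation1 body is the one from the post; the surrounding Demo harness, the never-completing task, and the 100 ms cancellation delay are hypothetical choices for illustration:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class Extensions
{
    // The method from the post: makes awaiting 'task' cancelable via 'cancellationToken'.
    public static async Task<T> WithCancellation1<T>(this Task<T> task, CancellationToken cancellationToken)
    {
        var tcs = new TaskCompletionSource<bool>();
        using (cancellationToken.Register(() => tcs.TrySetResult(true)))
            if (task != await Task.WhenAny(task, tcs.Task))
                throw new OperationCanceledException(cancellationToken);
        return await task;
    }
}

class Demo
{
    static void Main() { RunAsync().Wait(); }

    static async Task RunAsync()
    {
        // A task that never completes, standing in for a long-running operation.
        Task<int> never = new TaskCompletionSource<int>().Task;

        var cts = new CancellationTokenSource();
        cts.CancelAfter(100); // hypothetical: request cancellation after ~100ms

        try
        {
            int result = await never.WithCancellation1(cts.Token);
            Console.WriteLine(result); // not reached here, since the task never completes
        }
        catch (OperationCanceledException)
        {
            // Task.WhenAny returned the cancellation task, so the wait was abandoned.
            Console.WriteLine("wait canceled");
        }
    }
}
```

Note that when cancellation wins the race, the thrown OperationCanceledException carries the token, so callers can tell which token triggered it.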
(For more details, see the blog post \u201c<a href=\"http:\/\/blogs.msdn.com\/b\/pfxteam\/archive\/2012\/10\/05\/how-do-i-cancel-non-cancelable-async-operations.aspx\">How do I cancel non-cancelable async operations?<\/a>\u201d)<\/p>\n<p>To understand the allocations involved in this method, we\u2019ll use a small harness method:<\/p>\n<blockquote>\n<pre class=\"csharpcode\"><span class=\"kwrd\">using<\/span> System;\n<span class=\"kwrd\">using<\/span> System.Threading;\n<span class=\"kwrd\">using<\/span> System.Threading.Tasks;\n \n<span class=\"kwrd\">class<\/span> Harness\n{\n    <span class=\"kwrd\">static<\/span> <span class=\"kwrd\">void<\/span> Main()\n    {\n        Console.ReadLine(); <span class=\"rem\">\/\/ wait until profiler attaches<\/span>\n        TestAsync().Wait();\n    }\n    <span class=\"kwrd\">static<\/span> async Task TestAsync()\n    {\n        var token = CancellationToken.None;\n        <span class=\"kwrd\">for<\/span> (<span class=\"kwrd\">int<\/span> i = 0; i &lt; 100000; i++)\n            await Task.FromResult(42).WithCancellation1(token);\n    }\n}\n \n<span class=\"kwrd\">static<\/span> <span class=\"kwrd\">class<\/span> Extensions\n{\n    <span class=\"kwrd\">public<\/span> <span class=\"kwrd\">static<\/span> async Task&lt;T&gt; WithCancellation1&lt;T&gt;(\n        <span class=\"kwrd\">this<\/span> Task&lt;T&gt; task, CancellationToken cancellationToken)\n    {\n        var tcs = <span class=\"kwrd\">new<\/span> TaskCompletionSource&lt;<span class=\"kwrd\">bool<\/span>&gt;();\n        <span class=\"kwrd\">using<\/span> (cancellationToken.Register(() =&gt; tcs.TrySetResult(<span class=\"kwrd\">true<\/span>)))\n            <span class=\"kwrd\">if<\/span> (task != await Task.WhenAny(task, tcs.Task))\n                <span class=\"kwrd\">throw<\/span> <span class=\"kwrd\">new<\/span> OperationCanceledException(cancellationToken);\n        <span class=\"kwrd\">return<\/span> await task;\n    
}\n}<\/pre>\n<\/blockquote>\n<p>The TestAsync method will iterate 100,000 times. Each time, it creates a new task, invokes WithCancellation1 on it, and awaits the result of that WithCancellation1 call. This await will complete synchronously: the task created by Task.FromResult is returned in an already completed state, and since the WithCancellation1 method itself doesn\u2019t introduce any additional asynchrony, the task it returns will complete synchronously as well.<\/p>\n<h3>Running the .NET memory allocation profiler<\/h3>\n<p>To start the memory allocation profiler, in Visual Studio go to the Analyze menu and select \u201cLaunch Performance Wizard\u2026\u201d. This will open a dialog like the following:<\/p>\n<p><img decoding=\"async\" title=\"image\" style=\"border: 0px currentcolor;margin-right: auto;margin-left: auto;float: none\" border=\"0\" alt=\"image\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2013\/04\/1586.image_6BAB9C20.png\" width=\"487\" height=\"419\" \/><\/p>\n<p>Choose \u201c.NET memory allocation (sampling)\u201d, click Next twice, followed by Finish (if this is the first time you\u2019ve used the profiler since you logged into Windows, you\u2019ll need to accept the elevation prompt so the profiler can start). At that point, the application will be launched and the profiler will start monitoring it for allocations (the harness code above also requires that you press \u2018Enter\u2019, in order to ensure the profiler has attached by the time the program starts the real test). When the app completes, or when you manually choose to stop profiling, the profiler will load symbols and will start analyzing the trace. 
That\u2019s a good time to go and get yourself a cup of coffee, or lunch, as depending on how many allocations occurred, the tool can take a while to do this analysis.<\/p>\n<p>When the analysis completes, we\u2019re presented with a summary of the allocations that occurred, including highlighting the functions that allocated the most memory, the types that resulted in the most memory allocated, and the types with the most instances allocated:<\/p>\n<p><img decoding=\"async\" title=\"image\" style=\"border: 0px currentcolor;margin-right: auto;margin-left: auto;float: none\" border=\"0\" alt=\"image\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2013\/04\/7652.image_1D3709B6.png\" width=\"624\" height=\"334\" \/><\/p>\n<p>From there, we can drill in further, by looking at the allocations summary (choose \u201cAllocation\u201d from the \u201cCurrent View\u201d dropdown):<\/p>\n<p><img decoding=\"async\" title=\"image\" style=\"border: 0px currentcolor;margin-right: auto;margin-left: auto;float: none\" border=\"0\" alt=\"image\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2013\/04\/5518.image_120D7F6C.png\" width=\"624\" height=\"205\" \/><\/p>\n<p>Here, we get to see a row for each type that was allocated, with the columns showing information about how many allocations were tracked, how much space was associated with those allocations, and what percentage of allocations mapped back to that type. 
We can also expand an entry to see the stack of method calls that resulted in these allocations:<\/p>\n<p><img decoding=\"async\" title=\"image\" style=\"border: 0px currentcolor;margin-right: auto;margin-left: auto;float: none\" border=\"0\" alt=\"image\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2013\/04\/2465.image_71F272AE.png\" width=\"624\" height=\"279\" \/><\/p>\n<p>By selecting the \u201cFunctions\u201d view, we can get a different pivot on this data, highlighting which functions allocated the most objects and bytes:<\/p>\n<p><img decoding=\"async\" title=\"image\" style=\"border: 0px currentcolor;margin-right: auto;margin-left: auto;float: none\" border=\"0\" alt=\"image\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2013\/04\/3036.image_3CE5E37E.png\" width=\"624\" height=\"212\" \/><\/p>\n<h3>Interpreting and acting on the profiling results<\/h3>\n<p>With this capability, we can analyze our example\u2019s results. First, we can see that there\u2019s a substantial number of allocations here, which might be surprising. 
After all, in our example we were using WithCancellation1 with a task that was already completed, which means there should have been very little work to do (with the task already done, there is nothing to cancel), and yet from the above trace we can see that each iteration of our example is resulting in:<\/p>\n<ul>\n<li>Three allocations of Task`1 (we ran the harness 100K times and can see there were ~300K allocations)<\/li>\n<li>Two allocations of Task[]<\/li>\n<li>One allocation each of TaskCompletionSource`1, Action, a compiler-generated type called &lt;&gt;c__DisplayClass2`1, and some type called CompleteOnInvokePromise<\/li>\n<\/ul>\n<p>That\u2019s nine allocations for a case where we might expect only one (the task allocation we explicitly asked for in the harness by calling Task.FromResult), with our WithCancellation1 method incurring eight allocations.<\/p>\n<p>For helper operations on tasks, it\u2019s actually fairly common to deal with already completed tasks, as oftentimes operations implemented to be asynchronous actually complete synchronously (e.g. one read operation on a network stream may buffer into memory enough additional data to fulfill a subsequent read operation). As such, optimizing for the already completed case can be really beneficial for performance. Let\u2019s try. 
Here\u2019s a second attempt at WithCancellation, one that optimizes for several \u201calready completed\u201d cases:<\/p>\n<pre class=\"csharpcode\"><span class=\"kwrd\">public<\/span> <span class=\"kwrd\">static<\/span> Task&lt;T&gt; WithCancellation2&lt;T&gt;(<span class=\"kwrd\">this<\/span> Task&lt;T&gt; task, CancellationToken cancellationToken)\n{\n    <span class=\"kwrd\">if<\/span> (task.IsCompleted || !cancellationToken.CanBeCanceled)\n        <span class=\"kwrd\">return<\/span> task;\n    <span class=\"kwrd\">else<\/span> <span class=\"kwrd\">if<\/span> (cancellationToken.IsCancellationRequested)\n        <span class=\"kwrd\">return<\/span> <span class=\"kwrd\">new<\/span> Task&lt;T&gt;(() =&gt; <span class=\"kwrd\">default<\/span>(T), cancellationToken);\n    <span class=\"kwrd\">else<\/span>\n        <span class=\"kwrd\">return<\/span> task.WithCancellation1(cancellationToken);\n}<\/pre>\n<p>This implementation checks:<\/p>\n<ul>\n<li>First, whether the task is already completed or whether the supplied CancellationToken can\u2019t be canceled; in both of those cases, there\u2019s no additional work needed, as cancellation can\u2019t be applied, and as such we can just return the original task immediately rather than spending any time or memory creating a new one.<\/li>\n<li>Then whether cancellation has already been requested; if it has, we can allocate a single already-canceled task to be returned, rather than spending the eight allocations we previously paid to invoke our original implementation.<\/li>\n<li>Finally, if none of these fast paths apply, we fall through to calling the original implementation.<\/li>\n<\/ul>\n<p>Re-profiling our micro-benchmark while using WithCancellation2 instead of WithCancellation1 provides a much improved outlook (you\u2019ll likely notice that the analysis completes much more quickly than it did before, already a sign 
that we\u2019ve significantly decreased memory allocation). Now we have just the primary allocation we expected, the one from Task.FromResult called from our TestAsync method in the harness:<\/p>\n<p><img decoding=\"async\" title=\"image\" style=\"border: 0px currentcolor;margin-right: auto;margin-left: auto;float: none\" border=\"0\" alt=\"image\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2013\/04\/7268.image_1CCAD6C1.png\" width=\"624\" height=\"115\" \/><\/p>\n<p>So, we\u2019ve now successfully optimized the case where the task is already completed, where cancellation can\u2019t be requested, or where cancellation has already been requested. What about the case where we do actually need to invoke the more complicated logic? Are there any improvements that can be made there?<\/p>\n<p>Let\u2019s change our benchmark to use a task that\u2019s not already completed by the time we invoke WithCancellation2, and also to use a token that can have cancellation requested. 
This will ensure we make it to the \u201cslow\u201d path:<\/p>\n<pre class=\"csharpcode\"><span class=\"kwrd\">static<\/span> async Task TestAsync()\n{\n    var token = <span class=\"kwrd\">new<\/span> CancellationTokenSource().Token;\n    <span class=\"kwrd\">for<\/span> (<span class=\"kwrd\">int<\/span> i = 0; i &lt; 100000; i++)\n    {\n        var tcs = <span class=\"kwrd\">new<\/span> TaskCompletionSource&lt;<span class=\"kwrd\">int<\/span>&gt;();\n        var t = tcs.Task.WithCancellation2(token);\n        tcs.SetResult(42);\n        await t;\n    }\n}<\/pre>\n<p>Profiling again provides more insight:<\/p>\n<p><img decoding=\"async\" title=\"image\" style=\"border: 0px currentcolor;margin-right: auto;margin-left: auto;float: none\" border=\"0\" alt=\"image\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2013\/04\/0486.image_7CAFCA03.png\" width=\"624\" height=\"132\" \/><\/p>\n<p>On this slow path, there are now 14 total allocations per iteration, including the 2 from our TestAsync harness (the TaskCompletionSource&lt;int&gt; we explicitly create, and the Task&lt;int&gt; it creates). At this point, we can use all of the information provided by the profiling results to understand where the remaining 12 allocations are coming from and then address them where relevant and possible. For example, let\u2019s look at two allocations specifically: the &lt;&gt;c__DisplayClass2`1 instance and one of the two Action instances. These two allocations will be unsurprising to anyone familiar with <a href=\"http:\/\/blogs.msdn.com\/b\/pfxteam\/archive\/2012\/02\/29\/10263921.aspx\">how the C# compiler handles closures<\/a>. Why do we have a closure? 
Because of this line:<\/p>\n<blockquote>\n<pre class=\"csharpcode\"><span class=\"kwrd\">using<\/span> (cancellationToken.Register(() =&gt; tcs.TrySetResult(<span class=\"kwrd\">true<\/span>)))<\/pre>\n<\/blockquote>\n<p>The call to Register is closing over the \u2018tcs\u2019 variable. But this isn\u2019t strictly necessary: the Register method has another overload that, instead of taking an Action, takes an Action&lt;object&gt; along with an object state to be passed to it. If we instead rewrite this line to use that state-based overload, along with a manually cached delegate, we can avoid the closure and those two allocations:<\/p>\n<blockquote>\n<pre class=\"csharpcode\"><span class=\"kwrd\">private<\/span> <span class=\"kwrd\">static<\/span> <span class=\"kwrd\">readonly<\/span> Action&lt;<span class=\"kwrd\">object<\/span>&gt; s_cancellationRegistration =\n    s =&gt; ((TaskCompletionSource&lt;<span class=\"kwrd\">bool<\/span>&gt;)s).TrySetResult(<span class=\"kwrd\">true<\/span>);\n\u2026\n<span class=\"kwrd\">using<\/span> (cancellationToken.Register(s_cancellationRegistration, tcs))\n<\/pre>\n<\/blockquote>\n<p>Rerunning the profiler confirms those two allocations are no longer occurring.<\/p>\n<h3>Start profiling today!<\/h3>\n<p>This cycle of profiling, finding and eliminating hotspots, and then going around again is a common approach to improving the performance of your code, whether using a CPU profiler or a memory profiler. So, if you find yourself in a scenario where you determine that minimizing allocations is important for the performance of your code, give the .NET memory allocation profiler in Visual Studio 2012 a try. 
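Pulling together the fast paths from WithCancellation2 and the closure-free registration, the whole optimized helper might look like the following sketch. The name WithCancellation3 and the SlowPath split are hypothetical; the bodies are assembled from the snippets above:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class TaskExtensions
{
    // Cached delegate: created once, so registering it allocates no closure
    // or fresh Action instance per call.
    private static readonly Action<object> s_cancellationRegistration =
        s => ((TaskCompletionSource<bool>)s).TrySetResult(true);

    public static Task<T> WithCancellation3<T>(this Task<T> task, CancellationToken cancellationToken)
    {
        // Fast path 1: the task is already done, or cancellation can never be requested.
        if (task.IsCompleted || !cancellationToken.CanBeCanceled)
            return task;

        // Fast path 2: cancellation already requested; a task constructed with an
        // already-canceled token starts out in the Canceled state, so awaiting it throws.
        if (cancellationToken.IsCancellationRequested)
            return new Task<T>(() => default(T), cancellationToken);

        // Slow path: race the original task against a task completed by cancellation.
        return SlowPath(task, cancellationToken);
    }

    private static async Task<T> SlowPath<T>(Task<T> task, CancellationToken cancellationToken)
    {
        var tcs = new TaskCompletionSource<bool>();
        using (cancellationToken.Register(s_cancellationRegistration, tcs))
            if (task != await Task.WhenAny(task, tcs.Task))
                throw new OperationCanceledException(cancellationToken);
        return await task;
    }
}
```

A side benefit of splitting the async portion into a separate SlowPath method is that the fast paths avoid the async state-machine overhead entirely.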
Feel free to <a href=\"https:\/\/aka.ms\/wpv10c\">download the sample project used in this blog post<\/a>.<\/p>\n<p>For more on profiling, see the blog of the <a href=\"http:\/\/blogs.msdn.com\/b\/visualstudioalm\/archive\/tags\/diagnostics\/\">Visual Studio Diagnostics team<\/a>, and ask them questions in the <a href=\"http:\/\/social.msdn.microsoft.com\/Forums\/en-US\/vsdebug\/threads\">Visual Studio Diagnostics forum<\/a>.<\/p>\n<p>Stephen Toub<\/p>\n<p>&#160;<\/p>\n<p>&#160;<\/p>\n<p>Follow us on Twitter (<a href=\"https:\/\/twitter.com\/dotnet\">@dotnet<\/a>) and Facebook (<a href=\"http:\/\/facebook.com\/dotnet\">dotnet<\/a>). You can follow other .NET teams, too: <a href=\"https:\/\/twitter.com\/aspnet\">@aspnet<\/a>\/<a href=\"http:\/\/facebook.com\/asp.net\">asp.net<\/a>, <a href=\"https:\/\/twitter.com\/efmagicunicorns\">@efmagicunicorns<\/a>\/<a href=\"https:\/\/www.facebook.com\/efmagicunicorns\">efmagicunicorns<\/a>, <a href=\"https:\/\/twitter.com\/visualstudio\">@visualstudio<\/a>\/<a href=\"https:\/\/www.facebook.com\/visualstudio\">visualstudio<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This post was written by Stephen Toub, a frequent contributor to the Parallel Programming in .NET blog. He shows us how Visual Studio 2012 and an attention to detail can help you discover unnecessary allocations in your app that can prevent it from achieving higher performance. 
Visual Studio 2012 has a wealth of valuable functionality, [&hellip;]<\/p>\n","protected":false},"author":11288,"featured_media":58792,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[685],"tags":[11,36,59,108,147],"class_list":["post-973","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-dotnet","tag-net-framework","tag-async","tag-diagnostics","tag-performance","tag-visual-studio"],"acf":[],"blog_post_summary":"<p>This post was written by Stephen Toub, a frequent contributor to the Parallel Programming in .NET blog. He shows us how Visual Studio 2012 and an attention to detail can help you discover unnecessary allocations in your app that can prevent it from achieving higher performance. Visual Studio 2012 has a wealth of valuable functionality, [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts\/973","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/users\/11288"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/comments?post=973"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts\/973\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/media\/58792"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/media?parent=973"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/categories?post=973"},{"taxonomy":"post_tag","embeddable":tr
ue,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/tags?post=973"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}