Description
I recently analyzed a few memory dumps of a .NET 5.0 application that makes heavy use of MemoryCache. Two things stood out that I could not really explain:
MemoryCacheEntryOptions
For each new cache entry a new MemoryCacheEntryOptions object was created, but at least for the options used by the app it was not necessary to create a new object per entry, so the allocation was removed and replaced by setting the appropriate values directly on the cache entry (or by using the extension methods). Maybe it would be useful to enhance the docs to point to the extension methods. The examples in https://docs.microsoft.com/en-us/aspnet/core/performance/caching/memory?view=aspnetcore-5.0 create MemoryCacheEntryOptions most of the time instead of using the extension methods from https://github.com/dotnet/runtime/blob/a2f81e7258d2963af52d4441e891025aa46a1fc3/src/libraries/Microsoft.Extensions.Caching.Abstractions/src/MemoryCacheExtensions.cs, which I guess is why a new options object was created each time.
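For illustration, here is a minimal sketch of the two approaches (the class, method, key, and value names are placeholders, not code from the actual app; the second method corresponds roughly to setting the values directly on the entry):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

static class CacheUsageSketch
{
    // Allocates a fresh MemoryCacheEntryOptions object for every insert.
    public static void SetWithOptions(IMemoryCache cache, string key, object value)
    {
        var options = new MemoryCacheEntryOptions()
            .SetSize(1)
            .SetSlidingExpiration(TimeSpan.FromMinutes(1));
        cache.Set(key, value, options);
    }

    // Avoids the per-entry options object by setting the values directly on the entry.
    // Disposing the entry (here via the using declaration) commits it to the cache.
    public static void SetDirectlyOnEntry(IMemoryCache cache, string key, object value)
    {
        using ICacheEntry entry = cache.CreateEntry(key);
        entry.Size = 1;
        entry.SlidingExpiration = TimeSpan.FromMinutes(1);
        entry.Value = value;
    }
}
```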
Memory Usage of AsyncLocal in CacheEntry
This is the more interesting one: the memory usage of System.Threading.AsyncLocalValueMap+ThreeElementAsyncLocalValueMap for the ExecutionContext was actually the third largest allocation, behind only string and byte[]. See the screenshot.
(This is from the real dump; it is not just one request but a batch of them, probably around 400 or 500.)
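As an aside, here is a minimal illustration of why these allocations show up at all; this is just a sketch of how AsyncLocal behaves, not the MemoryCache code itself:

```csharp
using System.Threading;

class AsyncLocalAllocationSketch
{
    private static readonly AsyncLocal<object> Scope = new AsyncLocal<object>();

    static void Main()
    {
        for (int i = 0; i < 1_000; i++)
        {
            // Each write to AsyncLocal<T>.Value produces a new immutable
            // ExecutionContext with a new AsyncLocalValueMap behind it.
            Scope.Value = new object();
            // Clearing the value writes the ExecutionContext again.
            Scope.Value = null;
        }
    }
}
```

So anything that writes an AsyncLocal once or twice per cache insert will allocate on every insert.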
I wrote a short repro that shows the problem:
- Run the repro.
- Record the memory usage (I used JetBrains dotMemory) after opening the browser at e.g. http://localhost:5000.
- Look at the memory dump; it should look like the example below (it depends a bit on the CPU count).
```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace CacheEntryRepro
{
    class Program
    {
        static async Task Main(string[] args)
        {
            var builder = Host.CreateDefaultBuilder().ConfigureWebHostDefaults(whb =>
                whb.ConfigureServices(s => s.AddMemoryCache())
                   .Configure(app =>
                   {
                       app.UseRouting();
                       app.UseEndpoints(erb => { erb.MapGet("/", hc => RequestDelegate(hc)); });
                   }));

            await builder.Build().RunAsync();
        }

        private static async Task RequestDelegate(HttpContext context)
        {
            var cache = context.RequestServices.GetRequiredService<IMemoryCache>();
            int processors = Environment.ProcessorCount;
            var tasks = new Task[processors];

            // Create a large number of cache entries, one batch per logical processor at a time.
            for (int i = 0; i < 1000; i++)
            {
                for (int j = 0; j < processors; j++)
                {
                    tasks[j] = AddCacheEntry(cache);
                }
                await Task.WhenAll(tasks);
            }

            context.Response.StatusCode = StatusCodes.Status200OK;
            await context.Response.StartAsync();
            await context.Response.WriteAsync("Done");
            await context.Response.CompleteAsync();
        }

        private static Task AddCacheEntry(IMemoryCache cache)
        {
            var guid = Guid.NewGuid();
            // Disposing the entry (via "using") commits it to the cache.
            using var entry = cache.CreateEntry(guid.ToString());
            entry.Size = 1;
            entry.SlidingExpiration = TimeSpan.FromMinutes(1);
            entry.Value = Guid.NewGuid();
            return Task.CompletedTask;
        }
    }
}
```
This is partially related to PR #45280 from @adamsitnik, as he specifically notes that AsyncLocal is expensive and he optimized some of the related code paths. I have not tested whether his PRs help with the problem above; he suggested on Twitter that I should create an issue with a repro.