Change Interpreter to share code with the JIT and use its CompAllocator #123830

Open

davidwrighton wants to merge 31 commits into dotnet:main from davidwrighton:use_bump_heap_in_interpreter
Conversation

@davidwrighton
Member

@davidwrighton davidwrighton commented Jan 31, 2026

This provides a bump allocator for temporary memory that is managed efficiently through the JIT->EE interface, along with a scheme for categorizing and measuring allocation costs. In the JIT, costs are typically grouped by the type allocated, but since so much of what the interpreter allocates is int32_t arrays, I've chosen to categorize by where in the compiler the allocation occurs.

The behavior should be roughly the same between the JIT and the interpreter, although the stats dumping is a little different. Set DOTNET_JitMemStats=1 to get a summary, and DOTNET_JitMemStats=2 to get per-method data.

With this change, we now have a jitshared directory for sharing code between the interpreter and the JIT. The long-term goal is to reduce code duplication between the JIT and interpreter for things that make no sense to build independently, such as this allocator infrastructure. In the future, I see us moving jitstd and the collections to this directory, as well as probably the method naming infrastructure and MethodSet logic.

Overall, the behavior of everything should remain the same in the interpreter. However, persistent memory allocation should now be restricted to a smattering of calls to AllocMethodData (which will be changed/replaced in a future PR). In addition, ALL temporary allocations now have some sort of categorization. A quick analysis of the statistics gathered from some simple tests indicates the interpreter compiler uses roughly 10X less memory to compile than the JIT. A summary of the results from running the interpreter test looks like the data below. Note that the biggest cost is clearly the Vars table, which makes sense since effectively every instruction creates at least one var, and often several.

Interpreter Memory Statistics:
==============================
For      2643 methods:
  count:             298017 (avg     112 per method)
  alloc size :     43780827 (avg   16564 per method)
  max alloc  :       131072

  allocateMemory   :    345702400 (avg  130799 per method)
  nraUsed    :     44015232 (avg   16653 per method)

Alloc'd bytes by kind:
                  kind |       size |     pct
  ---------------------+------------+--------
          AllocOffsets |     406464 |   0.93%
          AsyncSuspend |          0 |   0.00%
            BasicBlock |    4747104 |  10.84%
      CallDependencies |      40464 |   0.09%
              CallInfo |     412960 |   0.94%
     ConservativeRange |    2281760 |   5.21%
              DataItem |     342400 |   0.78%
             DebugOnly |     423576 |   0.97%
      DelegateCtorPeep |       7072 |   0.02%
              EHClause |      28672 |   0.07%
                    GC |    7314728 |  16.71%
               Generic |          0 |   0.00%
                ILCode |        423 |   0.00%
            InterpCode |    1351460 |   3.09%
           Instruction |    6226184 |  14.22%
           IntervalMap |          0 |   0.00%
     NativeToILMapping |     937476 |   2.14%
                 Reloc |     437224 |   1.00%
             RetryData |      42288 |   0.10%
             StackInfo |     409080 |   0.93%
              StackMap |      26280 |   0.06%
          StackMapHash |     294096 |   0.67%
           SwitchTable |       2412 |   0.01%
                   Var |   17843520 |  40.76%
      VarSizedDataItem |     205184 |   0.47%


Largest method allocation:
count:       2569, size:     510792, max =     131072
allocateMemory:     655360, nraUsed:     514424

Alloc'd bytes by kind:
                  kind |       size |     pct
  ---------------------+------------+--------
          AllocOffsets |        192 |   0.04%
          AsyncSuspend |          0 |   0.00%
            BasicBlock |      33480 |   6.55%
      CallDependencies |          0 |   0.00%
              CallInfo |       9040 |   1.77%
     ConservativeRange |      17728 |   3.47%
              DataItem |       3968 |   0.78%
             DebugOnly |        344 |   0.07%
      DelegateCtorPeep |          0 |   0.00%
              EHClause |          0 |   0.00%
                    GC |      30632 |   6.00%
               Generic |          0 |   0.00%
                ILCode |          0 |   0.00%
            InterpCode |      26236 |   5.14%
           Instruction |     110464 |  21.63%
           IntervalMap |          0 |   0.00%
     NativeToILMapping |      16548 |   3.24%
                 Reloc |          0 |   0.00%
             RetryData |         16 |   0.00%
             StackInfo |         72 |   0.01%
              StackMap |         16 |   0.00%
          StackMapHash |        344 |   0.07%
           SwitchTable |          0 |   0.00%
                   Var |     258048 |  50.52%
      VarSizedDataItem |       3664 |   0.72%


---------------------------------------------------
Distribution of total memory allocated per method (in KB):
     <=         64 ===>     569 count ( 21% of total)
     65 ..     128 ===>    1556 count ( 80% of total)
    129 ..     192 ===>     501 count ( 99% of total)
    193 ..     256 ===>       6 count ( 99% of total)
    257 ..     512 ===>      10 count ( 99% of total)
    513 ..    1024 ===>       1 count (100% of total)
   1025 ..    4096 ===>       0 count (100% of total)
   4097 ..    8192 ===>       0 count (100% of total)

---------------------------------------------------
Distribution of total memory used      per method (in KB):
     <=         16 ===>    2150 count ( 81% of total)
     17 ..      32 ===>     262 count ( 91% of total)
     33 ..      64 ===>     150 count ( 96% of total)
     65 ..     128 ===>      60 count ( 99% of total)
    129 ..     192 ===>       9 count ( 99% of total)
    193 ..     256 ===>       6 count ( 99% of total)
    257 ..     512 ===>       6 count (100% of total)
    513 ..    1024 ===>       0 count (100% of total)
   1025 ..    4096 ===>       0 count (100% of total)
   4097 ..    8192 ===>       0 count (100% of total)

Create a shared arena allocator infrastructure in src/coreclr/jitshared/ that
can be used by both the JIT and interpreter. This addresses the memory leak
issue in the interpreter where AllocMemPool() was using raw malloc() without
any mechanism to free the memory.

New shared infrastructure (jitshared/):
- allocconfig.h: IAllocatorConfig interface for host abstraction
- arenaallocator.h/.cpp: Page-based arena allocator with bulk deallocation
- compallocator.h: CompAllocator and CompIAllocator template wrappers
- memstats.h: Shared memory statistics tracking infrastructure

Interpreter changes:
- Add ArenaAllocator member to InterpCompiler
- AllocMemPool() now uses arena allocator instead of malloc()
- Add destructor to InterpCompiler that calls arena.destroy()
- Add InterpMemKind enum for future memory profiling support
- Add InterpAllocatorConfig implementing IAllocatorConfig

JIT preparation (for future integration):
- Add JitAllocatorConfig implementing IAllocatorConfig using g_jitHost

The interpreter now properly frees all compilation-phase memory when the
InterpCompiler is destroyed, fixing the long-standing FIXME comment about
memory leaks.

Update interpreter compiler to use CompAllocator with InterpMemKind
categories instead of raw AllocMemPool() calls, enabling memory
profiling when MEASURE_INTERP_MEM_ALLOC is defined.

- Add InterpAllocator typedef and getAllocator(InterpMemKind) method
- Update MemPoolAllocator to store InterpMemKind for TArray usage
- Add operator new overloads for InterpAllocator
- Categorize allocations: BasicBlock, Instruction, StackInfo, CallInfo,
  Reloc, SwitchTable, IntervalMap, ILCode, Generic
- Create jitshared.h with MEASURE_MEM_ALLOC define (1 in DEBUG, 0 in Release)
- Update CompAllocator template to accept MemStats* for tracking allocations
- Add memory statistics infrastructure to InterpCompiler:
  - Static aggregate stats (s_aggStats, s_maxStats) with minipal_mutex
  - initMemStats(), finishMemStats(), dumpAggregateMemStats(), dumpMaxMemStats()
- Add DOTNET_JitMemStats config value to interpconfigvalues.h
- Update ProcessShutdownWork to dump stats when DOTNET_JitMemStats=1
- Link interpreter against minipal for mutex support

Usage: DOTNET_JitMemStats=1 DOTNET_InterpMode=3 corerun app.dll

Change templates to take a single traits struct parameter instead of
separate TMemKind and MemKindCount parameters. The traits struct provides:
- MemKind: The enum type for memory kinds
- Count: A static constexpr int giving the number of enum values
- Names: A static const char* const[] array of names for each kind

This fixes the issue where the largest method stats didn't print the
kind breakdown because m_memKindNames was NULL (the per-method MemStats
objects weren't initialized with the names array).

With traits, the names are accessed via TMemKindTraits::Names directly,
so no initialization is needed and all MemStats objects can print by kind.

- Add histogram.h and histogram.cpp to jitshared directory
- Histogram uses std::atomic for thread-safe counting
- Interpreter records memory allocation and usage distributions
- Display histograms after aggregate and max stats in output
- Add DOTNET_InterpDumpMemStats config option (default 0)
- When InterpDump is active and InterpDumpMemStats=1, print per-method
  memory statistics after BuildEHInfo completes (includes all allocations)
- When InterpDump is active but InterpDumpMemStats is not set, print a
  hint message about how to enable per-method stats
- Create jitshared/dumpable.h with Dumpable base class
- Update jitshared/histogram.h to inherit from Dumpable
- Update JIT CMakeLists.txt to include jitshared directory and histogram.cpp
- Remove JIT's own Histogram and Dumpable classes from compiler.hpp
- Remove JIT's Histogram implementation from utils.cpp
- Update dump() methods to be const (Counter, NodeCounts, DumpOnShutdown)
- Use std::atomic for thread-safe histogram counting instead of InterlockedAdd
- Template ArenaAllocator<TMemKindTraits> to include MemStats as a member
- Add MemStatsAllocator helper struct for tracking allocations by kind
- Add outOfMemory() method to IAllocatorConfig interface
- Update CompAllocator to get stats from ArenaAllocator via getMemStatsAllocator()
- Update interpreter to use InterpArenaAllocator typedef
- Remove separate m_stats member from InterpCompiler (now in arena)
- Add JIT's jitallocconfig.cpp to build, with global g_jitAllocatorConfig
- ArenaAllocator implementation moved to header (template requirement)

This prepares the infrastructure for the JIT to also use the shared
templated ArenaAllocator in a future change.
Copilot AI review requested due to automatic review settings January 31, 2026 01:41
@github-actions github-actions bot added the area-CodeGen-coreclr CLR JIT compiler in src/coreclr/src/jit and related components such as SuperPMI label Jan 31, 2026
@davidwrighton davidwrighton changed the title Change Interpreter to share code with the JIT and use its CompMemAllocator Change Interpreter to share code with the JIT and use its CompAllocator Jan 31, 2026
@dotnet-policy-service
Contributor

Tagging subscribers to this area: @JulieLeeMSFT, @dotnet/jit-contrib
See info in area-owners.md if you want to be subscribed.

Contributor

Copilot AI left a comment

Pull request overview

This PR introduces a shared allocator/statistics infrastructure used by both the JIT and the interpreter, so the interpreter can allocate compilation-temporary memory via the same arena-style patterns as the JIT and optionally report categorized allocation statistics.

Changes:

  • Added a new jitshared/ component containing a templated arena allocator, categorized allocation wrappers, histogram support, and memory stats helpers.
  • Refactored the JIT to use the shared allocator templates and a new JitAllocatorConfig abstraction for host-backed slab allocation.
  • Updated the interpreter to allocate from an arena (instead of leaking malloc-backed pools), categorize allocations by new InterpMemKind, and optionally emit per-method/aggregate mem stats.

Reviewed changes

Copilot reviewed 26 out of 26 changed files in this pull request and generated 7 comments.

File Description
src/coreclr/jitshared/memstats.h New templated memory allocation statistics types used by JIT/interpreter.
src/coreclr/jitshared/jitshared.h Shared macro/config definitions for allocator/stat collection.
src/coreclr/jitshared/histogram.h New shared histogram API for distribution reporting.
src/coreclr/jitshared/histogram.cpp Shared histogram implementation (now atomic-based).
src/coreclr/jitshared/dumpable.h Shared “dumpable” base interface used by stats/reporting types.
src/coreclr/jitshared/compallocator.h New shared categorized allocator wrapper and IAllocator adapter.
src/coreclr/jitshared/arenaallocator.h New shared templated arena allocator implementation.
src/coreclr/jitshared/arenaallocator.cpp Stub translation unit retained for build compatibility.
src/coreclr/jitshared/allocconfig.h New host abstraction interface for arena page allocation and OOM.
src/coreclr/jitshared/CMakeLists.txt Adds CMake project structure for the new jitshared area.
src/coreclr/jit/utils.cpp Moves histogram/dumpable usage to shared implementation and adjusts constness.
src/coreclr/jit/jitallocconfig.h Adds JIT-specific IAllocatorConfig implementation declaration.
src/coreclr/jit/jitallocconfig.cpp Implements JIT host-backed allocation + debug behaviors for arena pages.
src/coreclr/jit/compiler.hpp Switches to shared Dumpable/Histogram headers and const dump API.
src/coreclr/jit/compiler.cpp Updates JIT mem-stats plumbing and arena construction to use new config.
src/coreclr/jit/alloc.h Refactors arena/comp allocator types into template instantiations + new stats manager.
src/coreclr/jit/alloc.cpp Provides JIT mem-kind names and new aggregate/max stats implementation.
src/coreclr/jit/CMakeLists.txt Wires jitshared sources/includes and adds new JIT allocator config source.
src/coreclr/interpreter/interpmemkind.h Adds interpreter memory kind taxonomy via X-macro list.
src/coreclr/interpreter/interpconfigvalues.h Adds new interpreter config knobs for dumping memory stats.
src/coreclr/interpreter/interpallocconfig.h Adds interpreter IAllocatorConfig implementation for arena pages.
src/coreclr/interpreter/eeinterp.cpp Initializes/stops interpreter mem-stats reporting at startup/shutdown.
src/coreclr/interpreter/compiler.h Integrates shared allocators into interpreter compiler and introduces categorized alloc accessors.
src/coreclr/interpreter/compiler.cpp Migrates interpreter allocations to arena + implements mem-stats aggregation/histograms.
src/coreclr/interpreter/CMakeLists.txt Links interpreter against needed shared sources and minipal for mutex support.
src/coreclr/CMakeLists.txt Adds jitshared subdirectory when interpreter feature is enabled.


- Add outOfMemory() to memory traits and call it on allocation overflow
  instead of returning nullptr in CompAllocatorT and CompIAllocatorT
- Remove contract.h/safemath.h includes from iallocator.h to allow use
  from interpreter without pulling in VM headers
- Add contract.h include to simplerhash.h and gcinfo/arraylist.cpp
- Fix missing semicolon in interpconfigvalues.h
- Include jitshared.h in jit.h and remove duplicate MEASURE_MEM_ALLOC
- Fix Histogram bounds check to use HISTOGRAM_MAX_SIZE_COUNT - 1
- Add safety check for mutex initialization in finishMemStats()

Consolidate the allocator configuration by moving all IAllocatorConfig
callbacks to the traits template parameter (TMemKindTraits). This removes
the need for both a runtime config object and a traits type.

Changes:
- Add bypassHostAllocator, shouldInjectFault, allocateHostMemory,
  freeHostMemory, fillWithUninitializedPattern as static methods on
  JitMemKindTraits and InterpMemKindTraits
- Update ArenaAllocatorT to call TMemKindTraits:: methods instead of
  m_config-> methods
- Remove IAllocatorConfig* m_config member from ArenaAllocatorT
- Change ArenaAllocatorT constructor to be parameterless
- Delete allocconfig.h, jitallocconfig.h/cpp, interpallocconfig.h
- Update jitshared CMakeLists.txt to create OBJECT library
- Update JIT CMakeLists.txt to link against jitshared objects instead of
  compiling sources directly
- Update interpreter CMakeLists.txt to link against jitshared objects
- Update coreclr_static to include jitshared objects
- Move histogram.h include outside conditional block in compiler.hpp
  since HISTOGRAM_MAX_SIZE_COUNT is needed unconditionally
Copilot AI review requested due to automatic review settings February 2, 2026 23:49
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 27 out of 27 changed files in this pull request and generated 7 comments.

@AndyAyersMS
Member

fyi @dotnet/jit-contrib

Add ownership handling for the ICorDebugInfo arrays which are allocated via new[]
…rrect, since it's basically using malloc, but at least all locations are now reliably allocating with AllocMethodData)
Copilot AI review requested due to automatic review settings February 4, 2026 21:15
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 35 out of 35 changed files in this pull request and generated no new comments.

Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 35 out of 35 changed files in this pull request and generated 2 comments.

Comment on lines 59 to 74
dn_simdhash_ptr_ptr_holder(dn_simdhash_ptr_ptr_holder&& other)
{
Value = other.Value;
ValueDestroyCallback = other.ValueDestroyCallback;
ArenaAllocator = other.ArenaAllocator;
other.Value = nullptr;
other.ValueDestroyCallback = nullptr;
}
dn_simdhash_ptr_ptr_holder& operator=(dn_simdhash_ptr_ptr_holder&& other)
{
if (this != &other)
{
free_hash_and_values();
Value = other.Value;
ValueDestroyCallback = other.ValueDestroyCallback;
ArenaAllocator = other.ArenaAllocator;
other.Value = nullptr;
other.ValueDestroyCallback = nullptr;
}
return *this;
}
Copilot AI Feb 4, 2026

In the move constructor and move assignment operator, consider also setting other.ArenaAllocator = nullptr after transferring it. This follows the same pattern as other.Value = nullptr and prevents potential issues if the moved-from object is accessed inadvertently. The same applies to the dn_simdhash_u32_ptr_holder class below (lines 107-122).

Comment on lines 59 to 64
dn_simdhash_ptr_ptr_holder(dn_simdhash_ptr_ptr_holder&& other)
{
Value = other.Value;
ValueDestroyCallback = other.ValueDestroyCallback;
ArenaAllocator = other.ArenaAllocator;
other.Value = nullptr;
other.ValueDestroyCallback = nullptr;
}
Copilot AI Feb 4, 2026

The move assignment operator in dn_simdhash_ptr_ptr_holder and dn_simdhash_u32_ptr_holder should also null out other.ArenaAllocator after transferring it, similar to how other.Value is nulled. This prevents potential issues if the moved-from object is accessed inadvertently.

Member

@AndyAyersMS AndyAyersMS left a comment

JIT changes LGTM.

@jakobbotsch maybe you can also take a look?

@jakobbotsch
Member

The JIT throughput regression looks significant:
[image: throughput diff results]

Not sure if those regressions are real or not, IIRC we had some issues around whole program optimization for the tpdiff (I don't remember if we ever resolved that).

@davidwrighton
Member Author

Looking at the disassembly, it appears that something involving templates has made this all less optimized. I've put together some forceinlines to see if that makes it better.

Copilot AI review requested due to automatic review settings February 23, 2026 22:50
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 32 out of 32 changed files in this pull request and generated 2 comments.


static TSList* Pop(TSList *head)
{
    TSList *next = head->pNext;
Member

@AaronRobinsonMSFT AaronRobinsonMSFT Feb 24, 2026

This represents a slight behavioral change. It probably doesn't matter, but destructors are important here, so I'd suggest we just call it: it should be a no-op for now, and we avoid sadness if someone relies on it in the future.

if (head != NULL)
    head->~TSList();

Ubuntu and others added 3 commits February 24, 2026 19:21
…to inline, and causes significant code bloat (and the default clang optimization rules will inline)
- In practice this probably doesn't matter as the TSList is only used once, and doesn't need a destructor, but we might as well have equivalent semantics
Member

@AaronRobinsonMSFT AaronRobinsonMSFT left a comment

Part way through.


const char* CorInfoHelperToName(CorInfoHelpFunc helper);


Member

Suggested change

printf("Allocations for %s\n", m_methodName.GetUnderlyingArray());
m_arenaAllocator->getStats().Print(stdout);
}
#endif
Member

Suggested change
#endif
#endif // MEASURE_MEM_ALLOC



void* MemPoolAllocator::Alloc(size_t sz) const { return m_compiler->AllocMemPool(sz); }
return m_compiler->getAllocator(m_memKind).allocate<char>(sz);
Member

Suggested change
return m_compiler->getAllocator(m_memKind).allocate<char>(sz);
return m_compiler->getAllocator(m_memKind).allocate<int8_t>(sz);

I'd prefer to see int8_t or even BYTE for bytes rather than char.

int insSize = sizeof(InterpInst) + sizeof(uint32_t) * dataLen;
InterpInst *ins = (InterpInst*)AllocMemPool(insSize);
memset(ins, 0, insSize);
InterpInst *ins = (InterpInst*)getAllocator(IMK_Instruction).allocateZeroed<char>(insSize);
Member

Why can't we use the pool allocator here? Is this because the memory is long lived as opposed to scratch?

{
bb->stackHeight = stackHeight;
bb->pStackState = (StackInfo*)AllocMemPool(sizeof(StackInfo) * stackHeight);
bb->pStackState = new (getAllocator(IMK_StackInfo)) StackInfo[stackHeight];
Member

Can this follow the pattern on 314 and use the allocateZeroed<StackInfo>()?

if (m_stackCapacity < 4)
m_stackCapacity = 4;

m_pStackBase = new (getAllocator(IMK_StackInfo)) StackInfo[m_stackCapacity];
Member

Does this mean extra capacity is now in an uninitialized state? Does StackInfo have a constructor to properly initialize it?

if (m_numILVars > 0)
{
// This will eventually be freed by the VM, using freeArray.
ICorDebugInfo::NativeVarInfo* eeVars = (ICorDebugInfo::NativeVarInfo*)m_compHnd->allocateArray(m_numILVars * sizeof(ICorDebugInfo::NativeVarInfo));
Member

This function name is suspect. I would assume the argument is a count, but it looks like it is bytes; is that really correct?

Comment on lines +1924 to +1926
// Clean up allocated memory if non-null
if (m_pILToNativeMap != nullptr)
m_compHnd->freeArray(m_pILToNativeMap);
Member

It looks like freeArray and company handle null just fine. I would just call it.


size_t size = sizeof(InterpIntervalMapEntry) * intervalCount;
*ppIntervalMap = (InterpIntervalMapEntry*)AllocMemPool0(size);
*ppIntervalMap = getAllocator(IMK_IntervalMap).allocate<InterpIntervalMapEntry>(intervalCount);
Member

Can we call allocateZeroed here?

int32_t dVar = m_pStackPointer[-1].var;

int *callArgs = (int*) AllocMemPool((2 + 1) * sizeof(int));
int *callArgs = getAllocator(IMK_CallInfo).allocate<int>(2 + 1);
Member

Suggested change
int *callArgs = getAllocator(IMK_CallInfo).allocate<int>(2 + 1);
int *callArgs = getAllocator(IMK_CallInfo).allocate<int32_t>(2 + 1);
