
Port LOAD_GLOBAL from old opcache to PEP 659 adaptive interpreter. #51

@markshannon


LOAD_GLOBAL has two obvious specializations:

  • The variable is a global within the module
  • The variable is a builtin variable
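The two cases fall out of the lookup order: a global miss falls through to the builtins. A sketch of that semantics in pure Python (illustrative only; the real lookup happens in C inside the interpreter loop, and `load_global` here is an invented name):

```python
import builtins

def load_global(globals_dict, name):
    """Toy model of the LOAD_GLOBAL lookup order."""
    # First try the module's global namespace...
    try:
        return globals_dict[name]
    except KeyError:
        pass
    # ...then fall back to the builtins namespace.
    try:
        return getattr(builtins, name)
    except AttributeError:
        raise NameError(f"name {name!r} is not defined")

x = 3
assert load_global(globals(), "x") == 3      # module global
assert load_global(globals(), "len") is len  # builtin
```

A specialized LOAD_GLOBAL picks one of these two outcomes ahead of time and guards against the namespaces having changed since specialization.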

There are a number of strategies for optimizing these, listed in order of decreasing performance for the load operation (ignoring costs in memory and complexity):

  1. "Out of line" caches as used by PyPy and HotPy. These are the fastest for hot code, but are complex and can be memory hungry. We definitely want to use these or something similar for tiers 2 and 3.
  2. Use a global cache and "dictionary watchers". Cinder does this. This is a bit slower than out of line caches, but is simpler to deoptimize.
  3. Make the dictionary keys immutable, and cache the pointer to the keys and the index into the values. This was the approach used for instance attributes in HotPy. It falls in somewhere between 1 and 4 in terms of performance.
  4. Version the dictionary keys and cache the keys version and the index into the values. This is slower, but is compact and doesn't leak references (we cache 32 bit version numbers instead of 64 bit refcounted pointers).

For tier 1 execution, we should try not to leak references and minimize memory use. So I plan to use strategy 4.
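A toy model of strategy 4, with all names invented for the sketch (the real cache lives inline next to the bytecode and the keys version is maintained by the C dict implementation). The key property is that the cache holds a version number and an index, never the value itself, so updating a value neither leaks a reference nor invalidates the cache:

```python
class VersionedKeys:
    """Toy dict whose keys table carries a version number."""
    def __init__(self):
        self.version = 1
        self.keys = []    # insertion-ordered names
        self.values = []  # parallel value slots

    def store(self, name, value):
        if name in self.keys:
            # Overwriting a value keeps the keys layout, so the
            # version (and every cached index) stays valid.
            self.values[self.keys.index(name)] = value
        else:
            self.keys.append(name)
            self.values.append(value)
            self.version += 1  # keys layout changed

    def load_cached(self, name, cache):
        # Fast path: cached version still matches, so the cached
        # index is valid and we skip the lookup entirely.
        if cache.get("version") == self.version:
            return self.values[cache["index"]]
        # Slow path: look the name up and refill the cache.
        index = self.keys.index(name)
        cache["version"] = self.version
        cache["index"] = index
        return self.values[index]

d = VersionedKeys()
d.store("x", 10)
cache = {}
assert d.load_cached("x", cache) == 10  # slow path, fills cache
d.store("x", 20)                        # value change keeps keys version
assert d.load_cached("x", cache) == 20  # fast path still valid
```

In the real implementation any change to the keys layout (additions, deletions, resizes) would bump the version; the sketch only models additions.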

The current (3.10) implementation.

3.10 caches both dictionary versions and the object.
Changing the value of any global invalidates all caches in that module, including those for unrelated variables, so the cache cannot be populated too eagerly. It also leaks references, because it caches the value itself.
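A toy model of that 3.10 scheme (class and field names are invented; CPython tracks the version in C, not via `__setitem__`). Because the cache key is the whole dict's version and the cache holds the value, any write to the module dict invalidates every cached global in that module, even for unrelated names:

```python
class VersionedDict(dict):
    """Toy dict with a per-dict version, as in the 3.10-era cache."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.version = 1

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.version += 1  # every store bumps the whole-dict version

module = VersionedDict(a=1, b=2)
# Cache for "a" holds the version AND a reference to the value (the leak).
cache_a = {"version": module.version, "value": module["a"]}

module["b"] = 3  # unrelated write...
assert cache_a["version"] != module.version  # ...still kills the cache for "a"
```

Strategy 4 avoids both problems: caching an index instead of the value drops the reference, and keying on the keys version means value updates don't invalidate anything.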
