
Compiling large projects leads to ghc getting borked #2415

@jhance

Description

So, when I compile a large project (and I mean large: ~3000 modules in the same package, most of it autogenerated code), memory usage slowly but steadily increases. I have 16GB of RAM, but after ~1000 modules I start to run out of memory, and I have to Ctrl-C stack at around 90% memory usage to avoid OOMing.

If I don't do this and let stack continue, the system OOMs and the GHC installation becomes completely borked (this might be specific to my machine and have something to do with hard-disk corruption?). Stack then refuses to do anything: since stack setup runs ghc --version to check the installation, it complains loudly when that invocation segfaults or otherwise fails. It then refuses to install a new GHC over the borked one, instead of handling the case gracefully and reinstalling for me, so I have to go into my stack cache and remove ghc-8.0.1 by hand.
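On my machine the manual cleanup looks roughly like this (the platform directory under ~/.stack/programs/ will differ per machine; x86_64-linux is an assumption here):

    # remove the borked GHC install and its marker files so that
    # `stack setup` downloads and installs a fresh copy
    rm -rf ~/.stack/programs/x86_64-linux/ghc-8.0.1*
    stack setup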

I think there are two separate issues here:

  1. There is no way to recover from ghc --version failing; stack should treat that as "GHC is not installed" and reinstall (see the sketch after this list).
  2. [and this might be cabal's fault, not stack's] There is no way to compile a large number of modules in a single batch without memory usage growing unboundedly.
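For the first issue, here is a minimal sketch of the kind of check I have in mind (ghcIsUsable is a hypothetical helper, not stack's actual code; it just treats every failure mode of ghc --version as "not installed"):

    import Control.Exception (SomeException, try)
    import System.Exit (ExitCode (..))
    import System.Process (readProcessWithExitCode)

    -- Treat any failure of `ghc --version` (non-zero exit, segfault,
    -- missing or corrupted binary) as "GHC is not installed", so the
    -- caller can fall back to a fresh install instead of giving up.
    ghcIsUsable :: FilePath -> IO Bool
    ghcIsUsable ghcPath = do
      result <- try (readProcessWithExitCode ghcPath ["--version"] "")
                  :: IO (Either SomeException (ExitCode, String, String))
      pure $ case result of
        Right (ExitSuccess, _, _) -> True
        _                         -> False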

My instinct is that GHC processes are somehow being reused by Cabal, or that something else is persisting between modules, which causes the memory usage to climb slowly.
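As a stopgap, capping GHC's own heap per invocation at least turns the slow system-wide OOM into a loud per-module failure (the 4G figure is an arbitrary choice for my 16GB box):

    # pass RTS options to the ghc processes that stack/cabal spawn,
    # so a leaking compile aborts with a heap-overflow error rather
    # than exhausting system memory
    stack build --ghc-options="+RTS -M4G -RTS"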
