
Conversation

@q-posev
Contributor

@q-posev q-posev commented Mar 3, 2023

Description

There was a race condition in the parallel calculation of the ESP over a grid held in memory; it was fixed in PR #1900.
That fix made the computation of the ESP serial and quite slow (as mentioned in #1614, for example).
This PR brings back the OpenMP parallelization; the race condition is avoided by letting each thread hold its own copy of the ElectrostaticInt object. Thanks to @JonathonMisiewicz for the initial hint.

User API & Changelog headlines

  • Significant acceleration of the calculation of the ESP over a grid in memory

Dev notes & details

  • Created a vector of thread-specific <ElectrostaticInt> and <Matrix> objects for computing the ESP at a given grid point; a minimal sketch of the pattern follows this list
  • Added #pragma omp parallel for schedule(dynamic) to parallelize the outer loop over grid points
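
A minimal self-contained sketch of the thread-private-object pattern (the Engine struct below is a hypothetical stand-in for <ElectrostaticInt>, not the actual psi4 API): each thread indexes into a pre-built vector of objects, so no mutable state is shared across threads.

#include <cstdio>
#include <vector>
#ifdef _OPENMP
#include <omp.h>
#endif

// Stand-in for an integral engine whose internal scratch space makes a
// single shared instance a race hazard.
struct Engine {
    double scratch = 0.0;
    double esp_at(double x, double y, double z) {
        scratch = x + y + z;  // mutates internal state -> not thread-safe
        return scratch;
    }
};

int main() {
    const int npoints = 1000;
    std::vector<double> grid_esp(npoints);

    int nthread = 1;
#ifdef _OPENMP
    nthread = omp_get_max_threads();
#endif
    // One engine per thread: thread t only ever touches engines[t].
    std::vector<Engine> engines(nthread);

#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
    for (int p = 0; p < npoints; ++p) {
        int t = 0;
#ifdef _OPENMP
        t = omp_get_thread_num();
#endif
        grid_esp[p] = engines[t].esp_at(p * 0.1, 0.0, 0.0);
    }
    std::printf("esp[0] = %f\n", grid_esp[0]);
    return 0;
}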

Questions

  • While working on this part of the code I noticed that cubeprop uses a scheme different from the one used by oeprop to compute the ESP on a grid. I am not sure this is documented. Should it be?
  • Initially my goal was to parallelize the compute_esp_over_grid function, which reads the grid from the grid.dat file. As currently designed, the grid is read iteratively: one grid point is read, one ESP value is computed, and one ESP value is written to the output grid_esp.dat file. This is not parallelizable. However, if the grid points were read in batches, the computation could be parallelized; see the sketch after this list. What do you think?
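
To make the batching question concrete, here is a rough sketch of the idea (esp_at and the file layout of one "x y z" triple per line are illustrative assumptions, not the actual psi4 code): points are read into an in-memory batch, the batch is processed with the same kind of parallel loop as above, and the results are written out before the next batch is read.

#include <array>
#include <cstddef>
#include <fstream>
#include <vector>

// Hypothetical stand-in for the per-point ESP evaluation.
static double esp_at(const std::array<double, 3>& pt) {
    return pt[0] + pt[1] + pt[2];
}

static void process_batch(std::vector<std::array<double, 3>>& batch, std::ofstream& out) {
    std::vector<double> esp(batch.size());
    // Once the batch is in memory, the loop parallelizes like the in-memory
    // case; each thread would again need its own integral object.
#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
    for (long i = 0; i < static_cast<long>(batch.size()); ++i) {
        esp[i] = esp_at(batch[i]);
    }
    for (double v : esp) out << v << "\n";
    batch.clear();
}

int main() {
    std::ifstream grid("grid.dat");
    std::ofstream out("grid_esp.dat");
    const std::size_t batch_size = 1024;

    std::vector<std::array<double, 3>> batch;
    std::array<double, 3> pt;
    while (grid >> pt[0] >> pt[1] >> pt[2]) {
        batch.push_back(pt);
        if (batch.size() == batch_size) process_batch(batch, out);
    }
    if (!batch.empty()) process_batch(batch, out);  // flush the final partial batch
    return 0;
}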

Checklist

Status

  • Ready for review
  • Ready for merge

Contributor

@JonathonMisiewicz JonathonMisiewicz left a comment


One minor comment.

outfile->Printf("\n Electrostatic potential to be computed on the grid and written to grid_esp.dat\n");
}

Contributor Author


@JonathonMisiewicz I accidentally marked this as resolved, please let me know if additional changes are needed for approval of this PR.

Member

@loriab loriab left a comment


Thanks for the parallelization!

@q-posev
Contributor Author

q-posev commented Apr 1, 2023

Thanks for reviewing @loriab @JonathonMisiewicz !

@JonathonMisiewicz JonathonMisiewicz merged commit b4d7db4 into psi4:master Apr 3, 2023
}

#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic)
Member


schedule(guided) would be better
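
A rough gloss on why guided can help here (my reading, not stated in the PR): with schedule(dynamic) and its default chunk size of 1, each thread requests one iteration at a time from the runtime, while schedule(guided) hands out large chunks first and shrinks them toward the end, keeping the load balancing for uneven per-point costs with far fewer scheduling events. A minimal compilable illustration of the swap:

#include <vector>

int main() {
    const int npoints = 1000;
    std::vector<double> esp(npoints);
#ifdef _OPENMP
#pragma omp parallel for schedule(guided)
#endif
    for (int p = 0; p < npoints; ++p) {
        esp[p] = p * 0.1;  // placeholder for the per-point ESP work
    }
    return 0;
}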

@loriab loriab added this to the Psi4 1.8 milestone Apr 18, 2023
