Programmable Patchy Particles for Materials Design

Ella M. King a,1, Chrisy Xiyu Du b,c,1, Qian-Ze Zhu b, Samuel S. Schoenholz d,e, and Michael P. Brenner b,d

a Department of Physics, Harvard University, Cambridge MA 02139, USA; b School of Engineering and Applied Sciences, Harvard University, Cambridge MA 02139, USA; c Mechanical Engineering, University of Hawai‘i at Mānoa, Honolulu HI 96822, USA; d Google Research, Mountain View CA 94043; e OpenAI
Direct design of complex functional materials would revolutionize technologies ranging from printable organs to novel clean energy devices. However, even incremental steps towards designing functional materials have proven challenging. If the material is constructed from highly complex components, the design space of materials properties rapidly becomes too computationally expensive to search. On the other hand, very simple components such as uniform spherical particles are not powerful enough to capture rich functional behavior. Here, we introduce a differentiable materials design model with components that are simple enough to design yet powerful enough to capture complex materials properties: rigid bodies composed of spherical particles with directional interactions (patchy particles). We showcase the method with self-assembly designs ranging from open lattices to self-limiting clusters, all of which are notoriously challenging design goals to achieve using purely isotropic particles. By directly optimizing over the location and interaction of the patches on patchy particles using gradient descent, we dramatically reduce the computation time for finding the optimal building blocks.

Significance Statement

The development of new materials has been a transformative force in shaping the modern world. The traditional approach to creating new functional materials relies on a combination of hard-won intuition and arduous labor to sift through the innumerable possibilities. However, as the functions we need grow increasingly complex, finding that rare substance amid all possible materials becomes increasingly difficult. Instead, we will need methods for the direct design of novel materials. Existing methods for inverse materials design are limited to extremely simple components. Here, we dramatically increase the complexity of the designable components, broadening the scope of materials we can design.

Please provide details of author contributions here.
The authors declare no conflict of interest.
1 Ella M. King contributed equally to this work with Chrisy Xiyu Du.
2 To whom correspondence should be addressed. E-mail: [email protected]

Combining the principles that enable machine learning methods to efficiently navigate large parameter spaces with physics- and materials science-informed models, we introduce a system that is both complex enough to capture desirable functional behavior and amenable to inverse design.

One major line of inquiry towards designing complex functional materials focuses on materials with uniform spherical particles as components. While this approach has led to promising advances (8–12), the absence of directional interactions significantly limits the design space. In materials with spherical components, designed interactions either are too complicated to be experimentally realizable (8) or necessitate that every particle interact uniquely with every other particle in the system, which cannot be physically instantiated at scale (13). Conversely, a rich literature of work (13–16) has been conducted by running forward simulations of systems with highly complex components, such as proteins. The complexity of the components means that individual simulations are extremely computationally intensive, making inverse design approaches untenable. Moreover, the design space for these systems is too vast to search effectively.

Breaking the rotational symmetry of the component particles vastly increases the potential for materials designability without relying on a large number of particle types. Extensive research has been done on anisotropic particles, ranging from mapping out phase diagrams for hard particles with nontrivial shapes (17) to designing patchy particles that self-assemble open lattices such as cubic diamond structures (18, 19). The design space for anisotropic particles is immense (20), making brute-force approaches to searching for desirable properties unsustainable for modern materials design. However, the dramatic advances brought about by the machine learning community have recently made it possible to search vast design landscapes for regions with desirable behavior.

We introduce the first inverse design method for anisotropic particles that is both physics-based and fully differentiable. Our work parallels previous work in that we design interactions that lead to target self-assembled structures (8–12, 21). However, many of these approaches lead to highly complex interactions (8, 13, 21) which are inaccessible experimentally. In contrast, we move the complexity from the interaction potentials into the particle geometry, which admits natural low-dimensional representations.
Here, we introduce a framework that capitalizes on the advances wrought by machine learning to broadly enable inverse design of complex, anisotropic systems. This framework is end-to-end differentiable and system-independent within JAX-MD (28), a Molecular Dynamics (MD) engine with automatic differentiation (AD) (29) enabled. AD is the workhorse underlying the explosion in productivity in the machine learning community in recent decades. We introduce the ability to directly optimize over particle geometry and anisotropic interactions. We demonstrate the method specifically on patchy particles, a model system that is simple enough to design, yet rich enough to capture features such as directional interactions. In this paper, we first discuss the implementation of the method, and then we demonstrate the versatility of the platform by showcasing three examples: (i) stabilization of a Kagome lattice, (ii) self-limiting 2D ring assembly, and (iii) stabilization of 3D finite clusters. We include a Python notebook (30) that demonstrates how to perform the optimization for the 3D finite cluster case. Direct gradient descent of building block properties will enable researchers to efficiently design novel materials with targeted properties and functions.

Fig. 1. A. Optimizing patchy particle interactions. The optimization parameters (shown on the top) are the patch locations and the interaction matrix of patch strengths. These parameters are used to run a forward simulation, and a loss function is computed. We then take the gradient of the loss function with respect to the optimization parameters and update the parameters accordingly. The loss functions vary for different optimization targets. B. Gradient of the loss function with respect to the parameters for optimization. The pseudocode demonstrates how the gradient is computed based on the parameters for optimization. C. Extrapolation to more performant MD engines. We test the optimal parameters in HOOMD-blue, showing both that the optimal parameters are valid across different MD engines and that longer simulations with more particles can be tested rapidly.

Method

MD simulations are a powerful tool for understanding micro- and nanoscale systems. When combined with inverse design methods, MD simulations can be used to design highly complex structures and materials properties, ranging from intricate open lattices to self-limiting clusters. To make such simulations differentiable for anisotropic building blocks, we extend the framework to account for rotational degrees of freedom, following the algorithm introduced in (36). The technical details of threading the gradients through simulations of anisotropic particles, in addition to a detailed discussion of other features of the implementation, can be found in the Supplementary Information (SI).

Our prototypical optimization procedure for inverse design with MD begins with specifying all the variables needed for a standard forward simulation. These variables include the system information, such as the number of particles, pair potential, and box size, as well as the integrator information, such as the integrator type, temperature, and step size. While any of these variables can be optimized, we focus on examples where we optimize over only the pair potentials and the particle geometries. We parameterize the pair potential by a matrix of interaction strengths, and we parameterize the particle geometries by the locations of patches on central particles (Fig. 1A).

To compute gradients of the patch locations (θi, ϕi) and interaction strengths (Eij), we first define a forward MD simulation function (Fig. 1B) that takes (θi, ϕi, Eij) as input parameters and returns the position data of every particle in the simulation. We then use the position data as input to our loss function, where we define criteria to evaluate whether the system is close to our targeted behavior. Lastly, we take the gradient of the loss function with respect to (θi, ϕi, Eij) and update (θi, ϕi, Eij) based on the gradient values for the next iteration. We use the Adam optimizer to update our parameters based on the gradient computation.
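To make the loop in Fig. 1B concrete, a minimal sketch of its structure in JAX-style Python is shown below. It is an illustration rather than the code used in this work: the simulation function, the loss, and the parameter shapes (run_simulation, loss_fn, params) are placeholders standing in for the patchy-particle simulation and the target-specific losses described in the Results; the Adam update uses optax, one standard JAX implementation.

```python
import jax
import jax.numpy as jnp
import optax  # Adam optimizer for the parameter updates

def run_simulation(params, key):
    # Placeholder for the differentiable forward MD simulation: in the actual
    # workflow this integrates the patchy-particle dynamics in JAX-MD and
    # returns the final particle positions.
    theta, phi, E = params
    noise = 0.05 * jax.random.normal(key, (100, 2))
    return theta * jnp.ones((100, 2)) + noise

def loss_fn(params, key):
    # Any differentiable function of the simulated positions can serve as a
    # loss; here a stand-in target structure is used.
    positions = run_simulation(params, key)
    target = jnp.ones((100, 2))
    return jnp.mean((positions - target) ** 2)

# Optimization parameters: patch angles (theta, phi) and interaction matrix E.
params = (jnp.array(0.5), jnp.array(1.0), jnp.ones((2, 2)))
optimizer = optax.adam(learning_rate=0.1)
opt_state = optimizer.init(params)
key = jax.random.PRNGKey(0)

for step in range(100):
    key, subkey = jax.random.split(key)
    # Differentiate the loss through the entire forward simulation.
    value, grads = jax.value_and_grad(loss_fn)(params, subkey)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
```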
As a final step, shown in Fig. 1C, we use our optimal parameters to run forward simulations in a more performant MD engine. These forward simulations are run for longer timescales and with more particles, demonstrating the validity of our optimal parameters across different MD engines and beyond the time- and length-scales of the simulations we optimize over. This iterative procedure shows one possible means of bringing inverse design into traditional MD simulation.

We note that this optimization procedure does not rely on any black-box methods: all the functions we differentiate are physics-based simulations. Because every step of the optimization, from MD simulation to loss function evaluation, is fully differentiable, we are able to compute gradients of arbitrary parameters.
Fig. 2. Stabilization of the kagome lattice. A The ψ6 order parameter for designed and randomly generated building blocks; the optimization is initialized with random parameters. B Simulations run in HOOMD-blue demonstrating that the optimal parameters (left) stabilize the kagome lattice, while random parameters (right) cause the lattice to melt.

Results

We demonstrate the optimization and design of patchy particles. These particles consist of a central particle plus a set of patches rigidly attached to the central particle (see Fig. 1A). The central particle describes the general shape of the patchy particle, and the set of patches governs the directional interactions of the patchy particle. Because we have enabled end-to-end differentiable simulations of anisotropic particles, we can directly optimize over the locations of the patches and the interaction matrices between patches. As a result, the rich design space available to patchy particles becomes feasible to search.

Here, we showcase three examples: (i) stabilization of a Kagome lattice, (ii) self-limiting 2D ring assembly, and (iii) stabilization of 3D finite clusters. Designing parameters for any of these three examples using isotropic particles is highly challenging and requires complex pair potentials. By introducing anisotropy, we demonstrate robust design of each of these systems with simple interaction potentials.
In each example discussed, the patches interact via a Morse potential, and the central particles interact via a soft sphere potential or WCA potential. The three potentials are as follows:

Morse potential:

U(r) = ε (1 − e^{−α(r − r_0)})^2    [1]

Soft sphere potential:

U(r) = ε (σ/r)^α    [2]

WCA potential:

U(r) = ε [(σ/r)^{12} − (σ/r)^{6}] + ε    [3]
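Written as ordinary differentiable functions, Eqs. 1–3 take only a few lines of JAX. The sketch below uses illustrative default parameters (not the values used in the optimizations) and omits any truncation or cutoff.

```python
import jax
import jax.numpy as jnp

def morse(r, eps=1.0, alpha=5.0, r0=0.0):
    # Morse potential, Eq. (1): patch-patch attraction.
    return eps * (1.0 - jnp.exp(-alpha * (r - r0))) ** 2

def soft_sphere(r, eps=1.0, sigma=1.0, alpha=2.0):
    # Soft sphere potential, Eq. (2): repulsion between central particles.
    return eps * (sigma / r) ** alpha

def wca(r, eps=1.0, sigma=1.0):
    # WCA-type potential, Eq. (3): repulsive core used for the 3D clusters.
    sr6 = (sigma / r) ** 6
    return eps * (sr6 ** 2 - sr6) + eps

# Because the potentials are plain JAX functions, forces follow by autodiff.
dU_dr = jax.grad(morse)
print(morse(1.2), -dU_dr(1.2))
```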
Kagome Lattice. Our patchy particle model for the kagome lattice consists of a central particle and six patches that are rigidly attached to the central particle (Fig. 2B). The central particles interact with one another via a soft sphere potential, and the patches each interact via a Morse potential. We keep the Morse interactions fixed and optimize over the locations of the patches on the central particle.

The optimization procedure follows the structure outlined in the Method section. We initialize the system in a Kagome lattice configuration with the orientations of the particles randomized. We then run 200 replicate MD simulations. The simulations are run for 40,000 steps with a timestep (dt) of 1e-3 and 100 particles, at a temperature (kT) of 0.1 and an area fraction of 0.3 with periodic boundary conditions. We then measure the average loss function across the replicates. Because we initialize in the lattice configuration, the loss function for the optimization is simply the distance of the particles from their initial positions: if the Kagome lattice is stable, the lattice will not melt and the positions of the particles will remain constant, up to vibrational motion. We compute the gradient of the average loss function with respect to the positions of the patches on the central particle. Lastly, we update the positions of the patches based on the value of the gradient using the Adam optimizer available in JAX. We repeat this procedure for 100 optimization steps with a learning rate of 0.1; then, starting from the optimal value, we run another 100 optimization steps with a learning rate of 0.05 and finally another 100 steps with a learning rate of 0.01.

At the end of the JAX-based optimization procedure, we take the optimal parameters and run a longer simulation testing their stability using HOOMD-blue (34, 42, 43). Fig. 2A shows the ψ6 order parameter measured using Freud (44) for designed building blocks and randomly generated ones. We can see clearly that the designed building blocks stabilize the Kagome lattice and that the parameters are transferable across different MD engines.
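A minimal sketch of this stability loss, averaged over replicates with jax.vmap, is shown below. The forward simulation is a placeholder (in practice it is the differentiable JAX-MD trajectory described above), and the array shapes are illustrative.

```python
import jax
import jax.numpy as jnp

def simulate_kagome(patch_positions, lattice_sites, key):
    # Placeholder forward simulation: perturbs the lattice so the example runs.
    # In practice this is a differentiable JAX-MD trajectory of 40,000 steps.
    drift = 0.01 * jnp.mean(patch_positions)
    noise = 0.05 * jax.random.normal(key, lattice_sites.shape)
    return lattice_sites + drift + noise

def stability_loss(patch_positions, lattice_sites, key):
    # Loss: how far particles have moved from their initial lattice sites.
    final_positions = simulate_kagome(patch_positions, lattice_sites, key)
    return jnp.mean(jnp.sum((final_positions - lattice_sites) ** 2, axis=-1))

def averaged_loss(patch_positions, lattice_sites, key, n_replicates=200):
    # Average the loss over independent replicate simulations.
    keys = jax.random.split(key, n_replicates)
    per_replicate = jax.vmap(stability_loss, in_axes=(None, None, 0))(
        patch_positions, lattice_sites, keys)
    return jnp.mean(per_replicate)

patch_positions = jnp.zeros((6, 2))   # placeholder patch coordinates
lattice_sites = jnp.zeros((100, 2))   # placeholder kagome lattice positions
grads = jax.grad(averaged_loss)(patch_positions, lattice_sites, jax.random.PRNGKey(0))
```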
Self-Limiting Rings. Despite the fact that natural systems rely on self-limited assembly, synthetically developing self-limiting structures remains a significant challenge (5). Using our model, we design self-limited rings of varying sizes that self-assemble in a bath of components.

To design self-limiting rings, we use a patchy particle model that consists of a central particle and two patches. The central particles interact via a soft sphere potential (Eq. 2), and the patches interact via a Morse potential (Eq. 1). We optimize over both the locations of the patches and the strengths of the patch interactions. Each patch is allowed to vary independently. We initialize the patch positions and strengths randomly.

The optimization procedure again follows the structure listed in the Method section. Here, however, because we are interested in self-assembly rather than stabilization, we initialize the simulation with particles at random initial positions and orientations.

To compute the loss for this calculation, we take the distance between each particle and its M nearest neighbors (M = 3 for square rings because squares consist of 4 particles, and so on) and compare those distances to a reference structure. The reference structure is a perfectly assembled ring. We optimize over both the positions and the strengths of the interactions of the patches. We compute the average loss over 128 replicate simulations that each run for 40,000 steps with a dt of 1e-3, at a temperature (kT) of 1.0 and an area fraction of 0.2. To reduce computational cost, we optimize over only the last 1,000 steps of the simulation. Despite this approximation, the gradients are meaningful enough to converge to optimal parameters.

Our model rapidly converges to a set of parameters that consistently forms independent rings of the specified size (Fig. 3A).
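One way to phrase the nearest-neighbor loss described above as a function of the particle positions is sketched below; the reference distances, the hard top-M selection, and the toy configuration are illustrative assumptions rather than the exact loss used for the ring designs.

```python
import jax
import jax.numpy as jnp

def ring_loss(positions, reference_distances, M=3):
    # Compare each particle's M smallest neighbor distances to a reference.
    # positions: (N, 2) particle positions from the forward simulation.
    # reference_distances: (M,) neighbor distances in a perfectly assembled ring.
    # The hard top-M selection here is a simplification; a smoother weighting
    # avoids the sharp-cutoff issue noted in the Discussion.
    diff = positions[:, None, :] - positions[None, :, :]        # (N, N, 2)
    dist = jnp.sqrt(jnp.sum(diff ** 2, axis=-1) + 1e-12)        # (N, N)
    dist = dist + 1e6 * jnp.eye(positions.shape[0])             # exclude self
    nearest = jnp.sort(dist, axis=-1)[:, :M]                    # (N, M)
    return jnp.mean((nearest - reference_distances[None, :]) ** 2)

# Toy usage with hypothetical numbers: 4 particles near a unit square.
positions = jnp.array([[0.0, 0.0], [1.0, 0.05], [1.0, 1.0], [0.02, 1.0]])
reference = jnp.array([1.0, 1.0, jnp.sqrt(2.0)])  # square: two edges and a diagonal
print(ring_loss(positions, reference), jax.grad(ring_loss)(positions, reference))
```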
Fig. 3. Assembly of self-limiting rings. A Optimization results for the formation of square rings. The x-axis shows the optimization steps. The y-axis shows the patch opening angle on the left (navy) and the loss function on the right (purple). Over the course of the optimization, the patch opening angle tends towards a value slightly greater than 100 degrees. B Assembly yields computed from forward MD simulations run in HOOMD-blue for the patchy particles designed to yield triangles (green) and squares (orange). The yield of each ring design peaks at the desired ring size, demonstrating that the design procedure was successful. C One example end result of assembling squares in a bath of particles. While some incorrect products (primarily larger rings) are observed, most of the particles are in square configurations.

Self-Limiting 3D Clusters. While our 2D examples were successful, working with three-dimensional structures often poses different challenges. Inspired by virus shells, we demonstrate the stabilization of the simplest nontrivial Platonic solid: the octahedron (Fig. 4A(ii)). We leverage the non-isotropic interactions offered by patchy particles to find the patch positions and interaction strengths that consistently stabilize octahedral structures.

We begin with the model proposed by (24), consisting of two layers of patches in concentric circles (Fig. 4A(i)). We optimize over the positions of these circles of patches while fixing the interaction strength to be consistent with (24). Critically, though (24) required mapping out regions of the free energy landscape to achieve assembly of clusters, we are able to recover features similar to their results with no explicit measurement of the free energies.

Based on the simulation details in (24), we adapted the model to the JAX-MD simulation environment. We use a WCA potential (Eq. 3) for the center particle with σ = 5.0 and ε = 1.0 and a Morse potential for the patches with ε = 4.0 and r_0 = 0.0. We simulate using a Langevin integrator with γ = 5.0, dt = 1e-4, kT = 0.8, and a number density of 0.05. Despite the modifications to the simulation parameters, the energy scale and dynamics of our system closely follow those described in (24).
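For concreteness, the sketch below assembles the quoted center-particle interaction and Langevin integrator in JAX-MD. It is a deliberately reduced illustration: the rigidly attached patches and their Morse attractions are omitted, the box size is a placeholder rather than the quoted number density, and the exact jax_md call signatures may differ between library versions.

```python
import jax
import jax.numpy as jnp
from jax_md import space, simulate

# Center-particle WCA parameters quoted in the text; the box size is illustrative.
sigma, eps = 5.0, 1.0
box_size = 50.0

def wca(r):
    # Eq. (3), repulsive core between central particles.
    sr6 = (sigma / r) ** 6
    return eps * (sr6 ** 2 - sr6) + eps

displacement, shift = space.periodic(box_size)

def center_energy(positions):
    # Pairwise WCA energy between the 6 central particles. The patch-patch
    # Morse terms (eps = 4.0, r0 = 0.0) of the full rigid-body model are
    # omitted here to keep the sketch short.
    pair_disp = jax.vmap(jax.vmap(displacement, (None, 0)), (0, None))(positions, positions)
    r = jnp.sqrt(jnp.sum(pair_disp ** 2, axis=-1))
    mask = 1.0 - jnp.eye(positions.shape[0])
    return 0.5 * jnp.sum(mask * wca(jnp.where(mask > 0.0, r, 1.0)))

# Langevin dynamics with the quoted gamma, dt, and kT.
init_fn, step_fn = simulate.nvt_langevin(center_energy, shift, dt=1e-4, kT=0.8, gamma=5.0)
key = jax.random.PRNGKey(0)
state = init_fn(key, jax.random.uniform(key, (6, 3)) * box_size)
state = step_fn(state)
```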
Our optimization procedure for stabilizing 3D clusters closely mirrors our method in two dimensions, with minor modifications. We note that the self-assembly process for finite 3D clusters is considerably more complex from both thermodynamic and simulation perspectives. There are far more competing structures, and the system takes much longer to equilibrate. We mitigate these challenges by initializing our systems with 6 patchy particles, each consisting of 1 center particle and 20 patches, in a perfect octahedron.

We again use a loss function that consists of computing nearest-neighbor distances relative to those of a reference structure. In this case, the reference structure is the correctly formed octahedron. For every optimization step, we run 1 simulation for 200,000 steps and compute the gradient of the loss function to update the patch locations. The length of the simulation is determined by analyzing the time needed to self-assemble a partial cluster (see SI for details), and the number of replicates per optimization step is decided by the gradient magnitude. We unexpectedly observed that using Langevin dynamics rather than Nosé–Hoover dynamics significantly reduces variation in the gradients. The whole optimization procedure consists of 300 optimization steps using the Adam optimizer with three successive learning rates [0.1, 0.05, 0.01].

We initialize the optimization with randomly generated patch angle parameters (see Fig. 4A(iii) for an example). With these random parameter values, the octahedron is not stable. This can be seen in the early values of the loss function: the loss at the outset of the optimization is both large and highly variable. As the optimization proceeds, the patch positions converge and the loss decreases. Ultimately, the optimization converges to parameters that reliably stabilize the three-dimensional cluster, as shown in Fig. 4B. We performed 4 independent optimizations for cluster stabilization, and for all the runs where the loss function converged, the patch locations also converged to the same value (see SI for details).

Because we optimize for stabilization rather than assembly, our results deviate slightly from (24). In (24), the optimal parameters for octahedron assembly are [42.0°, 53.7°], while our optimal parameters for octahedron stabilization are [45.3°, 46.0°]. We conclude that having two rings at similar locations is more favorable than having separated rings in the case of stabilization, and we validate this conclusion with forward simulations of systems with each set of parameters. Indeed, our optimal parameters yielded a lower loss for stabilization than those in (24) (see SI for details). This may alternatively be explained by a difference between the two models: our patches have no volume of their own.

Despite these differences, our optimal ring positions fall between the two found in (24). The optimal location we find is closer to the inner ring of (24), which may indicate that the inner ring is a more vital feature for stabilization. Additionally, we ran forward self-assembly simulations using the two sets of parameters in HOOMD-blue to test their ability to self-assemble, and the JAX-MD-optimized parameters lead to a faster decrease of our loss function (see SI for more details).

Critically, we were able to achieve these results without mapping the free energy landscape. This method not only provides a straightforward way to search the design space of anisotropic particles for properties of interest, but also showcases how small differences in model choice can lead to different optimal results. This fast feedback loop for particle design could be instrumental in mapping to experimental systems.

Fig. 4. Stabilization of an octahedral cluster. (A) (i) Patchy particle model used to optimize for the octahedral cluster. θ1 and θ2 determine the locations of the two rings of patches, where each ring has A-A type attractions. (ii) A sample octahedral cluster. (iii) Patchy particle evolution from the beginning to the end of an optimization run. (B) Ensemble of optimization results for stabilization of an octahedral cluster. The x-axis shows the optimization steps. The y-axis shows the patch opening angle on the bottom plot (navy) and the loss function on the top plot (purple). The dotted lines show the optimized parameters from (24). Over the course of the optimization, the two patch angles converge to the same value and fall into the range of the literature-optimized values. The independent optimization runs all converge to the same optimal patch angles.

Discussion

We have introduced an end-to-end differentiable model system capable of capturing rich functional behavior in materials while still being simple enough to directly design. We have demonstrated the model by designing stabilization of an open lattice structure, self-limiting assembly in 2D, and stabilization of 3D finite clusters.

In each case, we have made use of only one particle type. Previous efforts to, e.g., stabilize or assemble octahedral structures have relied on having N different particle types to assemble a structure of N particles (33, 45). Though our individual components are more complex, the need to construct only one particle type makes our model system feasible to manufacture at scale.

Though we believe our model offers significant potential for the design of novel functional materials, its design potential is limited by the requirement of differentiable loss functions and by computational expense. To compute meaningful gradients of the loss function, we can only use loss functions that do not rely on a sharp radial or nearest-neighbor cut-off. This limitation makes traditional, well-performing order parameters, such as the local bond order parameter (46), less feasible to use as loss functions, and it increases the difficulty of designing self-limiting structures, where one of the most straightforward loss functions is to count the number of particles in a cluster. The limitation on loss functions is both a challenge and an opportunity. As more machine learning techniques are incorporated into materials design, we need to rethink how we describe materials structures and properties and develop new robust and accurate descriptors that fit the inverse-design method at hand.

On the side of computational expense, our method is limited mostly by GPU memory. Large systems of particles yield highly memory-intensive gradient computations, so smaller systems, or behaviors that are well described by local structure, are better suited to the model. The memory usage for a particular optimization run is decided by the combination of system size (N), run timesteps (r_s), and batch size (b). For a GPU with 32 GB of memory, we can run a simulation of N × r_s × b ≤ 10^8. The limitation can be mitigated in a few ways.
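As a rough illustration of this heuristic (round numbers, not a measured benchmark): a simulation of N = 100 particles run for r_s = 4 × 10^4 timesteps leaves room for a batch of about b = 25 replicate simulations on a single 32 GB GPU, since 100 × (4 × 10^4) × 25 = 10^8.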
Acknowledgments

… 17-1-3029), the National Science Foundation Graduate Research Fellowship under Grant No. DGE1745303, NSF Grant DMR-1921619, and the NSF AI Institute of Dynamic Systems (#2112085). 3D particles and trajectories are rendered with Injavis (48). Data generated in this paper are managed by Signac (49, 50).

1. M Sindoro, N Yanai, AY Jee, S Granick, Colloidal-sized metal–organic frameworks: synthesis and applications. Accounts Chem. Res. 47, 459–469 (2014).
2. MR Jones, NC Seeman, CA Mirkin, Programmable materials and the nature of the DNA bond. Science 347 (2015).
3. WB Rogers, WM Shih, VN Manoharan, Using DNA to program the self-assembly of colloidal nanoparticles and microparticles. Nat. Rev. Mater. 1, 1–14 (2016).
4. Z Zeravcic, VN Manoharan, MP Brenner, Colloquium: Toward living matter with colloidal particles. Rev. Mod. Phys. 89, 031001 (2017).
5. MF Hagan, GM Grason, Equilibrium mechanisms of self-limiting assembly. Rev. Mod. Phys. 93, 025008 (2021).
6. T Hueckel, GM Hocky, S Sacanna, Total synthesis of colloidal matter. Nat. Rev. Mater. 6, 1053–1069 (2021).
7. CX Du, et al., Programming interactions in magnetic handshake materials. Soft Matter 18, 6404–6410 (2022).
8. J Dshemuchadse, PF Damasceno, CL Phillips, M Engel, SC Glotzer, Moving beyond the constraints of chemistry via crystal structure discovery with isotropic multiwell pair potentials. Proc. Natl. Acad. Sci. 118, e2024034118 (2021).
9. H Fang, MF Hagan, WB Rogers, Two-step crystallization and solid–solid transitions in binary colloidal mixtures. Proc. Natl. Acad. Sci. 117, 27927–27933 (2020).
10. WD Piñeros, BA Lindquist, RB Jadrich, TM Truskett, Inverse design of multicomponent assemblies. J. Chem. Phys. 148 (2018).
11. BA Lindquist, RB Jadrich, WD Piñeros, TM Truskett, Inverse design of self-assembling Frank–Kasper phases and insights into emergent quasicrystals. J. Phys. Chem. B 122, 5547–5556 (2018).
12. CS Adorf, J Antonaglia, J Dshemuchadse, SC Glotzer, Inverse design of simple pair potentials for the self-assembly of complex structures. J. Chem. Phys. 149 (2018).
13. S Angioletti-Uberti, BM Mognetti, D Frenkel, Theory and simulation of DNA-coated colloids: a guide for rational design. Phys. Chem. Chem. Phys. 18, 6373–6393 (2016).
14. TE Ouldridge, AA Louis, JP Doye, Structural, mechanical, and thermodynamic properties of a coarse-grained DNA model. J. Chem. Phys. 134, 02B627 (2011).
15. KJ Bowers, et al., Scalable algorithms for molecular dynamics simulations on commodity clusters. Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, pp. 84–es (2006).
16. J Wang, RM Wolf, JW Caldwell, PA Kollman, DA Case, Development and testing of a general Amber force field. J. Comput. Chem. 25, 1157–1174 (2004).
17. D Klotsa, ER Chen, M Engel, SC Glotzer, Intermediate crystalline structures of colloids in shape space. Soft Matter 14, 8692–8697 (2018).
18. F Romano, J Russo, L Kroc, P Šulc, Designing patchy interactions to self-assemble arbitrary structures. Phys. Rev. Lett. 125, 118003 (2020).
19. AB Rao, et al., Leveraging hierarchical self-assembly pathways for realizing colloidal photonic crystals. ACS Nano 14, 5348–5359 (2020).
20. SC Glotzer, MJ Solomon, Anisotropy of building blocks and their assembly into complex structures. Nat. Mater. 6, 557–562 (2007).
21. S Whitelam, I Tamblyn, Neuroevolutionary learning of particles and protocols for self-assembly. Phys. Rev. Lett. 127, 018003 (2021).
38. TH Han, et al., Fractionalized excitations in the spin-liquid state of a kagome-lattice antiferromagnet. Nature 492, 406–410 (2012).
39. H Xue, Y Yang, F Gao, Y Chong, B Zhang, Acoustic higher-order topological insulator on a kagome lattice. Nat. Mater. 18, 108–112 (2019).
40. WD Piñeros, M Baldea, TM Truskett, Designing convex repulsive pair potentials that favor assembly of kagome and snub square lattices. J. Chem. Phys. 145, 054901 (2016).
41. Q Chen, SC Bae, S Granick, Directed self-assembly of a colloidal kagome lattice. Nature 469, 381–384 (2011).
42. TD Nguyen, CL Phillips, JA Anderson, SC Glotzer, Rigid body constraints realized in massively-parallel molecular dynamics on graphics processing units. Comput. Phys. Commun. 182, 2307–2313 (2011).
43. J Glaser, X Zha, JA Anderson, SC Glotzer, A Travesset, Pressure in rigid body molecular dynamics. Comput. Mater. Sci. 173, 109430 (2020).
44. V Ramasubramani, et al., freud: A software suite for high throughput analysis of particle simulation data. Comput. Phys. Commun. 254, 107275 (2020).
45. Z Zeravcic, VN Manoharan, MP Brenner, Size limits of self-assembled colloidal structures made using specific interactions. Proc. Natl. Acad. Sci. 111, 15918–15923 (2014).
46. PJ Steinhardt, DR Nelson, M Ronchetti, Bond-orientational order in liquids and glasses. Phys. Rev. B 28, 784 (1983).
47. A Walther, AH Muller, Janus particles: synthesis, self-assembly, physical properties, and applications. Chem. Rev. 113, 5194–5261 (2013).
48. M Engel, Injavis: interactive Java visualization (2021).
49. CS Adorf, PM Dodd, V Ramasubramani, SC Glotzer, Simple data and workflow management with the signac framework. Comput. Mater. Sci. 146, 220–229 (2018).
50. V Ramasubramani, CS Adorf, PM Dodd, BD Dice, SC Glotzer, signac: A Python framework for data and workflow management. Proceedings of the 17th Python in Science Conference, pp. 152–159 (2018).