D-WFC Growing Grid

Title: Expanding Wave Function Collapse with Growing Grids
Aalborg University Copenhagen
Project Period: 1 Feb 2019 – 28 May 2019
Semester Coordinator:
Secretary:
Supervisor(s): George Palamas
Project group no.: ?
Members: Jonas Aksel Billeskov & Tobias Nordvig Møller
Copies:
Pages:
Finished:

Abstract:
This project combines Wave Function Collapse and Growing Grids to create an improved procedural content generator that creates diverse output. A solution for traversal and feature distribution was also implemented. The results were evaluated by measuring the difficulty of navigating maps created with the system. Another evaluation measured the recognizability of these maps. A visual inspection was also done to showcase the capabilities. The results showed that navigation was significantly harder in maps created with the system compared to maps created with the original WFC algorithm. The results from the second test showed that it was slightly easier for the evaluation participants to recognize the maps created with the system compared to maps created with the original WFC algorithm.

Copyright © 2006. This report and/or appended material may not be partly or completely published or copied without prior written approval from the authors. Neither may the contents be used for commercial purposes without this written approval.
Expanding Wave Function Collapse with
Growing Grids for Procedural Content
Generation
2 Introduction 2
3 Analysis 3
3.1 What is Procedural Content Generation? . . . . . . . . . . . . . 3
3.2 Why use Procedural Content Generation in video games . . . . . 4
3.3 Procedural Content Generation in Video Games . . . . . . . . . 4
3.4 Wave Function Collapse . . . . . . . . . . . . . . . . . . . . . . . 8
3.5 Irregular Quadrilateral Grids . . . . . . . . . . . . . . . . . . . . 13
3.6 Marching squares . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.7 Traversability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.8 Related Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5 Implementation 22
5.1 Implementation of Wave Function Collapse . . . . . . . . . . . . 22
5.2 Optimizing the propagation step . . . . . . . . . . . . . . . . . . 25
5.3 Contradictions and attempts at reinforcement learning . . . . . . 26
5.4 Feature distribution . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.5 Implementation of Growing Grid . . . . . . . . . . . . . . . . . . 31
5.6 Blend shapes in a WFC environment . . . . . . . . . . . . . . . . 34
6 Evaluation 39
6.1 First Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.2 Second Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.3 Evaluation through visual inspection . . . . . . . . . . . . . . . . 47
7 Discussion 50
8 Conclusion 51
9 Bibliography 51
10 Appendix 56
1 Abstract
This project combines Wave Function Collapse and Growing Grids to create an improved procedural content generator that creates diverse output. A solution for traversal and feature distribution was also implemented. The results were evaluated by measuring the difficulty of navigating maps created with the system. Another evaluation measured the recognizability of these maps. A visual inspection was also done to showcase the capabilities. The results showed that navigation was significantly harder in maps created with the system compared to maps created with the original WFC algorithm. The results from the second test showed that it was slightly easier for the evaluation participants to recognize the maps created with the system compared to maps created with the original WFC algorithm.
2 Introduction
Procedural content generation (PCG) is the practice of algorithmically creating assets, with popular applications including terrains, levels and items. Using PCG in video games, as opposed to manually creating assets, has the benefit of allowing developers to produce larger amounts of content, and opens the possibility of games with endless content [1]. Some of the most exciting games in recent history have used PCG to create content, such as an entire universe in No Man's Sky. A novel procedural content generator is the constraint solver Wave Function Collapse (WFC). It is inspired by quantum mechanics' wave functions, where an unobserved state allows all states to be possible, and observation constrains the possibilities. In WFC the states are a set of models, where each model has a set of constraints that limit which models can be placed next to it in the grid. A grid of cells is initialized, with each cell being unobserved and thus able to contain all models. The grid is then gradually filled with models, constraining neighbouring cells, until the entire grid is collapsed into an observed state, with only one model being possible in each cell. WFC differs from many other PCG algorithms because it is a constraint-solving algorithm. This makes the output from WFC distinctive, and the process of building worlds with WFC is very satisfying. WFC is also interesting because there are no limitations on what the models can be. It can be used for terrains, with models being hills, trees and rocks, or for cities, with models being houses, roads or parks. In addition, models such as roads or buildings can be placed before running the algorithm, and it will fill the grid around them. This makes it possible to handcraft parts of the world and let the WFC algorithm handle the rest. This project aims to expand upon WFC by introducing irregular quadrilateral grids. These will be generated with Growing Grid (GG), a space-filling algorithm designed to take a binary input image and distribute a structured grid accordingly. Using this algorithm in combination with WFC, any shape can be used to create diverse and interesting video game content.
3 Analysis
The analysis chapter will describe PCG in general and different algorithms used in the field. It will describe state-of-the-art PCG algorithms used in video games and how they might relate to this project. Following this, the chapter will go in depth with the WFC algorithm and state-of-the-art applications of WFC. In relation to WFC, the chapter will describe methods to expand upon it with irregular grids, marching squares and guaranteed traversability.
categories: core, partial framing and decorational [8]. "Core" PCG media is when a core part of the media is procedurally generated; examples include games such as No Man's Sky [9] or The Flame in the Flood [10], where the planets and the river, respectively, play an important role in the core gameplay. "Partial framing" is when parts of the media are made with PCG; an example is the weapons in Borderlands [11], where the weapons are very central but the core gameplay is an FPS. "Decorational" is when PCG is only used for decorating handmade content; a lot of video games use this to some degree. PCG algorithms can be divided into six categories [1]: pseudo-random number generators, generative grammars, image filters, spatial algorithms, modeling and simulation of complex systems, and finally artificial intelligence, where constraint satisfaction algorithms such as WFC belong.
and what tools game designers are using to model their procedural content to fit their vision and requirements.
PCG in video games has a long history, going back to games like the original Rogue from 1980, which had an entire genre of video games, roguelikes, named after it [15]. Rogue was a fantasy game where the player had to explore a procedurally generated dungeon that was different each time. This procedural system was implemented to keep the game fresh and exciting across multiple playthroughs, as the game featured permanent death and therefore a great deal of starting the game over.
Dead Cells
Dead Cells is one of the newer and more popular examples of a roguelike. You play as an undead warrior trying to escape an infected island and, in the end, kill the evil king. Dead Cells uses PCG for all its levels and some of its weapons. The interesting part about Dead Cells' use of PCG is how restricted it is. The level designer Sebastien Benard describes how in Dead Cells they only had PCG change very few things about their levels while handcrafting rooms [16]. Every room does, however, have a list of things the PCG algorithm is allowed to change. They call this approach a hybrid between PCG and handcrafted levels. This approach was used because the developers felt that a more unrestricted use of PCG would decrease the quality of the level design more than it would increase replayability and gameplay. Their algorithms also focus on the difficulty of the game, controlling the power of the weapons you pick up, the number of enemies you encounter and the damage the enemies deal. This makes the game challenging even on later playthroughs. A full completion of the game takes around 60 hours, which is a lot of content for a small studio and a short development time. This is largely due to PCG, as handcrafting that amount of content under the given circumstances would not be possible.
The Flame in the Flood
New video games in the roguelike genre are still being developed, and one of the more interesting titles in recent years is The Flame in the Flood, developed by the small team The Molasses Flood. The game stays true to the original roguelike features, like permanent death and generated levels, but instead of dungeons, The Flame in the Flood procedurally generates a river that the player navigates downstream. The river is generated by using gradient descent to place islands in the river. The game also uses flow calculations based on the width of the river and the islands to generate dynamic rapids and streams [12]. The Flame in the Flood highlights how PCG can be integrated into the core mechanics of a video game. The procedural generation system allows control over how many rapids and flowing obstacles are seamlessly integrated in the rivers. These tools let the developers control the difficulty of the game and thereby create content that remains challenging for players.
Borderlands
The Borderlands games have 12 different manufacturers of weapons, where each manufacturer has its own special strengths and weaknesses; the Dahl manufacturer's weapons, for example, have high recoil reduction and low accuracy. This system makes it possible for players to familiarize themselves with the generated weapons and prefer one manufacturer over the others.
No Man’s Sky
One of the most ambitious video games using procedural generation in recent years is No Man's Sky, a space exploration game by Hello Games where an entire universe is procedurally generated. No Man's Sky received mixed reviews upon release, and the game never really justified why 18,446,744,073,709,551,616 different but strikingly similar planets are better than a few handcrafted ones. IGN reviewer Dan Stapleton says: "The promise of a limitless universe ended up working against it when I lost faith that it had meaningful things to show me" [18]. This was, however, also the reason why the game received attention in the early days of development: the promise of infinite content and exploration was a big appeal for players around the world. No Man's Sky used many different novel techniques to create its world, such as L-systems for vegetation, warped Perlin noise for better landscapes and circular coordinate systems to make the planets' gravity work as intended. No Man's Sky also controls parts of the difficulty with its procedural generation, governing how many resources are on each planet, the number of enemies, and the power of the spaceships and enemies.
Figure 4: One of the many procedurally generated planets in No Man’s Sky [9]
What is Wave Function Collapse?
WFC is a constraint-based procedural algorithm that is inspired by and named after the concept of wave function collapse from quantum physics: the idea that the unobserved state of a particle can be anything, and as soon as the particle is observed, the possibilities disappear and the wave function collapses. The same idea is the backbone of the procedural algorithm. WFC can be implemented with two different models, the tiled model and the overlapping model; in this report, the focus will be entirely on the tiled model. The WFC algorithm initializes a grid, where each cell in the grid is defined as a slot. The tiled-model variant of the WFC algorithm occupies each slot in the grid with a module. A module contains information about the 3D model and the constraints for the module's neighbours. In the completely unobserved state, each slot can be filled by every possible module. The modules have lists of possible neighbours they allow next to them in the grid. This means that if one slot is collapsed down to a single possible module, the neighbouring slots will also be restricted in their possible modules because of the collapsed slot. This constraint of possible modules then spreads to neighbouring slots, also constraining their possible modules. WFC can be extended to higher dimensions, but the most common are two and three dimensions. For a two-dimensional square grid, four neighbours are needed, and for three dimensions, six. WFC can also be used with different grid shapes, square being the most common and the shape used in this report; hexagonal grids with six neighbours are also popular. Designing the set of modules that can fill out the grid can be a hard task, as they need to fit each other while allowing some flexibility in possible neighbours. This area will be explained further later in the report. The algorithm for WFC described in the following section is from the original WFC GitHub repository by Maxim Gumin [19]. This version is modified to fit the tiled model and this project's implementation.
1. Create an array with the dimensions of the output. This array will be described as the grid, but can also be described as the wave. Each element in the array contains a slot, and each slot contains a list of possible modules; this list will be described as the possibility space.
2. Initialize the grid in a completely unobserved state, i.e. with all modules added to all slots' possibility spaces.
3. Repeat until every possibility space contains at most one module:
(a) Observation:
i. Find the slot whose possibility space is smallest while still containing more than one module.
ii. Collapse this slot's possibility space at random down to a single module. This is done by removing all but one module from the possibility space.
(b) Propagation: propagate the information gained in the previous observation step.
4. By now every slot's possibility space contains either exactly one module or zero modules. If any slot contains zero modules, the algorithm has hit a contradiction and the result should be discarded. If all slots contain one module, the algorithm is complete and the result can be returned.
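The steps above can be sketched in a few lines. The following is a minimal illustration in Python (the project's own implementation is in Unity3D C#), with a hypothetical three-module set and, for brevity, a single shared neighbour set per module instead of one list per side.

```python
import random

# Illustrative module set; not the module set used in the project.
MODULES = {"grass", "water", "shore"}
# ALLOWED[m] = modules that may sit next to m (same set on every side
# here for brevity; the real tiled model keeps one list per side).
ALLOWED = {
    "grass": {"grass", "shore"},
    "water": {"water", "shore"},
    "shore": {"grass", "water", "shore"},
}

def collapse(width, height, rng=random):
    # Steps 1-2: every slot starts with the full possibility space.
    grid = {(x, y): set(MODULES) for x in range(width) for y in range(height)}

    def neighbours(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (x + dx, y + dy) in grid:
                yield (x + dx, y + dy)

    def propagate(start):
        stack = [start]
        while stack:
            pos = stack.pop()
            for n in neighbours(*pos):
                # A module survives in n only if some module in pos allows it.
                supported = {m for m in grid[n]
                             if any(m in ALLOWED[p] for p in grid[pos])}
                if supported != grid[n]:
                    grid[n] = supported
                    stack.append(n)

    while any(len(s) > 1 for s in grid.values()):
        # Step 3a: observe the slot with the smallest possibility space.
        pos = min((p for p in grid if len(grid[p]) > 1),
                  key=lambda p: len(grid[p]))
        grid[pos] = {rng.choice(sorted(grid[pos]))}
        # Step 3b: propagate the new constraint.
        propagate(pos)
        if any(len(s) == 0 for s in grid.values()):
            return None  # Step 4: contradiction, discard the result.
    return {p: next(iter(s)) for p, s in grid.items()}
```

With this particular tileset a contradiction cannot occur, since "shore" is compatible with everything; more demanding sets can force the `None` branch.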
A module contains four lists of allowed neighbour modules: a north, west, south and east list. For two modules to be placed next to each other, each needs to have the other module in the respective neighbour list: if module one contains module two in its north list of allowed neighbours, module two needs to contain module one in its south list. If this is not the case, the algorithm will most likely hit a contradiction.
A possibility space can never grow; it can only decrease in size. When the algorithm starts propagating through the grid, modules can only be removed from a slot's possibility space, and as they are removed they start a chain reaction that reduces the size of neighbouring possibility spaces.
The propagation step propagates through the whole grid and checks if changes made in the observation step affect the possibility spaces of the neighbouring slots. A possibility space contains all modules that are allowed to be placed in the slot based on the modules' allowed-neighbour lists. There are two distinct approaches to this: an inwards approach and an outwards approach. In the inwards approach, each slot checks its four neighbouring slots' possibility spaces and derives which modules are possible in the slot itself. This is done by looping through all modules in all four neighbouring slots' possibility spaces; a module can stay in the center slot's possibility space only if each of the four neighbouring possibility spaces contains a module that allows it on the opposite neighbour list. The outwards approach instead removes modules from the neighbouring slots' possibility spaces, based on the allowed neighbours of the slot's own possibility space.
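The inwards filtering of a single slot can be sketched as follows (a Python illustration; the `allowed` dictionary with one shared set per module is a simplification of the per-side neighbour lists described above).

```python
# Sketch of the "inwards" approach for one slot: keep only the modules
# that every existing neighbouring possibility space can support.

def filter_inwards(own_space, neighbour_spaces, allowed):
    kept = set()
    for module in own_space:
        # The module stays only if each neighbour's possibility space
        # contains at least one module that allows it next door.
        if all(any(module in allowed[n] for n in space)
               for space in neighbour_spaces):
            kept.add(module)
    return kept
```

The outwards approach is the mirror image: instead of a slot pulling constraints in from its neighbours, a changed slot pushes removals out into the neighbouring possibility spaces.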
The propagation step is the most expensive part of the algorithm; it is recursive and dependent on the size of the grid. In the simplest approach it propagates through every single slot in the grid each time a possibility space changes. It is not enough to loop through the grid only once per propagation step, as a reduction in a possibility space caused by the loop can require new reductions in the neighbouring slots' possibility spaces.
Step four is the last step, when all possibility spaces have been reduced to their minimum size. If only one module is left in every slot's possibility space, the algorithm succeeded: a full grid has been generated and all modules fit together with their neighbours. If zero modules are left in a possibility space, the algorithm failed and hit a contradiction. A contradiction happens when the algorithm works itself into a corner where some constraints in the module set leave no modules available for another slot's possibility space. This can be caused by demanding module sets. In this report we define a demanding module set as a set where the modules have a very small number of allowed neighbours. This could be the case if the visual style of the output should be very controlled, or if a specialized sub-module set with few transitions to the main set is added.
It is perfectly fine to pre-collapse a slot. If a specific module is needed in a specific slot because of design considerations, this can be done after step 2: simply remove all other modules from the given slot's possibility space and begin the algorithm with the propagation step. The algorithm will then build around the given module, working as if it had been collapsed randomly in step 3.a.ii. Pre-collapsing makes it possible to handcraft areas of output generated with WFC, which can be useful when designing dungeons for video games or similar content. For example, if a boss encounter needed multiple modules put together in a specific way to form an arena in the center of a dungeon, this could easily be done with WFC: just pre-collapse the arena modules before running the algorithm and let the algorithm form the dungeon around the arena.
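Pre-collapsing is a small operation on the grid of possibility spaces. A hypothetical sketch (module names are illustrative):

```python
# Force one slot's possibility space down to a single module before the
# main loop runs (the step between 2 and 3 described above).

def pre_collapse(grid, position, module):
    if module not in grid[position]:
        raise ValueError("module not in this slot's possibility space")
    grid[position] = {module}
    return position  # the caller starts propagation from this slot

# Example: reserve the center slot of a 3x3 grid for an arena module.
grid = {(x, y): {"floor", "wall", "arena"} for x in range(3) for y in range(3)}
seed = pre_collapse(grid, (1, 1), "arena")
```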
WFC was quickly adapted to work in three dimensions, and tilesets changed from sets of pixels to sets of 3D models. Maxim Gumin made an application with voxel shapes to show the capabilities of working in three dimensions [20].
Marian Kleineberg created an infinite three-dimensional world using WFC [21]. The world can extend infinitely in any direction the player chooses to go. The tileset consists of around 100 different modules, with custom constraints for each of them. The result is a fully traversable 3D city with stairs, houses and bridges.
The first use of WFC in video games was in the game ProcSkater, developed by Joseph Parker and Ryan Jones at a game jam [22]. The algorithm was used for level generation, and given a certain set of constraints, the generated levels were guaranteed to be traversable. Joseph Parker went on to use the WFC algorithm to create plateaus in the game Swapland and platforms in the game Bug with a Gun.
Game developer Oskar Stålberg popularized the algorithm further by exploring WFC in three dimensions and on non-square shapes, and by creating beautiful tilesets. Oskar Stålberg demonstrates how WFC can be used to generate procedural spheres with custom tilesets, buildings extended infinitely into the air, and small islands. The latter is used in the real-time tactics game Bad North, developed by Plausible Concept (see Fig. 6).
Figure 6: Example of 3D level generated by WFC in the game Bad North [23]
Andy Wallace created the game Maureen's Chaotic Dungeon at the Global Game Jam 2019, which utilizes an interesting feature of WFC [24]. The level is generated using WFC, and at runtime parts of the level can be regenerated. This core gameplay mechanic allows the player to move around the infinite world and change the level using the existing constraints of the WFC.
WFC offers a wide array of ways to generate levels, whether in 2D, in 3D, or by manipulating existing levels at runtime. The algorithm is versatile and can be used with endless module sets to create exactly the aesthetic needed. This makes the algorithm interesting to work with and tweak, as modifications to the algorithm can form the basis of something completely different, while keeping the existing qualities of the original WFC algorithm. Despite this versatility, all the games mentioned use WFC with regular grids and evenly sized slots. There is significant room for improvement here, as the only requirement for WFC to work is that each slot has an equal number of edges; slots do not necessarily need to have a regular shape.
Noise
The simplest method found to create a structured irregular quadrilateral grid was using noise: every point in the grid would be displaced by a small amount to create some randomness and irregularity in the grid. The method worked well and did what it was supposed to, but the underlying grid was still visible, since the center of each slot had not moved. The approach was also limited because there was a maximum amount of noise that could be applied to the points; if the noise became too high, points would overlap into neighbouring slots and the grid would be destroyed. The method could be salvaged, for example by limiting the amount of noise based on the neighbours' noise. However, it quickly became clear that making this method work would require many modifications, and the outcome might become uncontrollable.
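The rejected noise approach can be sketched as follows (a Python illustration with uniform random offsets; the half-cell clamp reflects the overlap limit described above, not a parameter from the project):

```python
import random

# Jitter every lattice point by a small random offset, clamped so a
# point cannot cross into a neighbouring slot (less than half the cell).

def jitter_grid(width, height, cell, max_noise, rng=random):
    assert max_noise < cell / 2, "larger offsets let points overlap neighbouring slots"
    return {(x, y): (x * cell + rng.uniform(-max_noise, max_noise),
                     y * cell + rng.uniform(-max_noise, max_noise))
            for x in range(width) for y in range(height)}
```

As noted above, the slot centers stay on the lattice, so the regular structure of the grid remains visible.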
Growing grid
Growing Grid (GG) is a self-organizing network developed by Bernd Fritzke in 1995 [25]. The algorithm takes a 2D binary image as input and organizes a grid to best cover the shape. GG is an expansion of Growing Neural Gas, also developed by Bernd Fritzke [26]. It is intended as a space-filling algorithm, and because grids are very common in computer graphics, it is easier to use in that field than Growing Neural Gas. The algorithm starts out with a 2x2 grid and goes through two phases, known as "the growth phase" and "the fine-tuning phase". The growth phase runs in epochs that iteratively choose a random point in the distribution and drag the nearest grid point towards it. The other points are also dragged towards the selected distribution point, penalized by their grid distance to the closest point. This process keeps looping until an arbitrary epoch size is reached, at which point the algorithm inserts a new row or column in the grid, placed between the two points with the highest distance between them. Because the penalty is calculated from a distance in the grid and not a Euclidean distance, the grid relaxes itself into the distribution. When a defined maximum grid size is reached, the algorithm enters the "fine-tuning phase". This phase is almost identical to the growth phase; the only difference is that no more rows and columns are added to the grid. When the algorithm has run for one final epoch, it terminates and returns the points of the grid. The growth phase is the main phase of the algorithm and can be split into eight steps, according to an explanation by Bernd Fritzke [27]. These steps will be explained further in the implementation chapter, where examples from the Unity3D C# implementation can also be seen.
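The growth phase can be sketched roughly as follows. This is a Python illustration, not the project's Unity3D C# implementation; the learning rate, epoch size and exponential neighbourhood falloff are illustrative choices rather than Fritzke's exact parameters, and only column insertion is shown (row insertion is analogous).

```python
import math
import random

def grow_grid(samples, max_cols, epoch_size=200, lr=0.1, rng=random):
    # Start from a 2x2 grid spread over the unit square.
    rows, cols = 2, 2
    nodes = {(r, c): [c / (cols - 1), r / (rows - 1)]
             for r in range(rows) for c in range(cols)}
    while cols < max_cols:
        for _ in range(epoch_size):
            sx, sy = rng.choice(samples)
            # Best-matching node by Euclidean distance to the sample.
            best = min(nodes, key=lambda n: (nodes[n][0] - sx) ** 2
                                           + (nodes[n][1] - sy) ** 2)
            for (r, c), p in nodes.items():
                # Grid distance (not Euclidean) to the winner sets the
                # falloff, which is what lets the grid relax in place.
                d = abs(r - best[0]) + abs(c - best[1])
                f = lr * math.exp(-d)
                p[0] += f * (sx - p[0])
                p[1] += f * (sy - p[1])
        # Insert a new column between the two neighbouring columns whose
        # nodes are on average furthest apart.
        gaps = [(sum(math.dist(nodes[(r, c)], nodes[(r, c + 1)])
                     for r in range(rows)), c)
                for c in range(cols - 1)]
        _, c = max(gaps)
        nodes = {(r, cc + (1 if cc > c else 0)): p
                 for (r, cc), p in nodes.items()}
        for r in range(rows):
            a, b = nodes[(r, c)], nodes[(r, c + 2)]
            nodes[(r, c + 1)] = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2]
        cols += 1
    return nodes
```

The fine-tuning phase would simply rerun the inner epoch loop without the insertion step.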
The input image thus decides how the algorithm is used. Using a simple shape, such as a circle, as the input image yields an approximation of a circular grid (see fig. 7).
Using something more complex, such as a heightmap, makes it possible to control the density of the grid, i.e. the size of each slot, while also controlling the shape of the final grid (see fig. 8).
The algorithm is not very intuitive, as it contains many random variables, and it takes some experimentation to understand how it works. Some inputs create stretched grids and large slots, while others create even distributions. In addition, the same input does not necessarily produce the same output. However, the algorithm will always produce a structured grid, which allows WFC to work without modifications.
The algorithm takes the corners of the input shape and inverts them inwards to create the corner squares of the grid. It thereafter creates squares following the edge of the input shape between the corners, forming the edges of the grid. This is repeated until no longer possible, and the remaining area is triangulated into an even number of triangles, which are combined in pairs to create quads. All slots are then subdivided, if needed, to create a uniform slot size.
The algorithm has a lot of potential for map generation, given the strict constraints it is able to solve and the seemingly unconstrained input it accepts. It allows a designer to create maps of any shape and size, and combinations of shapes. Given the grid structure of the output, it allows WFC, and other operations that work on grids, to run seamlessly. For example, in the demonstrations the edges of each grid are used to generate roads and roadside buildings, while the inside of the grid is filled with buildings.
to formulate a system of modules that fit together. Marching squares could also have been used to implement flexible module shapes. In marching squares, a square is defined by its four corners, and each corner can be either solid or not. As squares in a larger grid share corner points with each other, the values can be interpolated to create smooth contours for the solid area in the grid. This technique is commonly used to create smooth output from binary input produced by other procedural algorithms; video games use it for many different purposes, such as caves or destructible environments.
Defining each square by its corners, as seen in figure 10, gives six different possible squares if each square can be rotated freely. The two squares in the second row are defined by the same corners and are ambiguous; if both are implemented, that totals seven different squares. Only five of the seven squares need rotations, as the completely solid and completely non-solid squares are the same regardless of rotation, and the two diagonal squares in the second row only need one rotation each. That totals 18 different squares for a complete set. Because of its relation to two-dimensional games, this set of squares is known as the "marching squares tileset". The tileset can be expanded into the Blob tileset and reduced into the Micro-blob tileset. The Micro-blob tileset, coined by BorisTheBrave [32], is almost identical to the marching squares tileset; the only difference is that both ambiguous diagonal tiles are removed. This leaves only five unique tiles for the tileset, and a total of 14 tiles including rotated ones. This makes creating working tilesets for WFC a simpler task, as the number of unique tiles is low and neighbourhood constraints can be defined according to this system.
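The marching squares lookup itself is a small bit of bit-packing: the four corner states form a 4-bit index into a table of the square types counted above. A Python sketch (the type names are illustrative labels, not names from the report):

```python
# 4-bit lookup for marching squares: NW is the high bit, SW the low bit.
CASES = {
    0b0000: "empty", 0b1111: "solid",
    0b0001: "corner", 0b0010: "corner", 0b0100: "corner", 0b1000: "corner",
    0b0011: "edge", 0b0110: "edge", 0b1100: "edge", 0b1001: "edge",
    0b0101: "diagonal", 0b1010: "diagonal",
    0b0111: "inverted corner", 0b1011: "inverted corner",
    0b1101: "inverted corner", 0b1110: "inverted corner",
}

def square_case(nw, ne, se, sw):
    # Pack the solid/not-solid corner flags into a single table index.
    index = (nw << 3) | (ne << 2) | (se << 1) | sw
    return CASES[index]
```

Grouping the 16 raw cases this way gives the six rotation-unique types (the two "diagonal" entries being the ambiguous pair) that the text counts out.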
3.7 Traversability
An important aspect of making a video game level is ensuring that the player is able to reach the goal of the level from the starting position [33]. Making sure the level allows the player to traverse all the way to the goal is an easy task when designing a level by hand, but when the level generation is procedural, it requires rules. Procedurally generated levels may end up trapping the player in the starting position or locking off the goal of the level, which would make the level unplayable and thus not fit for a game. In many games with procedurally generated levels there is no set objective, such as Minecraft or No Man's Sky, which nullifies this problem. However, in games such as Diablo, Path of Exile and Dead Cells, the player needs to get from point A to point B, which makes creating a traversable path between the points a concern. The solution will often be specific to the underlying procedural generation algorithm, and no universal set of rules can be given. The following sections explain some of the different techniques used to guarantee traversable levels in PCG.
Caves in Minecraft
The caves in Minecraft are made with an algorithm similar to the random walk algorithm. The algorithm is called Perlin Worms and carves caves out of a solid with a metaphorical worm moving around [35]. The direction of the worm is determined by 3D Perlin noise and changes after the worm has moved a set length. Compared to generating caves directly from 3D Perlin noise, this algorithm ensures that each cave created by one worm is connected all the way through, whereas 3D Perlin noise often leaves unconnected chunks of caves.
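The worm idea can be sketched as follows. This Python illustration is not Minecraft's implementation: where the real algorithm samples 3D Perlin noise for the heading, a stand-in smooth random drift is used to keep the example self-contained. The worm carves a small sphere of voxels out of a solid set at each step, so the resulting cave is connected by construction.

```python
import math
import random

def carve_worm(solid, start, steps, radius=1, rng=random):
    x, y, z = start
    yaw, pitch = 0.0, 0.0
    for _ in range(steps):
        # Heading drifts smoothly; Perlin noise samples would go here.
        yaw += rng.uniform(-0.3, 0.3)
        pitch += rng.uniform(-0.2, 0.2)
        x += math.cos(yaw) * math.cos(pitch)
        y += math.sin(pitch)
        z += math.sin(yaw) * math.cos(pitch)
        # Carve every voxel within `radius` of the worm's position.
        cx, cy, cz = round(x), round(y), round(z)
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                for dz in range(-radius, radius + 1):
                    if dx * dx + dy * dy + dz * dz <= radius * radius:
                        solid.discard((cx + dx, cy + dy, cz + dz))
    return solid
```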
Rooms in Path of Exile
The rooms in Path of Exile are structured using graph theory combined with branching [36]. The levels in Path of Exile consist of a set of square rooms organized such that every room can be entered and exited, and there is a path from the entrance to the exit. The first step in making a level is placing the entrance and the exit rooms. The next step is populating a grid with different rooms from a library of rooms. These rooms have different sizes and shapes, and each is assigned a weight determining its difficulty. All rooms in the grid are placed to be traversable, such that the entire level can be navigated. A shortest-path algorithm then determines a route from the entrance to the exit based on the length and difficulty of the rooms. All rooms not on the route are removed, and the final step is to branch out from the path and place rooms neighbouring the route. The algorithm ensures that the player experiences minimal backtracking, while not having a straightforward path to the exit.
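The route-selection step described above can be sketched with Dijkstra's algorithm over difficulty-weighted rooms. This is an illustrative Python sketch, not Grinding Gear Games' implementation; the 4-connected grid and the weights are assumptions.

```python
import heapq

def route(rooms, entrance, exit_):
    # rooms: {(x, y): difficulty}. Dijkstra over 4-connected neighbours,
    # so the cheapest route balances path length against room difficulty.
    dist, prev = {entrance: 0}, {}
    heap = [(0, entrance)]
    while heap:
        d, pos = heapq.heappop(heap)
        if pos == exit_:
            break
        if d > dist.get(pos, float("inf")):
            continue  # stale heap entry
        x, y = pos
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in rooms and d + rooms[n] < dist.get(n, float("inf")):
                dist[n] = d + rooms[n]
                prev[n] = pos
                heapq.heappush(heap, (dist[n], n))
    # Walk the predecessor chain back to the entrance.
    path, pos = [exit_], exit_
    while pos != entrance:
        pos = prev[pos]
        path.append(pos)
    return path[::-1]
```

In the level generator, the rooms on this path would be kept and the rest removed before branching side rooms back onto the route.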
Chiseling
the WFC algorithm. This ensures an organic-looking level with a traversable path between keypoints.
levels with unique features that could work in a similar way to the discussed patterns.
Daniel R. Montello shows that navigating urban fields with parallel and orthogonal streets is easier than navigating oblique streets [48]. In this project, the combination of WFC, GG and chiseling could likely produce output with non-orthogonal pathways, which could suggest a relation between navigation difficulty and irregular grids.
5 Implementation
The implementation chapter will discuss how the different elements of this
project were developed. The elements are, WFC, the different module sets,
optimizing the algorithm, feature distribution and GG.
Initializing the grid happens in a nested for-loop that takes in four points
for each slot, where adjacent slots share points. These points are assigned from
the GG algorithm that runs before the WFC algorithm. A root position is also
assigned based on the position in the loop (see fig. 11).
Figure 11: The loop initializing new slots with points from the Growing Grid
algorithm
After the slots have been initialized, each slot is assigned its six neighbours
based on grid positions. There is no border padding, as a slot can have fewer than
six neighbours; a slot is therefore only assigned the neighbours that exist.
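A minimal sketch of this initialization, written in Python rather than the project's Unity C#: slots are built from a shared array of Growing Grid points and then linked to whatever neighbours exist. The sketch uses four in-plane neighbours; the project itself assigns up to six (presumably including vertical neighbours for its 3D sets), and all names here are illustrative.

```python
class Slot:
    """One WFC slot: a quad defined by four Growing Grid points."""
    def __init__(self, corners, root):
        self.corners = corners   # four (x, y) points, shared with adjacent slots
        self.root = root         # grid position assigned from the loop indices
        self.neighbours = []     # filled in after all slots exist

def build_slots(points, w, h):
    """Build (w-1) x (h-1) slots from a w x h array of Growing Grid points.

    `points[i][j]` is the (x, y) position of grid point (i, j). Adjacent
    slots share two corner points. No border padding is used: edge slots
    simply get fewer neighbours.
    """
    slots = {}
    for i in range(w - 1):
        for j in range(h - 1):
            corners = (points[i][j], points[i + 1][j],
                       points[i + 1][j + 1], points[i][j + 1])
            slots[(i, j)] = Slot(corners, root=(i, j))
    for (i, j), slot in slots.items():
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (i + di, j + dj)
            if n in slots:                  # only assign neighbours that exist
                slot.neighbours.append(slots[n])
    return slots
```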
Step 2: Initialize the grid in unobserved state
Each slot has a list of all modules in a module set added to its own possibility
space. The list of modules is retrieved from a scriptable object list, which makes it
easy to add and remove modules from the scriptable object in the Unity Editor.
If the slot has been selected for a path by the chiseling step, only a single floor
module is added instead of the whole list. This results in the
slot being pre-collapsed into a floor module. The modules contain the 3D model
they eventually need to instantiate when the algorithm finishes, and they contain
six lists with the neighbouring modules they are allowed to have (see fig. 12).
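The possibility-space initialization can be sketched as follows (a Python sketch; the module names and the chiseling interface are assumptions, since the project stores its modules in Unity scriptable objects):

```python
FLOOR = "Floor"
MODULE_SET = ["Floor", "Wall", "Corner", "InverseCorner", "Solid"]

def init_possibility_spaces(slots, path_slots, module_set=MODULE_SET):
    """Step 2 of WFC: give every slot the full module set, except slots
    chosen by the chiseling step, which are pre-collapsed to a floor."""
    spaces = {}
    for s in slots:
        if s in path_slots:
            spaces[s] = [FLOOR]           # pre-collapsed: only a floor fits
        else:
            spaces[s] = list(module_set)  # copy, so slots can shrink independently
    return spaces
```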
Figure 12: The loop loading the slots' possibility spaces with modules
algorithm. One explanation for this could be that it is harder for the algorithm
to work itself systematically forward through the grid. Maxim Gumin's original
explanation for the use of the smallest possibility space was "I noticed that when
humans draw something they often follow the minimal entropy heuristic them-
selves. That's why the algorithm is so enjoyable to watch" [19]. But maybe he
also stumbled upon the approach creating the fewest contradictions. When there
is only a single module left in a possibility space, that module's 3D model gets
instantiated at the slot's position.
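The observation step described above — always collapsing the slot with the smallest possibility space — might look like this (a Python sketch with illustrative names; the project's version is a Unity C# method):

```python
import random

def observe(spaces, rng=random):
    """Observation step: pick the uncollapsed slot with the smallest
    possibility space (minimal entropy) and collapse it to one module.
    Returns the collapsed slot, or None when nothing is left to collapse."""
    open_slots = [s for s, mods in spaces.items() if len(mods) > 1]
    if not open_slots:
        return None
    slot = min(open_slots, key=lambda s: len(spaces[s]))
    spaces[slot] = [rng.choice(spaces[slot])]   # collapse to a single module
    return slot
```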
Step 4: Cleanup
When step three is done, every slot's possibility space size is either one or
zero. If the size is one, the algorithm succeeded and every slot should have
instantiated a single 3D model in its position. If the size is zero, a contradiction
has happened and the algorithm failed. If contradictions are not handled,
Unity will usually crash and the application has to be restarted; adding
a simple method that checks for zero-sized possibility spaces and calling it in the
propagation step solves this. When the content generation is done, this project
does a few things that are not integral pieces of the WFC algorithm but are
almost always going to be useful in a real application. First, the normals
of each instantiated model are recalculated. If this is not done and
the meshes have been changed, the output can look very strange.
The mesh is also made static so a navmesh can be calculated; this allows NPC
movement and simplified player movement on the generated map. These things
are only needed if the WFC algorithm is going to be used for online generation
as opposed to offline pre-generated content. The implementation is set up to
use coroutines. This allows the WFC algorithm to be spread out
over a suitable number of frames and thereby prevents the process from crashing
instantly. It also makes debugging easier, as it is possible to follow the collapsing
and propagation process visually when using coroutines.
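The cleanup check described above can be sketched as a single pass over the possibility spaces (a Python sketch; the original is a Unity C# method):

```python
def check_result(spaces):
    """Cleanup step: after propagation, every possibility space should hold
    exactly one module. An empty space means a contradiction occurred;
    without this check the original implementation tended to crash Unity."""
    if any(len(mods) == 0 for mods in spaces.values()):
        return "contradiction"
    if all(len(mods) == 1 for mods in spaces.values()):
        return "success"
    return "unfinished"
```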
Figure 14: The first level created with Wave Function Collapse.
grass fire algorithm is now implemented. It follows from the WFC algorithm
that a possibility space can only change if it is being changed directly in the
observation step or if its neighbouring slots' possibility spaces change. Because
of this, it is possible to know exactly which slots can change without recal-
culating the entire grid. Note that it is possible to know if a possibility space
can change, but not whether it will change without calculations, as
a smaller possibility space in a neighbouring slot does not necessarily decrease the
original possibility space's size. Each slot now contains a Boolean that records
whether it should be recalculated. This Boolean is
true if a change in any of its neighbouring slots' possibility spaces has hap-
pened since its own last recalculation. If a slot gets recalculated, it sets the
neighbouring slots' Booleans to true. The propagation step still needs to check
the whole grid for this Boolean and the exit condition, which means that the
number of loops through the grid remains the same, but it no longer needs to
recalculate every slot in the grid. The practical benefit of this change was
not documented, but the effect made it possible to calculate grids many times
larger than originally without crashing Unity.
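The dirty-flag optimisation can be sketched as follows; `allowed` stands in for the per-module neighbour lists, and all names are assumptions (the project's version is a Unity C# method):

```python
def propagate(spaces, neighbours, allowed, dirty):
    """One propagation phase using the dirty-flag optimisation: only slots
    whose neighbourhood changed since their last visit are recalculated.

    `allowed(a, b)` says whether module b may sit next to module a.
    Returns the number of slots actually recalculated.
    """
    recalculated = 0
    changed = True
    while changed:
        changed = False
        for slot in spaces:               # still loops the whole grid...
            if not dirty.get(slot):
                continue                  # ...but skips clean slots cheaply
            dirty[slot] = False
            recalculated += 1
            for n in neighbours[slot]:
                kept = [m for m in spaces[n]
                        if any(allowed(o, m) for o in spaces[slot])]
                if len(kept) < len(spaces[n]):
                    spaces[n] = kept
                    dirty[n] = True       # neighbour changed: mark for revisit
                    changed = True
    return recalculated
```

Seeding `dirty` with only the slot touched in the observation step keeps recalculation local, which matches the reported ability to handle much larger grids.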
imagine it being implemented in an online application. The idea was discarded
after this, but the un-collapsing sub-grid is still used to handle contradictions. If a
contradiction happens, a sub-grid of 4x4 slots, with the source of the contradic-
tion in the center, will un-collapse and re-collapse. If this does not
solve the contradiction, the process repeats until it is solved.
Figure 15: Gradient input image and output from WFC with weighted feature
distribution
neighbour it, the system will have no incentive to place a module that makes
a transition possible for the following slots. By utilizing the grammar system,
this project assigns weights based on the modules' neighbourhood grammar codes.
Instead of weighting tiles directly, the tiles are weighted based on
their grammar code. If the weight of a specific slot dictates that
a floor should be placed, but this is not allowed due to the neighbouring slots,
the system will instead place a module with a grammar code that allows a floor
to be placed next to it. This allows the WFC to better transition into the modules
that the weights dictate should be placed. When the WFC was run without this
system, the modules placed would cluster together
and not allow the weights to work. For example, if a solid module was placed, there
would be a cluster of walls and corners around it, and the WFC would have a hard
time breaking out of it and transitioning to a different type of tile, such as a floor.
The system was implemented to take in a binary image, which controls
the density of the modules placed. To control this, a density factor has to
be calculated for each module so that it can be matched to the input
image. The density factor is calculated based on each module's grammar code
and how many of its neighbours are allowed to be floor tiles. For example, a
floor module is allowed to have four floor modules as neighbours, while a solid
module is allowed zero floor tiles as neighbours. In between are the
corner module, which is allowed two floor modules, and the wall module, which
is allowed one floor module. For each floor module allowed, 0.25 is added to a
variable, giving for example a floor module a value of one and a wall module a
value of 0.25 (see fig. 16).
Next, when collapsing a slot, the density factor is taken into account when
choosing which module is collapsed. This is done by assigning a weight to each
module depending on the value from the input image and the density factor.
The weight is calculated by taking a linear interpolation between (1 - the density
factor) and the density factor by the value from the input image. This is then
raised to the power of eight to give the final weight score (see fig. 17). An
input value of 0.8 will then result in a floor tile having a much higher weight
than a wall tile.
Figure 17: Code snippet for calculating weight when collapsing a slot
In the micro-blob tileset there exist four different density factors, and these
can be plotted to see the correlation between input values, density factor and
the calculated weight (see fig. 18).
Each module in the possibility space then has a weight, and the module to
collapse is chosen with a weighted random selection (see fig. 19).
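Putting the pieces together, the density factors, the lerp-and-power weight and the weighted random selection from figs. 16-19 can be sketched as below (a Python sketch; treating the 0.25-per-floor value directly as the density factor in the lerp is an assumption). With an input value of 0.8, the stated formula gives a floor (density factor 1.0) a weight of 0.8^8 ≈ 0.168 and a wall (0.25) a weight of 0.35^8 ≈ 0.0002, so floors dominate in bright regions.

```python
import random

def density_factor(floor_neighbours_allowed):
    """0.25 per allowed floor neighbour: floor -> 1.0, corner -> 0.5,
    wall -> 0.25, solid -> 0.0 (the four factors in the micro-blob set)."""
    return 0.25 * floor_neighbours_allowed

def weight(density, input_value):
    """Lerp between (1 - density) and density by the input image value,
    then raise to the power of eight to sharpen the distribution."""
    lerped = (1 - density) + (density - (1 - density)) * input_value
    return lerped ** 8

def weighted_pick(modules, weights, rng=random):
    """Weighted random selection of the module to collapse to."""
    return rng.choices(modules, weights=weights, k=1)[0]
```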
Figure 19: Code snippet for weighted random selection
The feature distribution can be used to create open and closed spaces, as
was done in the examples presented in this section. However, the features can be
expanded to much more than this. The features could, for example, be enemies
or power-ups for the player, which could be used to control the difficulty of the
level.
Figure 20: Circle input image and output from WFC with weighted feature
distribution
5.5 Implementation of Growing Grid
The implementation of the GG algorithm was done following the descriptions
by Bernd Fritzke [27][25]. In addition to this, the source code for the Java im-
plementation on the demonstration website was also used. All implementation
was done in Unity3D and written in C#. The implementation was done in two
dimensions but could be adapted to more.
The first step of the algorithm is to initialize the different values of the
algorithm. The distribution is created based on the greyscale values of an input
image's pixels. The starting size of the grid was mostly set to 2x2 but can be
anything lower than the max grid size. The positions of the points are chosen at
random from the distribution. Four other variables are also set: the max number
of points, the neighbourhood range, the epoch size and the learning rate. The
second step is to select a random point from the distribution. The second step
is also the start of an iterative process that goes through steps 2-8. The third
step finds the closest point in the grid to the random point based on Euclidean
distance (see fig. 21).
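Steps two and three can be sketched as drawing a sample and finding the Euclidean-closest grid point (a Python sketch with illustrative names; the project's version is the C# code shown in fig. 21):

```python
def find_winner(points, sample):
    """Step three: the grid point closest to the random sample in
    Euclidean distance wins. `points` maps grid index (i, j) -> [x, y]."""
    return min(points, key=lambda idx: (points[idx][0] - sample[0]) ** 2
                                       + (points[idx][1] - sample[1]) ** 2)
```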
The fourth and fifth steps increment a local counter on the winner and a
global time counter. These values will be used later. The sixth step adapts
each point by changing its position in the direction towards the random point
depending on learning rate and adaption strength (see fig. 22).
The learning rate is a modifier to the distance changed, which ranges from 0
to 0.1. The adaption strength is calculated for each point based on the distance
between the point and the winning point in terms of their indices in the grid and
the neighbourhood range (see fig. 23). This means that the point in the grid
closest to the random point moves towards the random point, as does a
certain neighbourhood around the winning point.
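The adaptation step can be sketched as below (Python); the Gaussian neighbourhood function is the form used in Fritzke's descriptions, though the exact constants in the project's fig. 22-23 snippets may differ:

```python
import math

def adapt(points, winner, sample, learning_rate=0.1, neighbourhood=1.0):
    """Step six: move every grid point towards the random sample.

    `points` maps grid index (i, j) -> [x, y]. The adaption strength is a
    Gaussian of the grid (index) distance to the winner, so the winner
    moves the most and far-away points barely move.
    """
    wi, wj = winner
    for (i, j), p in points.items():
        grid_dist2 = (i - wi) ** 2 + (j - wj) ** 2
        strength = math.exp(-grid_dist2 / (2 * neighbourhood ** 2))
        step = learning_rate * strength
        p[0] += step * (sample[0] - p[0])
        p[1] += step * (sample[1] - p[1])
```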
Figure 24: Code snippet for calculating the distance between points
The seventh step is executed if the global time counter has reached a chosen
epoch size. If not, the algorithm continues from step two again. If the
epoch size has been reached, the point that has been the winning
point most frequently is found (see fig. 25).
Figure 25: Code snippet for finding the points with most wins
Next, the neighbour most distant from the most winning point in
Euclidean distance is found. If the two points are in the same row, a new column will
be inserted, and if they are in the same column, a new row will be inserted. The
new row or column will be between the two points' rows or columns, and the
positions of the new points will be interpolated between their direct neighbours.
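The growth decision of step seven can be sketched as follows (a Python sketch; the (column, row) index convention and the insert position are assumptions — the new points themselves would then be interpolated between their direct neighbours):

```python
def growth_decision(most_wins, farthest_neighbour):
    """Step seven: decide where to grow. If the most winning point and its
    most distant (Euclidean) direct neighbour share a row, a new column is
    inserted between their columns; if they share a column, a new row is
    inserted between their rows. Indices are (column, row) tuples."""
    (wi, wj), (fi, fj) = most_wins, farthest_neighbour
    if wj == fj:                          # same row -> insert a column
        return ("column", min(wi, fi) + 1)
    if wi == fi:                          # same column -> insert a row
        return ("row", min(wj, fj) + 1)
    raise ValueError("direct neighbours must share a row or a column")
```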
If a row or column is inserted, all local counters and the global
time counter are reset. The eighth step checks if the network has
reached the desired network size, which is specified as the total number of
points in the grid. If it has not, the algorithm returns to step two; if it has,
the algorithm leaves the growth phase and enters the fine-tuning phase. The
fine-tuning phase essentially runs the algorithm from steps 2-6 for one more
epoch. This allows the algorithm to smooth out the most recently added row
or column and relax the grid. This phase can also be used to gradually reduce
the learning rate and can be adjusted in length according to need. After the fine-
tuning phase, the GG algorithm is finished, leaving an irregular
quadrilateral grid distributed according to the input image.
After the original algorithm was implemented, a few modifications were added.
The first modification was to use a grayscale image as input instead of a binary
image, where the grayscale value controls the distribution. The values were split
into four intervals, where a higher value increases the number of occurrences in
the distribution and thus gives a denser distribution in the final grid. This
modification made it possible to better control the density of the final grid
through the input image and allowed input images such as heightmaps and
gradients to be used. Using the
final grid with WFC showed that at times the grid would be flipped upside
down. This was detected using the cross product of the first square in the grid;
the grid would then be flipped back if this was the case. The final modification
was the length of each epoch, which was set to a fixed size instead of being
dependent on the size of the grid as it is in the original algorithm. It was found
that the first couple of iterations needed more time to run and the last iterations
needed less. Combining WFC and GG produces maps on an irregular grid,
which we will define as twisted maps, as opposed to maps created on a regular
grid, which will be defined as straight maps.
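The orientation fix described above can be sketched with a 2D cross product of the first square's two edge vectors (a Python sketch; the corner ordering is an assumption):

```python
def is_flipped(p0, p1, p3):
    """Orientation check for the first square in the grid: the z-component
    of the cross product of its two edge vectors tells whether the grid
    came out mirrored; if so, it is flipped back before running WFC.

    p0 is the square's first corner, p1 and p3 its two adjacent corners.
    """
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]   # edge p0 -> p1
    bx, by = p3[0] - p0[0], p3[1] - p0[1]   # edge p0 -> p3
    return ax * by - ay * bx < 0            # negative z: mirrored winding
```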
The Cave Module set
The first module set created with a purpose other than purely testing
if the WFC algorithm worked was the cave module set (see fig. 27). The
cave module set contains five different modules: Floor, Solid, Corner, Inverse
Corner and Wall (see fig. 26). The three rotation-dependent modules (Corner,
Inverse Corner and Wall) are also added to the system with their rotations. This
totals 14 different modules. There are three different grammar codes for the
modules: zero means no mass above floor level, one means half wall above floor
level and two means solid wall. Each of the modules also defines whether its
rotation is inverse and whether it is symmetrical, to enable the grammar to work
on the rotated modules. An example could be that a Floor's grammar is 0,0,0,0
because it contains no solid mass above ground level. Another example could be
that the Corner's grammar is 1,0,0,1. This grammar system is inspired by the same
marching squares system that inspired which modules were needed in the
set. The concept is to define every slot by its four corners instead of its center
point. Defining a corner as "on" or "off" can then be used to create a continuous
boundary between two defined spaces. This has a number of benefits that make
it easier to create procedural assets that connect well together, as well as giving
a binary grammar that defines each possibility.
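The corner-based grammar can be sketched as tuples of four corner codes; rotating a module cyclically shifts its corners, and two modules are edge-compatible when the corner codes along the shared edge agree (a Python sketch; the clockwise corner ordering and function names are assumptions):

```python
def rotations(grammar):
    """All four rotations of a module's corner grammar (one code per
    corner, clockwise from top-left), as used to derive rotated variants."""
    g = list(grammar)
    return [tuple(g[i:] + g[:i]) for i in range(4)]

def compatible_right(left, right):
    """Two modules may sit side by side if the corner codes along the
    shared edge agree: the left module's right edge (corners 1 and 2)
    must match the right module's left edge (corners 0 and 3)."""
    return (left[1], left[2]) == (right[0], right[3])
```

For example, a Floor (0,0,0,0) can sit to the right of a Corner (1,0,0,1), since the Corner's right edge is open, but a Corner cannot sit to the right of a Floor.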
Figure 26: All five modules used for the project. Adding up to 14 modules with
rotations.
Figure 27: A 50 by 50 grid generated from the cave module set
the module set was complete and every module was unique, this would not be
needed, and the continuity parts could be spread unevenly across the edges. As
all the modules in this set are flipped around the vertical axis, continuity features
such as the walls do not need to be centered on the edges.
Figure 28: All ten modules for the hand drawn module set
A two-dimensional module set requires more unique modules than a 2.5-
dimensional module set. When creating a 2.5-dimensional module set, only
five different modules are needed, as the wall, corner and inverse corner can be
rotated and flipped to complete the total 15 modules for a complete micro-blob
set. When creating a two-dimensional module set that uses a standard horizontal
perspective line, only the modules that can be flipped around the vertical axis
can be flipped without breaking the modules. The modules that are flipped
around the vertical axis still work, as their perspective is still intact. This can
be seen in fig. 29, where three different wall modules are needed as well as
two different corner and inverse corner modules. These considerations are
not needed for two-dimensional module sets without perspective, such as flat
patterns or maps.
Figure 29: Example of output from the hand drawn module set
The three dimensional module set
To further evaluate the system and to gain a better understanding of the
visual outcomes, a three-dimensional module set was created. In contrast to
the cave module set, this set was designed to work in 3D grids. Because of the
"natural" setting of the cave set, a concern was raised about how the twisting
and skewing of man-made locations would look. This module set's aim was to
explore this concern. The module set consists of seven unique modules (see fig.
31).
The module set is, like the other sets, based on the marching squares system.
However, there are some omissions compared to the other sets. The non-solid
module from the other sets is not present in this set, as the buildings should
be rectangular like real buildings, allowing windows in every room. If non-
rectangular blocks of buildings were desirable, a solid module could easily be
added to the set, as it does not need a model; the solid module would never be
visible in a three-dimensional system because it is always encapsulated. The
inverse corner is also removed for similar reasons, as adding an inverse corner
could create L-shaped houses, which are not desirable.
6 Evaluation
The evaluation chapter is split into three parts: two evaluations and a third
part with a visual inspection of the generated output. The first evaluation focuses
on the difficulty of the generated output. The second evaluation focuses on the
recognizability of the generated output. In the visual inspection section we
explore how the combined system could be used and test the capabilities of the
system. Evaluating procedurally generated content can be difficult, even more so
when the product is more of a method or technique than an actual video game
or system.
WFC and GG will increase the time it takes for the participants to navigate
the maps. The hypothesis is based on the work by Daniel R. Montello, which
suggests that navigating parallel and orthogonal paths is easier than navigating
maps with twisted paths [48]. Based on this research, we expect the test
participants to have a lower completion time in the non-twisted maps and a
higher completion time in the twisted maps created with WFC and GG.
All modules were placed without any weighting and with chiseling imple-
mented to create a route between two corners of the map. These six maps then
had the GG features removed to generate six additional maps with the same
distribution of modules, creating a total of 12 maps for the evaluation. Par-
ticipants of the evaluation played a total of six of these maps, where each map
would randomly be either twisted with GG or not. Every participant played
the maps in the same order, to account for learning how to control the character.
The end of the labyrinth was marked with a chest filled with gold that the player
would pick up upon arriving at the end. The start was marked with a closed-off
entrance door to the labyrinth. The player was also informed that they were a
poor treasure hunter looking to prove themselves in the labyrinth. All this was
done to give the participants some motivation and some associations with other
games or experiences. The evaluation was done online, using Unity's WebGL
build, and results were returned through automated email. The participants
would therefore evaluate without supervision from us. The evaluation took
approximately 5-10 minutes to complete all six labyrinths. The evaluation was
deployed through social media.
• H0: The variable from which the sample was extracted follows a Normal
distribution.
• Ha: The variable from which the sample was extracted does not follow a
Normal distribution.
The data set for twisted maps returned the following values and QQ-plot:
W: 0.616
p-value: < 0.0001
alpha: 0.05
Figure 33: QQ-plot for the twisted maps data set
As the computed p-value is lower than the significance level alpha=0.05, one
should reject the null hypothesis H0, and accept the alternative hypothesis Ha.
The data set for straight maps returned the following values and QQ-plot:
W: 0.602
p-value: < 0.0001
alpha: 0.05
• H0: The two samples follow the same distribution.
• Ha: The distributions of the two samples are different.
compared to the baseline than when playing the twisted maps. There is a
greater difference between the means than between the medians, which suggests
some outliers in the dataset. This can also be seen in the higher standard
deviation of the twisted dataset, suggesting a greater tendency to get lost when
playing the twisted maps, and in the maximum value of each data set, where
the twisted dataset has a maximum value of 93.6 and the straight dataset has
a maximum value of 61.89. Disturbing the regularity of the grid thus makes
navigation more difficult and increases the tendency to lose spatial orientation.
These results are in line with the work by Daniel R. Montello, where participants
show greater navigational skills in orthogonal surroundings [48]. The results
also confirm the hypothesis proposed in this evaluation.
Figure 36: The six maps the participants had to match with the map they
played.
This second evaluation had some similarities to the first evaluation, as the
participants were asked to walk around a generated map and the players were
trying to gain some sense of the map they were placed in. However, this time
the evaluation was done under our supervision, which made it possible to observe
the evaluation and to ask the participants questions. For each map, the
participants were asked about their confidence in their choice, their overview of
the map and whether they used any specific features to match the pictures to the
maps. After the participants had played their three maps, they were asked about
general techniques they had used and which map they thought was the hardest
to match in hindsight. The evaluation had ten participants.
Biggs et al. suggest players use patterns to orient themselves in procedu-
rally generated maps [47]. However, the use of the patterns had no significant
impact on their measured ability to navigate. The use of patterns could be
similar to the use of twisted maps, as the twisted maps contain unique versions
of the modules that could be noticeable. The hypothesis of the evaluation is:
when twisting the maps with WFC and GG, participants will have an increased
ability to match the maps to the pictures.
Map type   Correct guesses
Straight   60%
Twisted    80%
The participants' guesses show that it was slightly easier for them to match
the maps in the twisted versions. The self-assessment answers following each
question (see fig. 6.2) show similar results.
The answers to the question "Did you base your choice on a specific feature
of the map?" show that in 10 of the 15 played twisted maps, the partici-
pants used the shape of the buildings in some way, indicated by keywords such
as "curved", "flat", "thick" and "triangle". 19 of the total 30 answers mentioned
the height of the buildings, indicated by the words "tower" and "height" or
by mentioning the number of floors. Of the 19 answers, 10 were for straight maps
and nine for twisted maps. For the question "What techniques did you use to
investigate the map?", asked after the participants had played three maps, eight
participants mentioned the height of the buildings, three mentioned the shape
of the buildings and three mentioned the corners of the maps. For the question
"Which map would you rate the most difficult to match to the picture?", six
participants named a straight map and four a twisted one. Additionally, one
participant mentioned a straight map being the easiest and one participant
mentioned a twisted map being the easiest.
6.3 Evaluation through visual inspection
This section evaluates the project through a visual inspection, showcasing
what the different algorithms of the project are capable of achieving. The pur-
pose of this section is to visually present different use cases of the algorithms and
discuss advantages and disadvantages of this approach to PCG. The algorithms
presented are GG, chiseling and feature distribution. The algorithms will be
tested on the same shapes.
Visual inspection
Using GG on a contained shape works well and the final grid will take on
the shape of the input (see fig. 38).
Figure 40: Growing Grid and Wave Function Collapse on various shapes
Figure 41: Growing Grid and Wave Function Collapse on various shapes
The feature distribution will be inspected using two different inputs (see fig.
42). The lines input shows how an input will be twisted when used together with
GG (see fig. 43). The edge input shows how some inputs can be implemented
regardless of the shape of the grid (see fig. 44).
Knowing how the input is transformed when applied to a twisted grid is
important when working with the two algorithms together. General inputs such
as an edge or a line through the entire image will work well, while specific shapes
might get skewed.
Figure 43: Growing Grid, Wave Function Collapse and feature distribution on
various shapes with lines input
Figure 44: Growing Grid, Wave Function Collapse and feature distribution on
various shapes with edge input
Chiseling is applied to all the shapes with the line input for feature distri-
bution to show how the path is generated before everything else (see fig. 45).
It is set to go from one corner to the opposite corner in the following examples.
The feature distribution will continue to work outside of the path regardless of
the shape of the path.
Figure 45: Growing Grid, Wave Function Collapse, feature distribution and
chiseling on various shapes
Figure 46: Input for Growing Grid and feature distribution for creating an arena
The arena shows how a distribution input can be used with general shapes,
such as an edge and a line through the map, to create the map. In addition to
this, random dots can be placed around the map; the exact positions of these
are not important, but they help create a certain aesthetic in the map.
7 Discussion
The aim of this project was to utilize multiple state-of-the-art systems for
PCG and combine them into a single system. Important factors in game
design, such as diversity, recognizability, traversability and controllability,
were chosen as the main focus for the algorithm. The final algorithm proved
to work well, and the evaluation showed that the main focus of the algorithm
was achieved. However, the results only show how the system compares to a
different version of the WFC algorithm, not how it compares to other PCG
algorithms. The steps taken to meet the requirements of the algorithm are
therefore a step in the right direction, but with no comparison to other
PCG algorithms it is hard to predict the usefulness of the system. Using GG
and the transformation of the grid as a factor in both tests was interesting,
but to further solidify the results, a metric for the transformation would be
needed. It is difficult to measure the transformation of the grid; however, it
would be interesting to match the results to a metric of how much the grid
was transformed. Some of the systems implemented were not evaluated, as their
output is certain; chiseling and feature distribution both ensure a certain
output. Both could instead be evaluated as designer tools, testing whether
a designer would be able to use them when making a map. Evaluating the
project as a designer tool was considered throughout the process. The goal of
the algorithm is to be used for a video game by a game designer, and thus the
usability of the algorithm is important. The current evaluation was chosen
because the functionality of the algorithm was deemed more important to
evaluate than its usability.
8 Conclusion
This report explored a novel PCG algorithm, WFC. By analyzing
state-of-the-art applications of PCG and WFC in the video game industry,
diversity, difficulty control and traversability were selected as three areas where
WFC could be improved, to increase its chances of use in future game projects.
To improve these areas, WFC was combined with the GG algorithm because
of its ability to create diverse output. Traversability was solved with the use of
chiseling, and a solution to control feature distribution was also implemented.
To evaluate how the difficulty could be adjusted, an evaluation was done with
44 participants navigating mazes created with a combination of all
three algorithms. The results from the first evaluation show that participants
navigated the straight maps faster than the twisted maps, which suggests that
the difficulty of a possible game could be adjusted by disturbing the regularity
of the grid. The second evaluation suggests that by using the proposed system,
maps can become more recognizable. Adjusting the difficulty and ensuring
recognizable maps are two of the important factors mentioned in the analysis,
and both can be achieved by using GG in combination with WFC. The visual
inspection shows how diverse output can be created using this algorithm. This
indicates that our efforts to create diverse and meaningful content were successful
and could be expanded for future use in video games.
9 Bibliography
[1] Mark Hendrikx, Sebastiaan Meijer, Joeri Van Der Velden, and Alexandru
Iosup. Procedural content generation for games: A survey. ACM Trans.
Multimedia Comput. Commun. Appl., 9(1):1:1–1:22, February 2013.
[2] Julian Togelius, Emil Kastbjerg, David Schedl, and Georgios N. Yan-
nakakis. What is procedural content generation?: Mario on the borderline.
In Proceedings of the 2Nd International Workshop on Procedural Content
Generation in Games, PCGames ’11, pages 3:1–3:6, New York, NY, USA,
2011. ACM.
[3] Noor Shaker, Julian Togelius, and Mark J. Nelson. Procedural Content
Generation in Games: A Textbook and an Overview of Current Research.
Springer, 2016.
[4] Julian Togelius, Alex J. Champandard, Pier Luca Lanzi, Michael Mateas,
Ana Paiva, Mike Preuss, and Kenneth O. Stanley. Procedural Content
Generation: Goals, Challenges and Actionable Steps. In Simon M. Lu-
cas, Michael Mateas, Mike Preuss, Pieter Spronck, and Julian Togelius,
editors, Artificial and Computational Intelligence in Games, volume 6 of
Dagstuhl Follow-Ups, pages 61–75. Schloss Dagstuhl–Leibniz-Zentrum fuer
Informatik, Dagstuhl, Germany, 2013.
[5] Gillian Smith. An analog history of procedural content generation. In FDG,
2015.
[6] Aristid Lindenmayer. Mathematical models for cellular interactions in de-
velopment i. filaments with one-sided inputs. Journal of theoretical biology,
18:280–99, 04 1968.
[12] Damian Isla. Forging the river in the flame in the flood.
https://www.youtube.com/watch?v=6N56YpHCHBM, 2016. [Online; ac-
cessed 22-may-2019].
[13] Noor Shaker, Georgios Yannakakis, and Julian Togelius. Towards auto-
matic personalized content generation for platform games. In Proceedings
of the Sixth AAAI Conference on Artificial Intelligence and Interactive
Digital Entertainment, AIIDE’10, pages 63–68. AAAI Press, 2010.
[14] Noor Shaker, Georgios N. Yannakakis, and Julian Togelius. Towards player-
driven procedural content generation. In Proceedings of the 9th Conference
on Computing Frontiers, CF ’12, pages 237–240, New York, NY, USA,
2012. ACM.
[15] Michael C. Toy and Kenneth C. R. C. Arnold. A guide to the dungeons
of doom. https://docs.freebsd.org/44doc/usd/30.rogue/paper.pdf.
[Online; accessed 22-april-2019].
[16] Sebastien Benard. Building the level design of a procedurally generated
metroidvania: a hybrid approach. https://www.gamasutra.com/blogs/
SebastienBENARD/20170329/294642/Building_the_Level_Design_of_
a_procedurally_generated_Metroidvania_a_hybrid_approach.php,
2017. [Online; accessed 22-May-2019].
[26] Bernd Fritzke. A growing neural gas network learns topologies. In Advances
in neural information processing systems, pages 625–632, 1995.
[27] Bernd Fritzke. Growing grid. https://demogng.de/JavaPaper/node24.
html#SECTION00730000000000000000, 1997. [Online; accessed 22-May-
2019].
[29] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution
3D surface construction algorithm. SIGGRAPH Comput. Graph.,
21(4):163–169, August 1987.
[30] Chien-Chang Ho, Fu-Che Wu, Bing-Yu Chen, Yung-Yu Chuang, and Ming
Ouhyoung. Cubical marching squares: Adaptive feature preserving surface
extraction from volume data. Computer Graphics Forum, 24(3):537–545,
2005.
[31] CatlikeCoding. Marching squares, a Unity C# tutorial. https://
catlikecoding.com/unity/tutorials/marching-squares/, 2019. [On-
line; accessed 9-May-2019].
[38] Gillian Smith and Jim Whitehead. Analyzing the expressive range of a level
generator. In Proceedings of the 2010 Workshop on Procedural Content
Generation in Games, PCGames ’10, pages 4:1–4:7, New York, NY, USA,
2010. ACM.
[39] N. Shaker, M. Nicolau, G. N. Yannakakis, J. Togelius, and M. O’Neill.
Evolving levels for Super Mario Bros using grammatical evolution. In 2012
IEEE Conference on Computational Intelligence and Games (CIG), pages
304–311, Sep. 2012.
[40] Britton Horn, Steve Dahlskog, Noor Shaker, Gillian Smith, and Julian
Togelius. A comparative evaluation of procedural level generators in the
Mario AI framework. 2014.
[41] Cameron Browne and Frederic Maire. Evolutionary game design. Compu-
tational Intelligence and AI in Games, IEEE Transactions on, 2:1 – 16, 04
2010.
[42] Isaac Karth and Adam M. Smith. WaveFunctionCollapse is constraint
solving in the wild. In Proceedings of the 12th International Conference on
the Foundations of Digital Games, FDG ’17, pages 68:1–68:10, New York,
NY, USA, 2017. ACM.
[43] Hugo Scurti and Clark Verbrugge. Generating paths with WFC. CoRR,
abs/1808.04317, 2018.
[44] Andreas Wulff-Jensen, Niclas Nerup Rant, Tobias Nordvig Møller, and
Jonas Aksel Billeskov. Deep convolutional generative adversarial network
for procedural 3D landscape generation based on DEM. In Anthony L.
Brooks, Eva Brooks, and Nikolas Vidakis, editors, Interactivity, Game
Creation, Design, Learning, and Innovation, pages 85–94, Cham, 2018.
Springer International Publishing.
[45] Ian Horswill and Leif Foged. Fast procedural level population with playabil-
ity constraints. In Proceedings of the Eighth AAAI Conference on Artificial
Intelligence and Interactive Digital Entertainment, AIIDE’12, pages 20–25.
AAAI Press, 2012.
[46] D. H. Adrian and S. C. Ana Luisa. An approach to level design using pro-
cedural content generation and difficulty curves. In 2013 IEEE Conference
on Computational Intelligence in Games (CIG), pages 1–8, Aug 2013.
[47] Michael Biggs, Ute Fischer, and Michael Nitsche. Supporting wayfinding
through patterns within procedurally generated virtual environments. Pro-
ceedings of Sandbox 2008: An ACM SIGGRAPH Videogame Symposium,
Sandbox ’08, 2008.
[48] Daniel R. Montello. Spatial orientation and the angularity of urban routes:
A field study. Environment and Behavior, 23(1):47–69, 1991.
10 Appendix
1. Raw test results from evaluation 1
2. Raw test results from evaluation 2, translated from
Danish