We split views into two groups: those related to controlling the game through the GUI (buttons, menus, dialogs, labels, and a game inventory or a mini-map, for instance), and those that are part of the actual game (waypoints, the player, and game elements that need to be updated depending on the game logic or even on GUI changes). For this reason, we also have two types of view controllers: the game elements' controller and the GUI controller. Figure 2 presents a high-level view of this architecture. When one of these controllers changes the game logic, the other receives a notification about the changes and can update its views.

3.1. MVC in SimVR-Trei

Figure 3: Simulator integration

With the proposed architecture, it is simple to add components such as an external simulator. One of our projects uses an external simulator SDK to measure a solar power plant's performance. The user can change the input parameters from a simulation GUI. Even during the training session, he or she can change simulation parameters as the result of some action started in the 3D world. For instance, the player can travel to a solar panel control box, which is a three-dimensional object in the virtual environment, and change the angle of a group of solar panels, which affects the performance of the solar power plant. Figure 3 shows how an external simulator can be integrated into the architecture. The simulator is accessed through a simulation service API, preferably defined as an interface (a possible sketch of such an interface is given at the end of this section).

We developed our own waypoint system, but some engines may already provide one. We created three classes: Waypoint, WaypointMover, and WaypointsHolder. The first, Waypoint, is a script that contains waypoint visualization options and associated actions; we want to be able to place the waypoints visually when creating the scene. This class stores not only the waypoint's color and size, but also the character's activation distance to it, its type (regular waypoint, "choose path", or "turn around"), and its looping type. WaypointMover is the most important script; it controls an object's movement along a waypoint path. It holds a WaypointsHolder, which contains all the waypoints and renders them as if they were connected, as shown in Figure 4. (A sketch of these classes is given below.)

Figure 4: Waypoint paths

Figure 5: Thermal camera

To illustrate this, consider our thermal camera view, shown in Figure 5. During a training session with one of our applications, the user must use the thermal camera to photograph failures on a solar panel. The thermal camera is a game object with several components, such as a Transform and a RenderTexture. The Transform component determines the location of the game object in the 3D world, and a second camera in the scene renders its image (with a thermal effect) to the thermal camera's texture. This is a simple way to implement the thermal camera, and it is possible thanks to the component architecture. However, the thermal camera view needs more than that: it has buttons for switching between thermal and regular camera modes and a button for taking a picture. When the regular camera mode button is pressed, the image effect must change. Furthermore, all of these actions must be reported to the game logic, for example to decrease the player's score when a failure is photographed with the wrong camera mode. In other words, the view needs a controller that can receive events and handle them properly.
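As a concrete illustration of that view-controller pair, the sketch below shows one possible shape for it in a Unity-style engine (the component names Transform and RenderTexture above suggest such an engine). This is a minimal sketch under our own assumptions, not the actual SimVR-Trei code: ThermalCameraView, ThermalCameraController, and the GameLogic facade with its Instance, NotifyCameraMode, and ReportPhoto members are all hypothetical names.

```csharp
using UnityEngine;

// Hypothetical game-logic facade (a stand-in for SimVR-Trei's game logic).
public class GameLogic
{
    public static GameLogic Instance { get; } = new GameLogic();

    public void NotifyCameraMode(bool thermal) { /* update logic state */ }

    public void ReportPhoto(Vector3 target, bool thermalMode)
    {
        // Here the logic would check whether `target` is a failure and
        // whether `thermalMode` was the correct mode, adjusting the score.
    }
}

// Hypothetical controller: receives events from the view and forwards
// them to the game logic.
public class ThermalCameraController
{
    private readonly GameLogic logic;
    private bool thermalMode = true; // current camera mode

    public ThermalCameraController(GameLogic logic) { this.logic = logic; }

    public void OnModeButton()
    {
        thermalMode = !thermalMode;          // switch thermal <-> regular
        logic.NotifyCameraMode(thermalMode); // report the change to the logic
    }

    public void OnTakePicture(Vector3 target)
    {
        logic.ReportPhoto(target, thermalMode);
    }
}

// Hypothetical view: a script on the thermal camera game object, with its
// two public methods bound to the GUI buttons.
public class ThermalCameraView : MonoBehaviour
{
    public Camera secondCamera;          // the camera applying the thermal effect
    public RenderTexture thermalTexture; // the texture shown by the view

    private ThermalCameraController controller;

    void Start()
    {
        controller = new ThermalCameraController(GameLogic.Instance);
        secondCamera.targetTexture = thermalTexture; // route the image to the texture
    }

    public void ModeButtonPressed()  { controller.OnModeButton(); }
    public void PhotoButtonPressed() { controller.OnTakePicture(transform.position); }
}
```

The point of the split is that the view only renders and forwards button events, while the controller is the single place that talks to the game logic, so scoring rules can change without touching the view.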
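The waypoint classes described earlier in this section could look roughly as follows. Again, this is a hedged sketch: the field names, the gizmo-based rendering, and the movement code are our guesses at one reasonable implementation, not the published SimVR-Trei classes (looping behavior is omitted for brevity).

```csharp
using UnityEngine;

public enum WaypointType { Regular, ChoosePath, TurnAround }

public class Waypoint : MonoBehaviour
{
    // Visualization options, so waypoints can be placed visually in the scene.
    public Color color = Color.yellow;
    public float size = 0.3f;

    public float activationDistance = 1.0f; // how close the character must get
    public WaypointType type = WaypointType.Regular;

    // Draw the waypoint in the editor view.
    void OnDrawGizmos()
    {
        Gizmos.color = color;
        Gizmos.DrawSphere(transform.position, size);
    }
}

public class WaypointsHolder : MonoBehaviour
{
    public Waypoint[] waypoints;

    // Render the path as if the waypoints were connected (Figure 4).
    void OnDrawGizmos()
    {
        if (waypoints == null) return;
        for (int i = 0; i + 1 < waypoints.Length; i++)
            Gizmos.DrawLine(waypoints[i].transform.position,
                            waypoints[i + 1].transform.position);
    }
}

public class WaypointMover : MonoBehaviour
{
    public WaypointsHolder holder; // all waypoints of the path
    public float speed = 2.0f;
    private int current = 0;

    void Update()
    {
        if (holder == null || holder.waypoints == null
                || current >= holder.waypoints.Length) return;
        Waypoint target = holder.waypoints[current];

        // Move toward the current waypoint...
        transform.position = Vector3.MoveTowards(
            transform.position, target.transform.position, speed * Time.deltaTime);

        // ...and advance once we are within its activation distance.
        if (Vector3.Distance(transform.position, target.transform.position)
                <= target.activationDistance)
            current++;
    }
}
```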
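Finally, the simulation service API of Figure 3 could be expressed as an interface along the following lines; the interface and method names are illustrative assumptions, and a concrete implementation would wrap the external simulator SDK.

```csharp
// Hypothetical simulation service interface (Figure 3). Game elements and
// the simulation GUI both talk to the simulator only through it.
public interface ISimulationService
{
    // Change an input parameter, either from the simulation GUI or from an
    // action in the 3D world (e.g., adjusting a group of solar panels' angle).
    void SetParameter(string name, float value);

    // Read back a simulation result, such as the plant's current performance.
    float GetOutput(string name);
}
```

Keeping the API behind an interface means the external SDK can be replaced, or stubbed out in tests, without changing the controllers that call it.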
Figure 6: Character control system

Figure 8: AmbSim screenshot

AmbSim (Figure 8) is an application for training operations on an oil rig. The operations strictly follow a manual, as well as general safety measures. All actions in the virtual environment are sent to a simulator, which is responsible for providing the behavior of the systems and equipment on the oil rig.

Figure 9: AmbSim using Kinect

Figure 10: AmbSim using Oculus Rift

Figure 11: Web client for smartphone access

Figure 12: Solar screenshot

Solar (Figure 12) is an application for training routines in a solar power plant. These routines require the user to have basic knowledge of the plant layout and of the types of failures that might occur. The application teaches and tests the response to potential situations.

Figure 13: Input-sensitive visual cues

Two different images are shown in Figure 13. The left image represents the expected gesture when the input system is using Kinect; the right image represents the expected button input when the input system is using the keyboard. Both perform the same action in the world (move forward).

Figure 14: Alternative Solar application
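The input-sensitive cues of Figure 13 suggest an input abstraction in which each input system maps its own events to the same world action and supplies its own visual cue. A minimal sketch, with interface and class names of our own invention:

```csharp
using UnityEngine;

// Hypothetical input abstraction: each input system triggers the same
// world action (move forward) and provides its own visual cue (Figure 13).
public interface IInputSystem
{
    bool MoveForwardRequested(); // did the "move forward" input fire this frame?
    string MoveForwardCue();     // cue shown to the user for this input system
}

public class KeyboardInput : IInputSystem
{
    public bool MoveForwardRequested() => Input.GetKey(KeyCode.W);
    public string MoveForwardCue() => "Press W to move forward";
}

// A Kinect-based implementation would return true when the expected body
// gesture is recognized and would show the gesture image as its cue.
```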