homogenize. Similar to other critical professions, such as medicine or architecture, IT auditors need to complete a certain supervised working period, and advanced work areas typically require continuous training. To organize professional standards, and to control the adherence of auditors to these standards, governments should facilitate IT auditing communities. In the Netherlands, for instance, IT auditors require an accredited part-time IT auditing university degree and at least 3 years of relevant practice. Becoming an auditor could itself be considered an audit and should, therefore, be transparent and controllable. This includes a professional association with mandatory ethical and quality control standards and the possibility to dispute professional issues.

Section 2.2 discusses the overarching blockchain concepts. The last section, Sect. 2.3, presents an overview of a typical blockchain architecture that explains the relation between some of the overarching concepts.

Fig. 2 The Byzantine Generals Problem. When all nodes behave honestly and work together the system works; otherwise it fails

The distributed ledger prevents double-spending when all nodes behave honestly. However, not all nodes can be trusted, as some might act maliciously and propose incorrect versions of the distributed ledger for their own gain, for instance by introducing non-valid transactions to increase their own balance. The literature on distributed systems refers to this issue as the Byzantine Generals Problem (Lamport et al., 2019). Figure 2 illustrates this problem.

Fig. 3 Example of three texts translated into three unique hash digests. Note how although the length of the texts differs, the length of the hash is always 64 symbols

Bitcoin uses Proof-of-Work (PoW) to validate transactions. This PoW entails that nodes use their computational power to "vote" on the validity of transactions instead of IP addresses, effectively meaning that the majority of computational power within the network decides. Although it might be easy for someone of ill intent to amass several IP addresses, obtaining a large amount of computational power is likely to be more difficult. Nodes deliver their PoW by solving a computationally difficult mathematical puzzle. The first node to solve the puzzle is granted some Bitcoin as a reward. Finding the solution to the puzzle requires finding the right nonce (a random number) that matches the header of the current block, given information of the prior block. The process of finding the right solution to build a block is called mining, and nodes that make the effort to solve the puzzle are referred to as miners. Only one miner can be the first to mine a block. Whenever a node has found the right solution, it propagates the constructed block to the other nodes. The other nodes then verify the correctness of the block and, if correct, append it to their copy of the ledger.
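To make the mining loop concrete, the following minimal Python sketch mimics it. This is an illustration only, not Bitcoin's actual implementation: the difficulty target is simplified to a required number of leading zeros in the SHA-256 digest, and the block header is reduced to a prior-block hash, a transaction string, and the nonce.

```python
import hashlib

def mine_block(prior_hash: str, transactions: str, difficulty: int = 4):
    """Try nonces until the block digest starts with `difficulty` zeros."""
    nonce = 0
    while True:
        header = f"{prior_hash}{transactions}{nonce}".encode()
        digest = hashlib.sha256(header).hexdigest()  # always 64 hex symbols
        if digest.startswith("0" * difficulty):
            return nonce, digest  # the "right" nonce and the block's hash
        nonce += 1

nonce, digest = mine_block("0000abcd", "Alice pays Bob 1 BTC", difficulty=4)
print(nonce, digest)
```

Note how each digest is always 64 hexadecimal symbols, as in Fig. 3, regardless of the input length, and how raising the difficulty exponentially increases the computational work needed to find a valid nonce.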
Fig. 5 Graphical example of the longest chain rule. Eventually all nodes will accept the bottom branch as it is the longest of the two

Table 1 Spectrum of blockchain network arrangements

The functionality of a public blockchain is usually encapsulated in the algorithms that the platform uses to process data, among other things. Such features are therefore not easily changed. Unfortunately, the fact that anyone can join the network and perform all possible actions might be considered inconvenient by some organizations, as their control over the platform is diminished. Moreover, public blockchains require complete transparency of the transaction history, which is sometimes at odds with the privacy concerns of an organization. Combined, these two factors have led to the introduction of permissioned private and consortium blockchains. Proponents of such blockchains advocate that more privacy and access control is needed to guarantee that the blockchain can be used for business. Rather than having one network that is owned by all participants, private/consortium blockchains are owned by a consortium of organizations or even one organization. Contrary to public blockchains, most private and consortium blockchains have tailor-made distributions of the permissions each participant is granted. Therefore, these types of networks can be considered permissioned. Projects like Hyperledger Fabric (Androulaki et al., 2018) provide frameworks to build these consortium/private networks. There are also blockchains that combine features of both architectures.

Blockchain networks provide the technical infrastructure on which several services like smart contracts can be run. As said, ultimately the blockchain infrastructure is what enables such services.

Within a data set, features can be discerned, which are the characteristics or properties of an observation (Bishop, 2006). Relations that the machine has learned are represented as models that express these relations as parameters, variables, or other mathematical concepts like vectors. ML algorithms can learn in a descriptive, predictive, or prescriptive manner from a provided data set. These types of learning differ from one another because the aims of the learning process are different. Descriptive learning focuses on extracting relations between features in the data set with the aim of understanding and laying bare these relations. For instance, a data set encompassing several customers of a firm can be used to learn how customers are grouped, and on the basis of what characteristics.

Fig. 8 Schematic depiction of a neural network

Oftentimes deep learning is discerned as another subset of machine learning. Like "normal" machine learning, deep learning can be employed for descriptive, predictive, and prescriptive purposes and can also be taught to learn using a supervised, unsupervised, or reinforced learning approach. What sets deep learning apart from other machine learning approaches is how the relations between features are stored. Neural networks often consist of many hidden layers to extract and store features from data. In essence, neural networks are data structures modeled to resemble the human brain. Figure 8 depicts a schematic version of a neural network.

Neural networks are vastly complex multi-layered networks. Similar to a human brain, a neural network encompasses several nodes (similar to a neuron) that are inter-connected, which allows data to be passed between them. The neural network always encompasses an input layer and an output layer. In between the input and output layer there are multiple hidden layers. Some neural networks encompass hundreds of hidden layers, whereas others have only a few. The hidden layers in a neural network pass on data from the input layer and provide a subsequent outcome to the output layer.
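As a purely illustrative sketch (the layer sizes are made up, and the random weights stand in for what training would normally learn), the following Python code builds the smallest version of this structure: an input layer with three features, one hidden layer of four nodes, and a single output node.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Input layer of 3 features, one hidden layer of 4 nodes, 1 output node.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # input  -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output weights

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)         # ReLU activation in the hidden layer
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output, e.g. a class probability

print(forward(np.array([0.2, -1.0, 0.5])))
```

Data enters at the input layer, is transformed by the hidden layer, and a single outcome is delivered at the output layer, exactly the flow described above.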
Due to the complexity of neural networks it is difficult, if not impossible, to understand what happens when data is passed between the nodes. Therefore, neural networks in all of their different shapes and sizes are considered a black box, meaning that we know the input and the output of the algorithm but not what happens during the processing of the data. How neural networks are structured strongly depends on the deep learning algorithm used to perform a task. In turn, research (Pouyanfar et al., 2018) has demonstrated that some types of deep learning algorithms are more suitable than others for a specific task. Hence, there is often a strong relation between the task at hand and the type of neural network employed to store the data. Roughly speaking, neural networks can be divided into two groups: convolutional neural networks and recurrent neural networks. Convolutional neural networks are predominantly used for image recognition, whereas recurrent neural networks are mostly used for temporal or sequential data (e.g., a movie).

Table 2 Differences between a CNN and RNN

Recurrent neural networks are better equipped to work with temporal or sequential data. This is largely due to the fact that recurrent neural networks use the input of prior nodes in the network to weigh in their information in order to establish the relation between input and output. Effectively this constitutes an internal memory that is able to distinguish important details such as those related to the input they received. Using its memory, the neural network is able to predict what will come next. This important characteristic of an RNN makes them highly usable for tasks related to speech, video, and text. The key takeaway about RNNs is that when sequence is of the essence, an RNN will learn a far more profound understanding of the sequence as compared to other algorithms.

Because we already know how to calculate the precision and recall, we can simply plug these calculations into the formula: F1 = 2 × (precision × recall) / (precision + recall), that is, we multiply precision by recall, divide by the sum of precision and recall, and multiply the result by two. Please note that despite the fact that the F1 score is a commonly used metric, there is an ongoing debate on the appropriateness of the metric. In the example here above, we only used the F1 score to calculate an algorithm's performance on two classes. However, an adjusted version of the F1 score can also be used for multi-classification testing.

Among other distinctions, the convolutional neural network can be used for several tasks like image recognition and object and face detection, using a set of activation functions. The training examples (i.e., images) can be both labeled for supervised learning and unlabeled for unsupervised learning. The algorithm regards each input image as an array of pixels translated into a matrix. This matrix usually pertains data in the form of Height x Width x Dimension. To illustrate how this works, consider an image of 20 pixels x 15 pixels x 1, where the 1 denotes the color depth. The range of the numbers that are stored in the matrix is referred to as the color range. Hence, the color range strongly dictates the maximum number of colors that can be used. For RGB colors that are a mixture of red, green, and blue, often used in images, this range is from 0 to 255. After converting the matrices to a plethora (sometimes millions) of features, labels can be added to the images to train the model.

Fig. 14 Image recognition task with different classes of labels
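The matrix representation can be made concrete with a short sketch. The image content and label below are hypothetical; depth 1 follows the example in the text, while a full RGB image would use depth 3, with each channel value in the 0-255 color range.

```python
import numpy as np

# A hypothetical 20 x 15 pixel image stored as Height x Width x Dimension,
# with depth 1 as in the example; a full RGB image would use depth 3.
rng = np.random.default_rng(seed=0)
image = rng.integers(0, 256, size=(20, 15, 1), dtype=np.uint8)  # values in 0..255

features = image.reshape(-1)       # flatten the matrix into a feature vector
print(image.shape, features.size)  # -> (20, 15, 1) 300

label = "field"  # a (hypothetical) label attached for supervised training
```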
The IaaS component provides the computing resources to the client. These resources include virtual machines (VMs), data storage, connected networks, and other utilities through a service model. The promise and premise of cloud computing is founded on the hardware the cloud provider provides. SaaS is the next component in a cloud computing service layer; it refers to the delivery of an application. These applications are delivered via the network (infrastructure) to users of the applications. Users of SaaS are divided into several groups: organizations that give access to the software applications, the administrators charged with the configuration of the software application, and end users. For each cloud computing provider, there are several manners to calculate the costs of deploying an application. Some cloud providers charge based on the number of end users that use the application, others on the time used, the volume of the data stored, or its duration. The PaaS component binds the other two layers together and provides a platform for the client of a cloud provider to use tools and other resources to develop, test, and deploy their applications. During the life span of the application, the PaaS layer also enables the management of the hosted application. Among the users of PaaS are application developers, testers, owners, and administrators. Between the service layer and the physical resource layer is the abstraction and control layer.

The Committee of Sponsoring Organizations of the Treadway Commission (COSO) provides companies with a framework for internal management and control. COSO is a "top-down and risk-based approach" framework. It has been inserted at the highest level of the Audit Committee, the board of directors, and senior management to create broader support for the urgency, importance, and follow-up. The COSO cube in Fig. 1 depicts the framework. On the top face of the COSO cube the objectives that a company strives for are displayed, and on the front face the required five components to attain these objectives.

The second quadrant comprehends several processes; within this quadrant, the operation is tested over a period of time. A third quadrant is related to the application controls (AC). The AC configuration device of an application allows automatic controls to be done within the application. In contrast to quadrant 1 (what you are allowed to do), in this quadrant it becomes clear what you can actually do. It is a recording of the existence of the control. The final and fourth quadrant concerns the manual controls (MCs). Manual procedures are described herein. Between the MC and AC there is an intermediate form, "the IT-dependent control." An example of this is extractions from databases that are subsequently checked manually.

The effective design of application controls can be demonstrated by proof of existence. For application controls and IT-dependent controls to work continuously and undisturbed, it is important that the ITGCs have also worked effectively during a control period. The ITGCs are preconditional and test the operation of the controls over a period of time. Without a sufficient level of segregation of duties and ITGCs, it cannot be determined with reasonable certainty whether user controls or application controls have worked during the controlling period.
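A minimal sketch of such an automated check, assuming a hypothetical extract of journal entries with separate "created by" and "approved by" fields, could look as follows; a real segregation-of-duties test would of course draw on the actual ERP tables and role assignments.

```python
# Hypothetical journal entries extracted from the ERP: (id, created_by, approved_by)
entries = [
    (1001, "alice", "bob"),
    (1002, "carol", "carol"),  # created and approved by the same user
]

def sod_violations(entries):
    """Flag entries where the creator also approved: a segregation-of-duties breach."""
    return [entry for entry in entries if entry[1] == entry[2]]

print(sod_violations(entries))  # -> [(1002, 'carol', 'carol')]
```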
In the financial external audit, it is possible to deviate from relying on the ITGCs if these prove insufficient during the systems audit. Firms can then choose to use data analysis to determine whether there has been a deviation during that period, which is referred to as data-oriented checks. If there are no deviations or unfamiliar data patterns in the sampled population of the data, it can be stated that the risk has not materialized during that period and that there is no reason to believe that a material error has occurred in that process.

The bookings are made automatically (steps 4 and 5). Before elaborating further on the IT-controls, this paragraph starts by elucidating the architecture of this customized corporate blockchain. In this case study, the network configuration is set up with a so-called "notary node," which is the node in the network that stores and activates the smart contracts. Company A and Subsidiary B are present in the network with their own network "node." After validation and execution of the smart contract, only the bookings are made to their ledgers. This is different in a "public permissionless" network such as the bitcoin blockchain, in which everyone receives a copy; that is not the case in this corporate blockchain. Node-to-Node is the consensus mechanism that makes this possible.

3 IT-Controls: "AS-IS Control Environment"

Differences can arise on the transit balance sheets, and detecting unwanted actions takes time because of the manual and IT-dependent checks. The IT-control objectives are present on all control levels, apply to the central financial ERP system and its surroundings, and are described in Table 4 below. The purpose of the intercompany settlement process in a corporate blockchain remains unchanged; the real-time transaction processing and encryption contribute to an improvement of the efficient and effective processing of transaction data and the integrity of the financial data. Transparency has increased thanks to "shared ledgers." In the current situation, the financial ERP is leading in financial accountability. With differences in the blockchain "shared ledgers," the risk is that an entity can claim to have the financial "single source of truth." The smart contract can offer a solution in case of a difference, but parties can create and delete fraudulent smart contracts. The changing risks have an impact on the IT-control objectives and are described in Table 5 below.

Table 7 TO-BE entity level controls

Changes require collaboration between blockchain parties when they decide to deploy changes on the network. In a consortium with blockchain participants, agreements will be made about these vital network components, which can impact the integrity and confidentiality of the blockchain network. The "AS-IS and TO-BE" intercompany settlement objectives, efficient and effective processing of transaction data and the integrity of the financial data, remain the same in both situations. The risks and control objectives change. The existing risks are identified to secure a central ERP application. The new risks are based on a blockchain network and have a direct impact on the IT-controls on all levels.
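To illustrate the idea (this is not the platform's actual code; the class, field names, and amounts are hypothetical), a smart contract that validates an intercompany invoice before the bookings are posted might be sketched as follows.

```python
from dataclasses import dataclass

@dataclass
class SettlementContract:
    """Hypothetical digital record of the agreement between buyer and seller."""
    buyer: str
    seller: str
    agreed_amount: float

    def validate(self, invoice_amount: float) -> bool:
        # The notary node would run this check before any booking is made.
        return invoice_amount == self.agreed_amount

contract = SettlementContract("Company A", "Subsidiary B", 10_000.0)
if contract.validate(10_000.0):
    booking_a = ("payable", "Subsidiary B", 10_000.0)    # posted to Company A's ledger
    booking_b = ("receivable", "Company A", 10_000.0)    # posted to Subsidiary B's ledger
```

The design point is that the check runs once, centrally, on the notary node, and both parties' ledgers are only updated after successful validation.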
Table 8 AS-IS transaction level controls

Table 9 TO-BE transaction level controls

9 Blockchain and Transaction Level Controls: "IT-Dependent Controls"

At the process level, the impact is large and the blockchain contributes to efficient and effective processing of intercompany settlement invoices. All manual and IT-dependent controls are overtaken by a total of nine application controls. The transactions are verified and validated against the smart contract in which the agreements between a "buyer" and "seller" are digitally recorded. This digital process optimization eliminates the involvement of accounting houses in the ICS process, and their controls are totally automated.

The increase in ELCs despite the disappearance of the accounting houses can be explained by IT consortium consultations. The purpose of these consortia is to keep control over the private blockchain network. The common configuration items and multiple participants make it necessary to inform each other about the technical status of their blockchain nodes. With regard to the number of ITGCs, there is no difference with or without a corporate blockchain (see Fig. 14). Only the IT-controls, at ITGC level, are increasing in numbers and are an addition to the ITGC (see Fig. 15). The ITGCs also require control over the entire network in a consortium on corporate ELC level, because the financial data is not only secured and stored in an "on premise" application, database, and server stack but can be operating on separate data centers, making the financial data decentralized. The elaboration per ITGC on IT-controls is shown in Fig. 15. Most ITGCs show an increase in IT-controls. Only Backup and Recovery and Computer Operations show a decrease or limited change in their IT-controls. The decrease in Backup and Recovery can be explained by the fact that the notary node and the ERP node are each other's backup. From the ledgers of the other nodes the notary node and ERP node can be replicated, or vice versa. The limited increase in Computer Operations has to do with the real-time transaction processing feature in the blockchain: job scheduling of external triggers becomes unnecessary. The explanation for the high increase in the number of IT-controls for Logical Access Security is the access to the blockchain nodes from various locations and the API connectors to the blockchain. Also, the security of blockchain-critical assets like key pairs and smart contracts requires new advanced IT security controls. Change management, system development, and life cycle management have a profound impact on the entire network, with additional IT-controls on forking and the source code of consensus mechanisms.

Predictive and prescriptive algorithms each have a different aim. As the name suggests, a predictive algorithm is designed to predict future outcomes based on past data. Predictive algorithms are used to answer the question 'What's going to happen next?'. Prescriptive algorithms go beyond this aim by not only calculating what is likely going to happen next, but in addition by making suggestions of what action should be taken. A prescriptive algorithm is used to answer the question 'What needs to be done?'. Algorithms can also be classified based on complexity and explainability.
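The difference between the two aims can be illustrated with a small sketch. The invoice data, threshold, and fixed prediction score are hypothetical stand-ins for what a trained model would produce.

```python
def predict_late_payment(invoice: dict) -> float:
    """Predictive: estimate the probability that the invoice will be paid late."""
    # A trained model would compute this score; a fixed value keeps the sketch simple.
    return 0.82

def prescribe_action(invoice: dict) -> str:
    """Prescriptive: turn the prediction into a suggested action."""
    probability = predict_late_payment(invoice)
    return "send payment reminder" if probability > 0.5 else "no action needed"

print(prescribe_action({"id": 42, "amount": 1500}))  # -> send payment reminder
```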
Fig. 2 Five perspectives of the framework

Algorithms bring about both opportunities and threats for governments. In this section, we present a framework to maximise the benefits algorithms have to offer while addressing potential risks. The framework was constructed by conducting an elaborate analysis of the extant literature and other frameworks, brainstorm sessions, and practical analysis. A more detailed description of the methodology followed to construct the audit framework for algorithms is included in the Appendix. Our audit framework contains five different perspectives for investigating algorithms, which are depicted in Fig. 2. It provides concrete answers to the questions of which risks are associated with algorithms and which aspects need to be assessed.

Table 1 Risks and controls related to governance and accountability

Given this approach, this may inherently mean that certain aspects do not apply to a specific algorithm (Table 2).

Table 2 (continued)

The audit framework presented in the prior section has been submitted to a practical usability test by assessing three algorithms as case studies. Another aim of the practical usability test was to improve the framework. The aim of the practical usability test was not to arrive at any individual judgements about the algorithms, but rather to aggregate the lessons learned from the analysis.

A person or company applies for a grant, a travel document, or a benefit. Does this application need checking or extra checking?

1 Governance and Accountability

Fig. 4 Assessment system for detecting non-standard objects

Appendix: Methodology of the Audit

The algorithms submitted are in part a reflection of the degree of expertise on algorithms that a given organisation possesses, as they differ in terms of their complexity and potential impact. We also found that central government does not have any uniform definition or standardised classification of algorithms, which resulted in differences of interpretation among the ministries when submitting their algorithms.

A simple example of such a training exercise is providing an ML algorithm several pictures of Chihuahuas and muffins (input), together with an indication of which picture is which (output). If the computer gets enough pictures, it learns to make connections between the different pictures and becomes able to tell whether there is a Chihuahua or a muffin in a picture. So, no person has told the algorithm what the rules are for recognizing a Chihuahua or a muffin. However, humans are required to tell once what the correct output should be, so that the algorithm can make the connections itself between the input and output. This technique has developed enormously in recent years.

The quality of the photos is checked before they are offered, as an entrance check. Some control aspects are whether the photos are not corrupted and conform to the correct projection as agreed in the GLO. If "errors" appear here, these are logged in the database, whereafter the application discards them. The photos are delivered in one set; this is also recorded in the GLO. Upon receiving the set of photos, a sample is taken from that set, and if there are no errors, the set of photos is approved. If photos are removed because they have not been approved, the set of photos (which contained the error) will not be accepted.
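A simplified sketch of such an entrance check is given below. It covers only the corruption check, using the Pillow imaging library; the projection check agreed in the GLO and the database logging are summarised in comments, and the file names are hypothetical.

```python
from PIL import Image, UnidentifiedImageError

def entrance_check(photo_paths):
    """Approve readable photos; reject corrupted ones (GLO projection check omitted)."""
    approved, rejected = [], []
    for path in photo_paths:
        try:
            with Image.open(path) as img:
                img.verify()          # raises an error if the file is corrupted
            approved.append(path)
        except (UnidentifiedImageError, OSError):
            rejected.append(path)     # in practice, logged to the database first
    return approved, rejected

approved, rejected = entrance_check(["photo_001.tif", "photo_002.tif"])
```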
In this situation, the GLO serves as a guideline that decides which photos do not meet the requirements and will not be accepted. The result of this check is provided as feedback to the external party. The aerial photos that are being used are placed in a database on the storage environment and sent to the Databricks environment. Data stemming from internal sources, such as customer data, data about the crop, the insured amount, the coordinates, etc., are included during this process using the Datafactory. The most recent and accurate model stored in Databricks is used to classify the photos. Access to the models is arranged via Identity Access Management (IAM).

Table 1 STRIDE: the different types of threats

Prior to the implementation, "Threat Modelling" was applied by the Security department to control the security threats as much as possible. The process to develop sufficient security controls involves identifying potential threats and developing tests or procedures to detect and respond to those threats. It is important to understand how threats can affect systems. A threat model was developed for this purpose, which is based on STRIDE (Kohnfelder & Garg, 1999) threat modelling. STRIDE is a threat model created by Microsoft engineers intended to guide the discovery of threats in a system. The STRIDE model is meant to assess several types of threats to the security of an application. Table 1 shows the different types of threats that can be used to mount a cyber security attack.

Table 2 Aspects related to AI-control. Specific use of the aspects is situation- and context-dependent. The maturity level of the organization with regard to the use of these aspects plays an important role in the implementation of the controls

Algorithms increasingly influence our decision-making and are replacing humans evermore for several tasks. An algorithm in the context of computers can be described as a set of instructions that serve to carry out a task. This ranges from systems with "simple" calculation rules based on data, used to make decisions or give advice, to more complex learning and/or predictive systems. For rule-based algorithms it is possible to determine how they have produced a certain outcome. However, the complexity of ML algorithms has proven to be far more difficult to unravel.

Algorithms are also deployed in all kinds of other contexts. Common risk factors that relate to the deployment of algorithms may, roughly speaking, be grouped into three dimensions. If the algorithm has a presence on all three dimensions, and on one of these dimensions can be considered high risk, it is likely to become a target for review or audit at some point for some reason. In Fig. 1, we show the three dimensions in the form of a cube. An easy way to convey risk profiles is scoring the application on each of the three dimensions and drawing a plane through the cube connecting the three selected points. At the axes we directly relate these risk dimensions to the five control objectives we use for our work: integrity, resilience, explainability, fairness, and accountability.

Table 1 Overview of SOC2 trust principles, EU working groups, and coherent audit research questions

From an algorithm audit perspective, there are reasons to argue that such trustworthy AI principles are a good basis to scope an algorithm audit. This is because these principles provide a specific perspective, a set of control objectives appropriate for AI assurance, for an auditor to focus on.
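Returning to the risk cube described above, the scoring could be sketched as follows. The dimension names and scores are placeholders rather than the chapter's own labels, and the threshold is arbitrary; the point is only to show how a simple profile can flag an algorithm as a likely audit target.

```python
# Hypothetical 0-5 scores on the three risk dimensions of the cube;
# the dimension names are placeholders, not the chapter's own labels.
profile = {"dimension_1": 4, "dimension_2": 2, "dimension_3": 5}

def likely_audit_target(profile: dict, high: int = 4) -> bool:
    """Present on all three dimensions, and high risk on at least one of them."""
    present_on_all = all(score > 0 for score in profile.values())
    high_on_one = any(score >= high for score in profile.values())
    return present_on_all and high_on_one

print(likely_audit_target(profile))  # -> True
```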
There is also reason to argue that the already existing trust services criteria are insufficient, because algorithm assurance should not only focus on the algorithm itself but also on the context in which it is being used. If you try to map the SOC2 trust services criteria to the AI principles of the EU working group, no exceptional creativity is required to successfully make them fit.

Fig. 3 Spheres of activity where risk and control play different roles

Fig. 4 How control objectives, risks, and likelihood and impact drivers relate to each other

Table 2 Overview of impact drivers and rationale thereof

Table 4 A matrix of audit approaches with coherent focus area and the difficulty and feasibility of the audit

Audit procedures can examine how the algorithm developers conceptualized the initial business problem into a formalized AI problem. Of course, the quality of the data and data preparation activities should also be in scope of these audit procedures. To test an algorithm's implementation, the same types of test procedures as in regular IT audits can be used as a starting point, although some types of procedures may be less applicable or feasible, depending on the characteristics of the algorithm. In the subsection on tools and techniques, we will go into more detail. Testing the model can provide a high level of comfort, depending on the detail of the test procedures.

Table 5 Confusion matrix for the running example

This measure is crude, but also one likely to be used by the media to support an accusation of unfairness. The algorithm does not use the gender of the applicant, but the public body does have access to data about the gender of the applicant and household composition from a third party. We can therefore set up confusion matrices for the single-father households vs. the rest to gain insight (see Table 5). Ideally, we would like to be able to fill in all four conditions, including the distinction between true negatives and false negatives, but for the negative predictions we don't have information about what the outcome of manual processing would have been.
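For the running example, precision, recall, and the F1 score can be computed per group from such confusion matrices. The counts below are hypothetical and, as noted, true negatives remain unknown; the per-group comparison is what matters for the fairness question.

```python
def precision_recall(tp: int, fp: int, fn: int):
    return tp / (tp + fp), tp / (tp + fn)

def f1(tp: int, fp: int, fn: int) -> float:
    p, r = precision_recall(tp, fp, fn)
    return 2 * p * r / (p + r)  # harmonic mean of precision and recall

# Hypothetical counts per group: (true positives, false positives, false negatives).
groups = {"single-father households": (12, 30, 4), "rest": (410, 220, 95)}
for name, (tp, fp, fn) in groups.items():
    p, r = precision_recall(tp, fp, fn)
    print(name, round(p, 2), round(r, 2), round(f1(tp, fp, fn), 2))
```

A markedly lower precision for one group than for the other would support the crude unfairness claim described above, even without knowledge of the true negatives.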
Going from IaaS to SaaS, companies will experience a number of benefits and disadvantages.

Many auditors are not yet familiar with the subject matter. Over the course of the past years, some auditors who were confronted with public cloud on a professional level have resorted to the Certified Cloud Security Professional certification from ISC2 to obtain the required knowledge to audit public cloud developments and systems. Only recently the Certificate of Cloud Auditing Knowledge (CCAK) was introduced by the Cloud Security Alliance (CSA), a global leader in cloud security research, training, and credentialing, and ISACA, a global leader in training, education, and certification for IS/IT professionals.

Table 1 The shared responsibility model according to Microsoft (2022c)

Table 2 The shared responsibility model according to Amazon Web Services (Amazon, 2021)

The CSP and the consumer never really share responsibility for a single aspect of operations. The parts of the application and infrastructure stack that a consumer can configure are solely managed by the consumer of the services, and the CSP does not dictate how the service consumer should secure his parts. Likewise, the user/consumer has no control over how the CSP secures their portions of the application and infrastructure stack. The user/consumer usually has the ability and right to access the CSP's certifications and related reports (e.g. SOC 1, SOC 2, SOC 3, FedRAMP, ISO) to verify that their systems are secure and that they are adhering to the agreed terms and conditions. CSPs publish these reports regularly and freely, and the most current reports are always accessible to their clients. Please note that not all CSPs offer one or more of these reports, as it can be costly to produce them or obtain these certifications. In our cloud audits, we have used the Microsoft Azure Shared Responsibility Model to make clear demarcations of the in-scope and out-of-scope elements in our audits.

Table 3 Overview of 17 domains of the Cloud Controls Matrix (CCM)

Table 4 Processes of ISACA's Cloud Computing Management Audit Program

Table 5 Areas of ISACA's Azure Audit Program

The organisation decided to explore public cloud service providers such as Amazon Web Services, Microsoft Azure, and Google. Consequently, in 2017 two proofs of concept were started to experiment with Microsoft Azure as well as with Amazon Web Services. Secondly, an alternative private cloud solution was explored and implemented. The proof of concept of the two public cloud platforms was so successful that a multi-platform strategy was finally adopted where both Microsoft Azure and Amazon Web Services had their place. The two private cloud solutions were maintained, next to the traditional mainframe environment.

The focus of our explanation will be on Infrastructure Managed Services (component 2 of the framework) and the Services and Workloads (component 3 of the framework). In our opinion, this has the most added value, given the fact that there currently is hardly any concrete guidance for IT auditors available on these two topics. We will supply both the contextual information and risk-control descriptions that will help IT auditors gain a better understanding of the subject matter and that will aid them in designing the audit programs they can use to audit public cloud implementations. As there are several publications and work programs that adequately cover the other components (Cloud Service Provider, Processes, Policies & Standards, and Governance), we will only explain what the specific attention points for these topics are in the context of public cloud. We will refer to relevant articles and audit programs when covering these topics.

Table 6 Assignment of Azure built-in roles. Adopted from Microsoft (2022d); the table was extended with the column 'Assign to'

Regarding the virtual machines (VMs) of Linux and Windows, an organisation can choose to implement/use CIS (a globally recognised standard for secure baselines) hardened images. An attention point is the fact that Microsoft changes the VMs more frequently than CIS changes the hardened image. In other words, there is a delay in CIS hardened images becoming available. In order to keep up the pace with Microsoft, product groups can better build the images themselves, making sure they have the latest copies and all required patches. The images can then be replaced at a higher rate in the product catalogue. Mandatory automated updates of deployed VMs should also be considered.

Non-customised services can be configured/deployed while the cloud controls/policies are determined at management group level (and then via inheritance will be enforced/in deny mode on lower levels).
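Conceptually, this inheritance can be sketched in a few lines. The scope names and policy identifiers are hypothetical, and the sketch models the idea only; it is not how Azure Policy is actually implemented.

```python
# Hypothetical scope tree: management group -> subscriptions -> resource groups.
parents = {
    "subscription-dev": "mgmt-group",
    "subscription-prod": "mgmt-group",
    "rg-team-a": "subscription-dev",
}
# Policies assigned at management group level apply, via inheritance,
# in deny mode to every scope below it.
assignments = {"mgmt-group": ["deny-public-ip", "allowed-regions-eu"]}

def effective_policies(scope: str) -> list:
    """Walk up the tree and collect every inherited policy assignment."""
    collected = list(assignments.get(scope, []))
    while scope in parents:
        scope = parents[scope]
        collected += assignments.get(scope, [])
    return collected

print(effective_policies("rg-team-a"))  # -> ['deny-public-ip', 'allowed-regions-eu']
```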
To enable enforcement of different policies in different environments, an organisation can choose to implement a separate management group for the development and test environments and another for the acceptance and production environments. When an organisation introduces secure landing zones at a later phase (i.e. not from the start), it should account for future changes regarding management groups, subscriptions, and resource groups. Teams that already have applications in the shared subscription or in their individual subscriptions need to migrate their applications to the secure landing zone environments. Migration can only be done after all applicable policies have been implemented. Therefore, a planning for application migration needs to be made. Migration of applications to secure landing zones is not done overnight, and organisations may want to avoid changes in the management group-subscription-resource group structure that require application migrations. To mitigate that risk, an IT auditor could review the design process and assess whether sufficient expertise was involved. Not only to speed up the subscription/resource group deployment process, but also to ensure that every DevOps team starts in the same position (with the same controls), the deployment process should be automated.

Azure datacentres are grouped into regions, such as North and West Europe. The datacentres are connected through a dedicated regional low-latency network. Azure Availability Zones are physically separate locations within each Azure Region that are tolerant to local failures. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions. This design per region is outlined in Fig. 6.

Table 8 Risks and controls for services and workloads

It is evident that when an application consists of several Azure services, it is not so easy to achieve RTO and RPO values on the application level. When availability and performance are not critical, these measures probably are sufficient. However, for applications that are critical, e.g. financial transaction processing, performance objectives and RPO = 0 will be difficult to meet and need careful consideration.

Fig. 1 BPM lifecycle. (Source: Fundamentals of BPM, 2nd ed., 2018, Dumas et al.)

Fig. 2 Example of a procedural process model for a purchasing process

Broadly speaking, there are two different approaches to capturing a process model. A process can be described procedurally or declaratively. A procedural approach means that all possible process executions are specified exactly in the model. An example of a procedural process model can be found in Fig. 2. Figure 2 is an example of a procedural process model for a purchasing process (in BPMN). This model specifies that there are only three different process paths for the execution of a purchase.

Table 1 Sample list of rules to describe a buying process

The procedural approach fits well with the modelling of highly structured processes. Several modelling languages exist within the procedural approach. Although, in the past, flowcharts and EPC models have found their way into business, there are numerous drawbacks associated with these types of models.
These drawbacks are mainly about the ambiguous model interpretation and the specific language dependence of software (Dumas et al., 2018). As a solution, a standard was developed for procedural process modelling: Business Process Modelling and Notation. The BPMN standard was created by the Object Management Group (OMG), which is an independent party that develops system-independent standards for computer systems. Process models drawn up according to this standard are easy to interpret. At minimum, a process consists of activities in rectangles, arrows, and additional semantics to indicate relations, like parallelism and choice relationships. For example, Fig. 2 contains a parallelism of activities D and E, indicated by a diamond with a plus, and a choice after activity A, shown as a diamond with an X.

The second way to describe a process is through a declarative approach. In a declarative process model, relationships between activities are determined by rules. An example of such a rule is as follows: "the activity register order always takes place before the activity approve order." The basic principle of declarative modelling is that such rules jointly determine which behaviour is allowed.

Table 2 Process mining terminology

Table 3 An excerpt from an event log of a sales process

The case ID in this excerpt is the sales order. The process started for case 1 on April 13, 2021 with the creation of an order by Jan and ended with the receipt of payment on May 20, 2021. The sequence of the four listed activities in this specific order reflects one process variant. In this excerpt, this variant is not repeated. However, it might emerge later in the event log that this variant is the most frequent variant of all the process executions. At a minimum, an event log contains information about the case IDs, activities, and related times (columns 1, 3, and 4). Based on these minimum requirements, process mining is able to represent the real flow of actions over time. In addition, an event log can contain additional information about events, such as the resource and value in this example. In a process mining context, these properties are called attributes. You can add as many attributes as desired to the event log. Note that the more attributes you add, the larger (broader) the event log becomes. It is therefore recommended to only include those attributes that add value to your process analysis. As a consequence, it is important, as a preparatory step, to unambiguously identify the business questions that the process analysis should answer.

Fig. 6 Two different levels of abstraction of process discovery output

Fig. 7 Integrating process mining into internal audit

Process mining can provide support for the planning phase and risk assessment work. Consistent with conducting the internal audit, the external auditor will address exposed nonconformities. Given the external auditor's focus on financial reporting, a different emphasis may be placed on the deviations to be examined. For example, repeated approvals of the same voucher will generate interest from an efficiency standpoint, but perhaps not from an audit standpoint. Despite a potentially different selection of deviations, the approach to clarify them is similar to what has already been described for the internal auditor. This will require a combination of a review of process executions against business rules, variant analysis, and case analysis. Rule testing is well suited as a control test. Indeed, each control mechanism can be formulated as a rule: "if ..., then ...." For example, "if a receipt is created, then it is approved later."
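Such a rule test can be sketched directly on an event log. The log excerpt below is hypothetical and uses the receipt rule from the example; a real test would run over the full event log extracted from the source system.

```python
from datetime import date

# Excerpt of a hypothetical event log: (case_id, activity, timestamp).
log = [
    (1, "create receipt",  date(2021, 4, 13)),
    (1, "approve receipt", date(2021, 4, 15)),
    (2, "create receipt",  date(2021, 4, 20)),  # never approved
]

def rule_violations(log):
    """Rule: if a receipt is created, then it is approved later."""
    cases = {}
    for case_id, activity, moment in log:
        cases.setdefault(case_id, []).append((activity, moment))
    violating = []
    for case_id, events in cases.items():
        created = [m for a, m in events if a == "create receipt"]
        approved = [m for a, m in events if a == "approve receipt"]
        if created and not any(m > min(created) for m in approved):
            violating.append(case_id)
    return violating

print(rule_violations(log))  # -> [2]
```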
Variant and case analysis are used to answer more targeted questions and lean closely towards substantive controls. Examples include reviewing transactions of a specific person, process executions in which manual activities have taken place, and activities outside of working hours. As with the internal audit, communication will take place, supported by the visual