
Wan Fokkink
Related Authors
Martin Fabian
Chalmers University of Technology
Ryszard A. Daniel
Rijkswaterstaat
Rong Su
Nanyang Technological University
Wim Koops
HZ University of Applied Sciences
Geylani Kardas
Ege University
Marcel Hertogh
Delft University of Technology
Papers by Wan Fokkink
These data are often preprocessed (e.g. smoothed, aggregated) to avoid
exposing sensitive data, while trying to preserve their reliability. We
present two procedures for tackling the lack of methods for measuring
the reliability of open data. The first procedure is based on a comparison between open and closed data, and the second derives reliability estimates
from the analysis of open data only. We evaluate these two procedures
on data from the data.police.uk website and from the Hampshire Police Constabulary in the UK. With the first procedure we show that the
reliability of the open data is high despite preprocessing, while with the second
we show that meaningful reliability estimates can be obtained
by analyzing open data alone.
collections make experts from outside the institution indispensable for
acquiring qualitative and comprehensive annotations. We define the concept of nichesourcing and present challenges in the process of obtaining qualitative annotations from people in these niches. We believe that
experts provide better annotations if this process is personalized. We
present a framework called Accurator that makes it possible to realize and
more or less detail, concern private citizens. For this reason, before publishing
them, public authorities manipulate the data to remove any sensitive
information while trying to preserve their reliability. This paper addresses
the lack of tools aimed at measuring the reliability of these data. We
present two procedures for assessing the reliability of Open Government Data,
one based on a comparison between open and closed data, and
the other based on the analysis of open data only. We evaluate the procedures
on data from the data.police.uk website and from the Hampshire Police
Constabulary in the United Kingdom. The procedures effectively allow
estimating the reliability of open data, and that reliability turns out to be
high even though the data are aggregated and smoothed.
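
Neither abstract spells out the comparison metric, so the following is only a minimal sketch of the first procedure's idea, assuming a Pearson correlation between matched open and closed counts; the function name and the sample numbers are illustrative, not taken from the paper.

from statistics import correlation  # available since Python 3.10

def reliability_estimate(open_counts, closed_counts):
    # Hypothetical reliability score: how closely the published
    # (aggregated, smoothed) open counts track the authoritative
    # closed counts, measured as Pearson's r.
    assert len(open_counts) == len(closed_counts)
    return correlation(open_counts, closed_counts)

# Monthly crime counts for one area (made-up numbers):
open_data = [102, 98, 110, 95, 103]     # published open data
closed_data = [104, 97, 113, 92, 101]   # internal closed data
print(reliability_estimate(open_data, closed_data))  # close to 1.0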
of the automaton. We present an algorithm for detecting such useless transitions. A
finite automaton that captures the possible stack contents during runs of the pushdown
automaton is first constructed in a forward procedure to determine which transitions
are reachable, and then employed in a backward procedure to determine which of these
transitions can lead to a final state. An implementation of the algorithm is shown to
exhibit favorable performance.
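
The abstract describes the forward/backward structure precisely enough to sketch it. The sketch below applies the two passes to a plain finite automaton; the paper's algorithm additionally constructs the finite automaton capturing stack contents, which is omitted here, and all names are illustrative.

from collections import deque

def useless_transitions(transitions, initial, finals):
    # transitions: list of (source, label, target) triples.
    # Forward pass: states reachable from the initial state.
    succ = {}
    for s, _, t in transitions:
        succ.setdefault(s, []).append(t)
    reachable, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        for t in succ.get(s, []):
            if t not in reachable:
                reachable.add(t)
                queue.append(t)
    # Backward pass: states from which some final state is reachable.
    pred = {}
    for s, _, t in transitions:
        pred.setdefault(t, []).append(s)
    productive, queue = set(finals), deque(finals)
    while queue:
        t = queue.popleft()
        for s in pred.get(t, []):
            if s not in productive:
                productive.add(s)
                queue.append(s)
    # A transition is useless unless its source is reachable and its
    # target can still lead to acceptance.
    return [(s, a, t) for s, a, t in transitions
            if s not in reachable or t not in productive]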
theory and implementation of multi-valued model checking for Promela specifications. We believe our tool Bonsai is the first four-valued model checker capable of multi-valued verification of parallel models, i.e. models consisting of multiple concurrent processes. A novel aspect is the ability to abstract a model only partially, keeping other parts of it concrete.
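
The abstract does not name the four-valued lattice; a common choice in multi-valued model checking is Belnap's logic, with values true, false, unknown and inconsistent. The following is a rough, assumed illustration of how such values compose, not Bonsai's actual implementation.

# Belnap's four truth values, encoded as (can be true, can be false).
TRUE, FALSE = (True, False), (False, True)
UNKNOWN, BOTH = (False, False), (True, True)

def v_and(x, y):
    # Conjunction can be true only if both sides can be true, and can
    # be false if either side can be false.
    return (x[0] and y[0], x[1] or y[1])

def v_or(x, y):
    return (x[0] or y[0], x[1] and y[1])

def v_not(x):
    # Negation swaps the two components.
    return (x[1], x[0])

assert v_and(UNKNOWN, FALSE) == FALSE
assert v_or(UNKNOWN, TRUE) == TRUE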
tags created within the Waisda? video tagging game from the Netherlands Institute for Sound and Vision. Through a quantitative analysis of the results, we demonstrate that using provenance and demographic information is beneficial for the accuracy of trust assessments.
annotation properties are relevant to the annotation quality. In addition, we propose a trust model that builds an annotator reputation using subjective logic, and we assess the influence of both annotator and annotation properties on the reputation. We applied our models to the Steve.museum dataset and found that a subset of annotation properties can identify useful annotations with a precision of 90%. The annotator properties we studied, however, were less predictive.
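
The concrete model is not given in this abstract; as a minimal sketch of the standard subjective-logic construction it presumably builds on, a binomial opinion can be derived from counts of accepted and rejected annotations (W=2 and base rate a=0.5 are conventional defaults, and the counts below are invented).

def reputation(r, s, W=2.0, a=0.5):
    # Binomial subjective-logic opinion from r positive and s negative
    # observations: belief, disbelief and uncertainty sum to 1.
    total = r + s + W
    belief = r / total
    uncertainty = W / total
    # Expected probability: belief plus the base rate's share of the
    # uncertainty mass.
    return belief + a * uncertainty

print(reputation(18, 2))  # ~0.86 for 18 accepted, 2 rejected annotations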
This book offers students and researchers a guide to distributed algorithms that emphasizes examples and exercises rather than the intricacies of mathematical models. It avoids mathematical argumentation, often a stumbling block for students, teaching algorithmic thought rather than proofs and logic. This approach allows the student to learn a large number of algorithms within a relatively short span of time. Algorithms are explained through brief, informal descriptions, illuminating examples, and practical exercises. The examples and exercises allow readers to understand algorithms intuitively and from different perspectives. Proof sketches, arguing the correctness of an algorithm or explaining the idea behind fundamental results, are also included. An appendix offers pseudocode descriptions of many algorithms.
Distributed algorithms are performed by a collection of computers that send messages to each other or by multiple software threads that use the same shared memory. The algorithms presented in the book are for the most part classics, selected because they shed light on the algorithmic design of distributed systems or on key issues in distributed computing and concurrent programming.
Distributed Algorithms can be used in courses for upper-level undergraduates or graduate students in computer science, or as a reference for researchers in the field.
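
As a taste of the message-passing style the book treats (this particular program is a sketch and not taken from the book), the classic Chang-Roberts leader election on a unidirectional ring can be simulated with queues as channels:

from collections import deque

def chang_roberts(ids):
    # ids: distinct process identifiers in ring order; every process
    # initiates by sending its own id to its clockwise neighbour.
    n = len(ids)
    channels = [deque([ids[i]]) for i in range(n)]  # i -> (i+1) % n
    while True:
        for i in range(n):
            if channels[i]:
                j = channels[i].popleft()
                k = (i + 1) % n            # receiving process
                if j == ids[k]:
                    return j               # own id came back: elected
                if j > ids[k]:
                    channels[k].append(j)  # pass the larger id along
                # smaller ids are purged, not forwarded

print(chang_roberts([3, 7, 2, 9, 4]))  # 9, the largest id, wins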