Papers by Han L.J. van der Maas

Intelligence, Jul 1, 2021
In this paper, we revisit this question. We start with a short overview of modern AI and showcase some of the AI breakthroughs in the four decades since Schank's paper. We follow with a description of the main techniques these AI breakthroughs were based upon, such as deep learning and reinforcement learning, two techniques that have deep roots in psychology. Next, we discuss how psychologically plausible AI is and could become, given the modern breakthroughs in AI's ability to learn. We then address the main question of how intelligent AI systems actually are. For example, are there AI systems that can solve human intelligence tests? We conclude that Schank's observation, that intelligence is all about generalization and that AI is not particularly good at this, has so far withstood the test of time. Finally, we consider what AI insights could mean for the study of individual differences in intelligence. We close with how AI can further intelligence research and vice versa, and look forward to fruitful interactions in the future.

Journal of Experimental Child Psychology, Mar 1, 1997
In a longitudinal study, five indicators of a transition in the development of analogical reasoning were examined in young elementary school children: (a) bimodality and (b) inaccessibility in the frequency distributions of test performance, and, in the responses of the transitional subjects, (c) sudden jumps, (d) anomalous variance, and (e) critical slowing down. An open-ended geometric-analogies test was administered eight times during a period of six months to eighty children in Grades 1 and 2 (six- to eight-year-olds). Strong evidence for bimodality was found in the distribution of the test scores, and weaker evidence for inaccessibility. In the performance curves of the transitional subjects, sudden jumps were demonstrated. Furthermore, the transitional subjects displayed a temporary increase in inconsistent solution behavior and solution time near the sudden jump. The characteristic changes in the analogy performance of the transitional subjects were interpreted as a strategy shift.

Behavioral and Brain Sciences, Jun 1, 2010
The pivotal problem of comorbidity research lies in the psychometric foundation it rests on, that is, latent variable theory, in which a mental disorder is viewed as a latent variable that causes a constellation of symptoms. From this perspective, comorbidity is a (bi)directional relationship between multiple latent variables. We argue that such a latent variable perspective encounters serious problems in the study of comorbidity, and offer a radically different conceptualization in terms of a network approach, where comorbidity is hypothesized to arise from direct relations between symptoms of multiple disorders. We propose a method to visualize comorbidity networks and, based on an empirical network for major depression and generalized anxiety, we argue that this approach generates realistic hypotheses about pathways to comorbidity, overlapping symptoms, and diagnostic boundaries that are not naturally accommodated by latent variable models: some pathways to comorbidity through the symptom space are more likely than others; those pathways generally have the same direction (i.e., from symptoms of one disorder to symptoms of the other); overlapping symptoms play an important role in comorbidity; and boundaries between diagnostic categories are necessarily fuzzy.
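To make the visualization idea concrete, here is a rough sketch of a comorbidity-style symptom network with one overlapping (bridge) symptom; the symptom names and edges are hypothetical and are not the empirical depression-anxiety network analysed in the paper.

```python
# Hypothetical symptom network in which "fatigue" bridges a depression cluster
# (sad mood, anhedonia, fatigue) and an anxiety cluster (worry, restlessness,
# irritability). Symptom names and edges are illustrative only.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
depression = {"sad mood", "anhedonia", "fatigue"}
G.add_edges_from([
    ("sad mood", "anhedonia"), ("anhedonia", "fatigue"), ("sad mood", "fatigue"),
    ("worry", "restlessness"), ("restlessness", "irritability"), ("worry", "irritability"),
    ("fatigue", "restlessness"),  # overlapping symptom creates a pathway to comorbidity
])

colors = ["tab:blue" if node in depression else "tab:orange" for node in G.nodes()]
nx.draw_networkx(G, node_color=colors, with_labels=True)
plt.axis("off")
plt.show()
```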

Learning and Individual Differences, Jun 1, 2010
This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores as measures of general intelligence. Convergent validity of the Raven's tests is found to be relatively poor, although reliability and predictive validity are comparable to western samples. Factor analyses indicate that the Raven's tests are relatively weak indicators of general intelligence among Africans and often measure additional factors besides general intelligence. The degree to which Raven's scores of Africans reflect levels of general intelligence is unknown. Average IQ of Africans is approximately 80 when compared to US norms. Raven's scores among African adults have shown secular increases over the years. It is concluded that the Flynn Effect has yet to take hold in sub-Saharan Africa.

Intelligence, 2010
Lynn [Lynn, R. (2006). Race differences in intelligence: An evolutionary analysis. Washington Summit Publishers.] concluded that the average IQ of the Black population of sub-Saharan Africa lies below 70. In this paper, the authors systematically review published empirical data on the performance of Africans on the following IQ tests: the Draw-A-Man (DAM) test, the Kaufman Assessment Battery for Children (K-ABC), the Wechsler scales (WAIS & WISC), and several other IQ tests (but not the Raven's tests). Inclusion and exclusion criteria are explicitly discussed. Results show that the average IQ of Africans on these tests is approximately 82 when compared to UK norms. We provide estimates of the average IQ per country and estimates on the basis of alternative inclusion criteria. Our estimate of average IQ converges with the finding that national IQs of sub-Saharan African countries, as predicted from several international studies of student achievement, are around 82. It is suggested that this estimate should be considered in light of the Flynn Effect. It is concluded that more psychometric studies are needed to address the issue of measurement bias of western IQ tests for Africans.

arXiv (Cornell University), Sep 1, 2021
Networks (graphs) in psychology are often restricted to settings without interventions. Here we consider a framework borrowed from biology that involves multiple interventions from different contexts (observations and experiments) in a single analysis. The method is called perturbation graphs. In gene regulatory networks, the induced change in one gene is measured on all other genes in the analysis, thereby assessing possible causal relations; this is repeated for each gene in the analysis. A perturbation graph leads to the correct set of causes (not necessarily direct causes). Subsequent pruning of paths in the graph (called transitive reduction) should reveal direct causes. We show that transitive reduction will not in general lead to the correct underlying graph. There is, however, a close relation with another method, called invariant causal prediction. Invariant causal prediction can be considered a generalisation of the perturbation graph method in which including additional variables (and so conditioning on those variables) does reveal direct causes, thereby replacing transitive reduction. We explain the basic ideas of perturbation graphs, transitive reduction, and invariant causal prediction and investigate their connections. We conclude that perturbation graphs provide a promising new tool for experimental designs in psychology and, combined with invariant causal prediction, make it possible to reveal direct causes instead of causal paths. As an illustration, we apply perturbation graphs and invariant causal prediction to a data set about attitudes on meat consumption.
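As a toy illustration of the pruning step described above (not the paper's analysis; the three-variable system, its edges, and the variable names are hypothetical), the sketch below builds a small perturbation graph in which perturbing X changes both Y and Z and perturbing Y changes Z, and then applies transitive reduction to keep only the candidate direct causes.

```python
# A minimal sketch, assuming a hypothetical chain X -> Y -> Z. Perturbing X
# changes Y and Z, and perturbing Y changes Z, so the perturbation graph also
# contains the indirect edge X -> Z.
import networkx as nx

perturbation_graph = nx.DiGraph([("X", "Y"), ("X", "Z"), ("Y", "Z")])

# Transitive reduction removes edges that are implied by longer directed paths.
reduced = nx.transitive_reduction(perturbation_graph)

print(sorted(perturbation_graph.edges()))  # [('X', 'Y'), ('X', 'Z'), ('Y', 'Z')]
print(sorted(reduced.edges()))             # [('X', 'Y'), ('Y', 'Z')]
```

As the abstract notes, this pruning recovers the true graph only under additional assumptions; invariant causal prediction replaces the reduction step by conditioning on additional variables.
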
arXiv (Cornell University), Apr 18, 2019
Psych, Oct 8, 2021
Intelligence, Jul 1, 2016
Cognitive Development, 1995

Social Psychological and Personality Science, Aug 6, 2018
Attitude strength is a key characteristic of attitudes. Strong attitudes are durable and impactful, while weak attitudes are fluctuating and inconsequential. Recently, the causal attitude network (CAN) model was proposed as a comprehensive measurement model of attitudes, which conceptualizes attitudes as networks of causally connected evaluative reactions (i.e., beliefs, feelings, and behavior toward an attitude object). Here, we test the central postulate of the CAN model that highly connected attitude networks correspond to strong attitudes. We use data from the American National Election Studies 1980-2012 on attitudes toward presidential candidates (N = 18,795). We first show that political interest predicts connectivity of attitude networks toward presidential candidates. Second, we show that connectivity is strongly related to two defining features of strong attitudes: stability of the attitude and the attitude's impact on behavior. We conclude that network theory provides a promising framework to advance the understanding of attitude strength.
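As a rough illustration of what "connectivity" of an attitude network can mean, the sketch below simulates evaluative reactions for a weakly and a strongly coupled group and computes the mean absolute edge weight of a correlation-based network; the simulated variables and this simple index are illustrative assumptions, not the ANES measures or the connectivity estimator used in the paper.

```python
# Illustrative sketch: average absolute edge weight of a correlation-based
# attitude network as a crude connectivity index. The data are simulated;
# this is not the ANES analysis.
import numpy as np

rng = np.random.default_rng(0)

def simulate_reactions(n, coupling):
    """Simulate 5 evaluative reactions that share a common component."""
    common = rng.normal(size=(n, 1))
    noise = rng.normal(size=(n, 5))
    return coupling * common + noise

def connectivity(data):
    """Mean absolute off-diagonal correlation among evaluative reactions."""
    corr = np.corrcoef(data, rowvar=False)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return np.abs(off_diag).mean()

weak = simulate_reactions(1000, coupling=0.3)    # e.g., low political interest
strong = simulate_reactions(1000, coupling=1.5)  # e.g., high political interest
print(f"weakly coupled network:   {connectivity(weak):.2f}")
print(f"strongly coupled network: {connectivity(strong):.2f}")
```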

Perspectives on Psychological Science, Oct 24, 2019
The positive manifold of intelligence has fascinated generations of scholars in human ability. In the past century, various formal explanations have been proposed, including the dominant g factor, the revived sampling theory, and the recent multiplier effect model and mutualism model. In this article, we propose a novel idiographic explanation. We formally conceptualize intelligence as evolving networks in which new facts and procedures are wired together during development. The static model, an extension of the Fortuin-Kasteleyn model, provides a parsimonious explanation of the positive manifold and intelligence's hierarchical factor structure. We show how it can explain the Matthew effect across developmental stages. Finally, we introduce a method for studying growth dynamics. Our truly idiographic approach offers a new view on a century-old construct and ultimately allows the fields of human ability and human learning to coalesce.

BMC Psychiatry, Sep 18, 2015
Background: A defining characteristic of Major Depressive Disorder (MDD) is its episodic course, which might indicate that MDD is a nonlinear dynamic phenomenon with two discrete states. We investigated this hypothesis using the symptom time series of individual patients. Methods: In 178 primary care patients with MDD, the presence of the nine DSM-IV symptoms of depression was recorded weekly for two years. For each patient, the time-series plots as well as the frequency distributions of the symptoms over 104 weeks were inspected. Furthermore, two indicators of bimodality were obtained: the bimodality coefficient (BC) and the fit of a 1- and a 2-state Hidden Markov Model (HMM). Results: In 66% of the sample, high bimodality coefficients (BC > .55) were found. These corresponded to relatively sudden jumps in the symptom curves and to highly skewed or bimodal frequency distributions. The results of the HMM analyses classified 90% of the symptom distributions as bimodal.
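For reference, the bimodality coefficient mentioned above is typically computed from the sample skewness and excess kurtosis; a minimal sketch with simulated weekly symptom counts (the data below are hypothetical, not the primary care sample):

```python
# A minimal sketch of the bimodality coefficient (BC), one of the two
# bimodality indicators mentioned above. The weekly symptom data are
# simulated here; the formula is the standard sample-based BC.
import numpy as np
from scipy.stats import skew, kurtosis

def bimodality_coefficient(x):
    """BC = (skewness^2 + 1) / (excess kurtosis + 3(n-1)^2 / ((n-2)(n-3)))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    g1 = skew(x, bias=False)                   # sample skewness
    g2 = kurtosis(x, fisher=True, bias=False)  # sample excess kurtosis
    return (g1**2 + 1) / (g2 + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

rng = np.random.default_rng(1)
# Hypothetical 104-week symptom counts: few symptoms most weeks vs. episodes.
unimodal = rng.poisson(2, size=104)
episodic = np.concatenate([rng.poisson(1, 70), rng.poisson(8, 34)])
print(bimodality_coefficient(unimodal) > .55)  # typically False
print(bimodality_coefficient(episodic) > .55)  # typically True
```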

Catastrophe theory can be very useful in research concerning developmental transitions. It provides a number of canonical transition models, as well as criteria that can be used for the detection of transitions in experimental research. In this article, the relevance of using catastrophe theory in cognitive developmental research is discussed, in particular the detection criteria and their operationalization in research on the development of conservation, as well as a number of techniques for fitting canonical transition models, with attention to the applicability and robustness of these methods.
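The cusp is the canonical transition model most often used in this literature; a minimal sketch of its equilibrium structure (a generic cusp, not a model fitted to conservation data) illustrates why bimodality and sudden jumps arise.

```python
# Minimal sketch of the cusp catastrophe: equilibria of the potential
# V(x) = x^4/4 - b*x^2/2 - a*x, i.e. real roots of x^3 - b*x - a = 0.
# Generic illustration, not a model fitted to developmental data.
import numpy as np

def cusp_equilibria(a, b):
    """Real equilibrium points for normal factor a and splitting factor b."""
    roots = np.roots([1.0, 0.0, -b, -a])
    return np.sort(roots[np.abs(roots.imag) < 1e-8].real)

# With a non-positive splitting factor there is a single equilibrium for every
# value of a; with a large positive splitting factor an interval of a values
# has multiple equilibria (bistability), producing sudden jumps and hysteresis.
for b in (-1.0, 3.0):
    n_equilibria = [len(cusp_equilibria(a, b)) for a in np.linspace(-2, 2, 41)]
    print(f"b = {b}: max number of equilibria = {max(n_equilibria)}")
```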

Journal of Intelligence, Mar 3, 2020
One of the highest ambitions in educational technology is the move towards personalized learning. To this end, computerized adaptive learning (CAL) systems are developed. A popular method to track the development of student ability and item difficulty in CAL systems is the Elo Rating System (ERS). The ERS allows for dynamic model parameters by updating key parameters after every response. However, drawbacks of the ERS are that it does not provide standard errors and that it results in rating variance inflation. We identify three statistical issues responsible for both of these drawbacks. To solve these issues, we introduce a new tracking system based on urns, where every person and item is represented by an urn filled with a combination of green and red marbles. Urns are updated by an exchange of marbles after each response, such that the proportions of green marbles represent estimates of person ability or item difficulty. A main advantage of this approach is that the standard errors are known; hence the method allows for statistical inference, such as testing for learning effects. We highlight features of the Urnings algorithm and compare it to the popular ERS in a simulation study and in an empirical data example from a large-scale CAL application.
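For context, the ERS update that the urn-based tracker is compared against looks roughly like the following; this is a generic CAL-style Elo update with a hypothetical step size K, not the Urnings update itself (which exchanges marbles between person and item urns).

```python
# A generic Elo Rating System (ERS) update as commonly used in computerized
# adaptive learning: ability and difficulty move toward the observed result.
# K is a hypothetical step size; the ERS provides no standard errors, which
# is the drawback the urn-based tracker is designed to address.
import math

def elo_update(theta, beta, correct, K=0.4):
    """Update person ability (theta) and item difficulty (beta) after one response."""
    expected = 1.0 / (1.0 + math.exp(-(theta - beta)))  # Rasch-type expected accuracy
    theta_new = theta + K * (correct - expected)
    beta_new = beta - K * (correct - expected)
    return theta_new, beta_new

theta, beta = 0.0, 0.0
for correct in (1, 1, 0, 1):  # a short hypothetical response stream
    theta, beta = elo_update(theta, beta, correct)
print(round(theta, 3), round(beta, 3))
```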

Psychometrika, Jul 13, 2021
The emergence of computer-based assessments has made response times, in addition to response accuracies, available as a source of information about test takers' latent abilities. The development of substantively meaningful accounts of the cognitive process underlying item responses is critical to establishing the validity of psychometric tests. However, existing substantive theories such as the diffusion model have been slow to gain traction due to their unwieldy functional form and regular violations of model assumptions in psychometric contexts. In the present work, we develop an attention-based diffusion model based on process assumptions that are appropriate for psychometric applications. This model is straightforward to analyse using Gibbs sampling and can be readily extended. We demonstrate our model's good computational and statistical properties in a comparison with two well-established psychometric models.
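To make the modelling setting concrete, here is a generic drift-diffusion simulation (a simple Euler discretization with two absorbing bounds); it is not the attention-based model developed in the paper, and all parameter values are arbitrary.

```python
# Generic drift-diffusion sketch: evidence accumulates with drift v and noise s
# until it hits an upper (correct) or lower (error) boundary, yielding a joint
# response-time/accuracy outcome. Parameters are illustrative only.
import numpy as np

def simulate_trial(v=1.0, a=1.5, z=0.75, s=1.0, dt=0.001, rng=None):
    """Return (response_time, correct) for one simulated trial."""
    rng = rng or np.random.default_rng()
    x, t = z, 0.0  # starting point of evidence and elapsed time
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return t, x >= a

rng = np.random.default_rng(2)
trials = [simulate_trial(rng=rng) for _ in range(500)]
rts, accs = zip(*trials)
print(f"mean RT = {np.mean(rts):.3f} s, accuracy = {np.mean(accs):.2f}")
```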

Behavior Research Methods, May 1, 2005
Present optimization techniques in latent class analysis apply the expectation maximization algorithm or the Newton-Raphson algorithm for optimizing the parameter values of a prespecified model. These techniques can be used to find maximum likelihood estimates of the parameters, given the specified structure of the model, which is defined by the number of classes and, possibly, fixation and equality constraints. The model structure is usually chosen on theoretical grounds. A large variety of structurally different latent class models can be compared using goodness-of-fit indices of the chi-square family, Akaike's information criterion, the Bayesian information criterion, and various other statistics. However, finding the optimal structure for a given goodness-of-fit index often requires a lengthy search in which all kinds of model structures are tested. Moreover, solutions may depend on the choice of initial values for the parameters. This article presents a new method by which one can simultaneously infer the model structure from the data and optimize the parameter values. The method consists of a genetic algorithm in which any goodness-of-fit index can be used as a fitness criterion. In a number of test cases using data sets from the literature, it is shown that this method provides models that fit as well as or better than the models suggested in the original articles.
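The search strategy can be sketched as a standard genetic algorithm over coded model structures. In the toy version below the "structure" is a bit string and the fitness function is a deliberately simple stand-in for a goodness-of-fit index; it is not an actual latent class likelihood.

```python
# A minimal genetic-algorithm skeleton of the kind described above: candidate
# model structures are encoded as bit strings, and the fitness function plays
# the role of a goodness-of-fit index such as AIC or BIC. The fitness used
# here is a toy stand-in, not a real latent class model fit.
import random

random.seed(0)
N_BITS, POP_SIZE, GENERATIONS, MUTATION_RATE = 12, 30, 50, 0.05
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # pretend "best-fitting" structure

def fitness(structure):
    # Stand-in for a fit index: reward agreement with a hypothetical optimum.
    return sum(int(b == t) for b, t in zip(structure, TARGET))

def mutate(structure):
    return [1 - b if random.random() < MUTATION_RATE else b for b in structure]

def crossover(parent_a, parent_b):
    cut = random.randrange(1, N_BITS)
    return parent_a[:cut] + parent_b[cut:]

population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))
```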

Scientific Reports, Oct 1, 2020
People's choices are often found to be inconsistent with the assumptions of rational choice theory. Over time, several probabilistic models have been proposed that account for such deviations from rationality. However, these models have become increasingly complex and are often limited to particular choice phenomena. Here we introduce a network approach that explains a broad set of choice phenomena. We demonstrate that this approach can be used to compare different choice theories and integrates several choice mechanisms from established models. A basic setup implements bounded rationality, loss aversion, and inhibition in a natural fashion, which allows us to predict the occurrence of well-known choice phenomena, such as the endowment effect and the similarity, attraction, compromise, and phantom context effects. Our results show that this network approach provides a simple representation of complex choice behaviour, and can be used to gain a better understanding of how the many choice phenomena and key theoretical principles from different types of decision-making are connected.

The response behaviour of humans on (discrete) choice problems has been extensively studied in many fields of science, such as economics [1-4], psychology [5-8], psychometrics [9,10], cognitive science [11-14], neuroscience [15,16], and engineering [17,18]. Traditional theories of choice treat the decision-maker as a homo economicus [19,20], i.e., as rational [1,5,21]. For choices to be rational, all choice alternatives must be comparable and have transitive preference relations, so that they can be ordered by the decision-maker. A second feature, and a central principle of rational choice theory, is that a rational decision-maker consistently chooses the outcome that maximises utility, or expected utility for risky or uncertain choices [5,22-24]. These assumptions clearly fail the scrutiny of everyday experience. To account for the observed inconsistencies, most models nowadays characterise choice as a probabilistic process [6,9,21,24-29]. A prominent group of probabilistic choice models, such as Luce's strict utility model [6,24] and the multinomial logit model [21] for preference, and Bock's nominal categories model [30] for aptitude, are characterised by the following distribution for the choices:

p_S(x) = exp(π_x) / Σ_{y ∈ S} exp(π_y),    (1)

in which p_S(x) ∈ [0, 1] represents the probability of choosing alternative x from the set of possible alternatives S as a function of the utility of alternative x, exp(π_x), where π_x ∈ R. This distribution is also known as the Boltzmann distribution [31,32] from statistical mechanics. For binary choice problems (S = {x, y}), Eq. (1) takes a form known as the Bradley-Terry-Luce model in the decision-making literature [33,34], or as the Rasch model [9] in psychometrics.
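Equation (1) is simply a softmax (Boltzmann) distribution over utilities; a minimal sketch with hypothetical utility values:

```python
# Minimal sketch of the choice distribution in Eq. (1): a softmax (Boltzmann
# distribution) over the utilities pi_x of the alternatives in the choice set.
# Utility values are hypothetical.
import math

def choice_probabilities(utilities):
    """Map {alternative: pi_x} to {alternative: p_S(x)} via Eq. (1)."""
    weights = {x: math.exp(pi) for x, pi in utilities.items()}
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

print(choice_probabilities({"x": 1.0, "y": 0.0, "z": -0.5}))
# For a binary set S = {x, y} this reduces to the Bradley-Terry-Luce / Rasch form:
print(choice_probabilities({"x": 1.0, "y": 0.0}))
```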

Educational Psychology Review, May 5, 2021
The question of how learners' motivation influences their academic achievement and vice versa has been the subject of intensive research due to its theoretical relevance and important implications for the field of education. Here, we present our understanding of how influential theories of academic motivation have conceptualized reciprocal interactions between motivation and achievement and the kinds of evidence that support this reciprocity. While the reciprocal nature of the relationship between motivation and academic achievement has been established in the literature, further insights into several features of this relationship are still lacking. We therefore present a research agenda where we identify theoretical and methodological challenges that could inspire further understanding of the reciprocal relationship between motivation and achievement as well as inform future interventions. Specifically, the research agenda includes the recommendation that future research considers (1) multiple motivation constructs, (2) behavioral mediators, (3) a network approach, (4) alignment of intervals of measurement and the short vs. long time scales of motivation constructs, (5) designs that meet the criteria for making causal, reciprocal inferences, (6) appropriate statistical models, (7) alternatives to self-reports, (8) different ways of measuring achievement, and (9) generalizability of the reciprocal relations to various developmental, ethnic, and sociocultural groups.