{"title":"Jordan William Suchow","subtitle":"Cognitive science + information systems at Stevens Institute of Technology, studying human, machine, and collective intelligence","generator":"Jekyll","link":[{"@attributes":{"rel":"self","type":"application\/atom+xml","href":"https:\/\/suchow.io\/feed.xml"}},{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io"}}],"updated":"2026-04-02T10:07:44-04:00","id":"https:\/\/suchow.io\/","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io\/","email":"jws@stevens.edu"},"entry":[{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/reproducibility\/"}},"id":"https:\/\/suchow.io\/reproducibility","published":"2026-01-02T00:00:00-05:00","updated":"2026-01-02T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>This study investigates the reproducibility of the social and behavioural sciences through a large-scale systematic replication effort.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/reproducibility\/\">Investigating the reproducibility of the social and behavioural sciences<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on January 02, 2026.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/analytic-robustness\/"}},"id":"https:\/\/suchow.io\/analytic-robustness","published":"2026-01-01T00:00:00-05:00","updated":"2026-01-01T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>This study investigates the analytic robustness of the social and behavioural sciences through a large-scale collaboration examining how different analytical approaches to the same research questions yield different results.<\/p>\n\n  <p><a 
href=\"https:\/\/suchow.io\/analytic-robustness\/\">Investigating the analytic robustness of the social and behavioural sciences<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on January 01, 2026.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/semantic-priming\/"}},"id":"https:\/\/suchow.io\/semantic-priming","published":"2025-11-17T00:00:00-05:00","updated":"2025-11-17T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>Semantic priming has been studied for nearly 50 years across various experimental manipulations and theoretical frameworks. Although previous studies provide insight into the cognitive underpinnings of semantic representations, they have suffered from small sample sizes and a lack of linguistic and cultural diversity. In this Registered Report, we measured the size and the variability of the semantic priming effect across 19 languages (n = 25,163 participants analysed) by creating the largest available database of semantic priming values using an adaptive sampling procedure. We found evidence for semantic priming in terms of differences in response latencies between related word-pair conditions and unrelated word-pair conditions. Model comparisons showed that the inclusion of a random intercept for language improved model fit, providing support for variability in semantic priming across languages. This study highlights the robustness and variability of semantic priming across languages and provides a rich, linguistically diverse dataset for further analysis.<\/p>\n\n<p>This massive collaborative effort represents the largest cross-linguistic study of semantic priming to date, spanning 19 different languages and involving over 25,000 participants. 
The findings demonstrate both the universal nature of semantic priming effects and meaningful cross-linguistic variation, advancing our understanding of how semantic memory operates across diverse linguistic and cultural contexts.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/semantic-priming\/\">Measuring the semantic priming effect across many languages<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on November 17, 2025.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/truth-neurons\/"}},"id":"https:\/\/suchow.io\/truth-neurons","published":"2025-07-01T00:00:00-04:00","updated":"2025-07-01T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>An investigation into truth neurons in large language models, examining the internal mechanisms by which neural networks represent and process truthfulness.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/truth-neurons\/\">Truth neurons<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on July 01, 2025.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/reliability-replications\/"}},"id":"https:\/\/suchow.io\/reliability-replications","published":"2025-03-19T00:00:00-04:00","updated":"2025-03-19T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>This study investigates researcher variability in computational reproduction, an activity for which it is least expected. Eighty-five independent teams attempted numerical replication of results from an original study of policy preferences and immigration. 
Reproduction teams were randomly grouped into a \u2018transparent group\u2019 receiving the original study and code, or an \u2018opaque group\u2019 receiving only a description of the methods and results and no code. The transparent group mostly verified the original results (95.7% matched in sign and p-value cutoff), while the opaque group had less success (89.3%). Exact numerical reproductions to the second decimal place were less common (76.9% and 48.1%, respectively). Qualitative investigation of the workflows revealed many causes of error, including mistakes and procedural variations. Even after curating out mistakes, we still find that only the transparent group was reliably successful. Our findings imply a need for transparency, but also for more: institutional checks and lower subjective difficulty for researchers \u2018doing reproduction\u2019 would help, implying a need for better training. We also urge increased awareness of complexity in the research process and in \u2018push button\u2019 replications.<\/p>\n\n<p>This landmark study represents one of the largest systematic investigations of computational reproducibility, involving 85 research teams attempting to reproduce the same published results. The findings reveal alarming levels of variability even in supposedly straightforward computational reproduction tasks, highlighting fundamental challenges in scientific reliability. The study\u2019s key insight\u2014that transparency helps but isn\u2019t sufficient\u2014has profound implications for how we conduct and evaluate scientific research. 
The identification of five distinct sources of error (mistakes, procedural variations, missing components, interpretational differences, and questionable method knowledge) provides a roadmap for improving reproducibility practices across the sciences.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/reliability-replications\/\">The reliability of replications: a study in computational reproductions<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on March 19, 2025.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/coling-crypto\/"}},"id":"https:\/\/suchow.io\/coling-crypto","published":"2025-01-20T00:00:00-05:00","updated":"2025-01-20T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>An agent-based approach to the single cryptocurrency trading challenge at the FinNLP-FNP-LLMFinLegal shared task at COLING 2025.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/coling-crypto\/\">FinNLP-FNP-LLMFinLegal @ COLING 2025 shared task: agent-based single cryptocurrency trading challenge<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on January 20, 2025.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/investorbench\/"}},"id":"https:\/\/suchow.io\/investorbench","published":"2025-01-17T00:00:00-05:00","updated":"2025-01-17T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>Recent advancements have underscored the potential of large language model (LLM)-based agents in financial decision-making. 
Despite this progress, the field currently encounters two main challenges: (1) the lack of a comprehensive LLM agent framework adaptable to a variety of financial tasks, and (2) the absence of standardized benchmarks and consistent datasets for assessing agent performance. To tackle these issues, we introduce InvestorBench, the first benchmark specifically designed for evaluating LLM-based agents in diverse financial decision-making contexts. InvestorBench enhances the versatility of LLM-enabled agents by providing a comprehensive suite of tasks applicable to different financial products, including single assets such as stocks, cryptocurrencies, and exchange-traded funds (ETFs). Additionally, we assess the reasoning and decision-making capabilities of our agent framework using thirteen different LLMs as backbone models, across various market environments and tasks. Furthermore, we have curated a diverse collection of open-source datasets and developed a comprehensive suite of environments for financial decision-making. 
This establishes a highly accessible platform for evaluating financial agents\u2019 performance across various scenarios.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/investorbench\/\">INVESTORBENCH: A benchmark for financial decision-making tasks with LLM-based agents<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on January 17, 2025.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/beyond-20-questions\/"}},"id":"https:\/\/suchow.io\/beyond-20-questions","published":"2024-12-21T00:00:00-05:00","updated":"2024-12-21T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>The dominant paradigm of experiments in the social and behavioral sciences views an experiment as a test of a theory, where the theory is assumed to generalize beyond the experiment\u2019s specific conditions. According to this view, which Alan Newell once characterized as \u201cplaying twenty questions with nature,\u201d theory is advanced one experiment at a time, and the integration of disparate findings is assumed to happen via the scientific publishing process. In this article, we argue that the process of integration is at best inefficient, and at worst it does not, in fact, occur. We further show that the challenge of integration cannot be adequately addressed by recently proposed reforms that focus on the reliability and replicability of individual findings, nor simply by conducting more or larger experiments. Rather, the problem arises from the imprecise nature of social and behavioral theories and, consequently, a lack of commensurability across experiments conducted under different conditions. Therefore, researchers must fundamentally rethink how they design experiments and how the experiments relate to theory. 
We specifically describe an alternative framework, integrative experiment design, which intrinsically promotes commensurability and continuous integration of knowledge. In this paradigm, researchers explicitly map the design space of possible experiments associated with a given research question, embracing many potentially relevant theories rather than focusing on just one. Researchers then iteratively generate theories and test them with experiments explicitly sampled from the design space, allowing results to be integrated across experiments. Given recent methodological and technological developments, we conclude that this approach is feasible and would generate more-reliable, more-cumulative empirical and theoretical knowledge than the current paradigm \u2013 and with far greater efficiency.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/beyond-20-questions\/\">Beyond playing 20 questions with nature: Integrative experiment design in the social and behavioral sciences<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on December 21, 2024.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/mirror-reversal\/"}},"id":"https:\/\/suchow.io\/mirror-reversal","published":"2024-11-01T00:00:00-04:00","updated":"2024-11-01T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>An investigation of how mirror reversal affects the perception of faces, examining the asymmetries in facial appearance and their consequences for recognition and judgment.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/mirror-reversal\/\">A reflection on faces seen under mirror reversal<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on November 01, 
2024.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/finnlp-acl\/"}},"id":"https:\/\/suchow.io\/finnlp-acl","published":"2024-08-01T00:00:00-04:00","updated":"2024-08-01T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>An approach to the FinNLP-AgentScen-2024 shared task on financial challenges in large language models at ACL 2024.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/finnlp-acl\/\">FinNLP-AgentScen-2024 shared task: financial challenges in large language models<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on August 01, 2024.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/bayesian-fusion\/"}},"id":"https:\/\/suchow.io\/bayesian-fusion","published":"2024-07-01T00:00:00-04:00","updated":"2024-07-01T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>A method for actively learning a Bayesian matrix fusion model with deep side information, combining Bayesian data fusion with deep learning representations for improved prediction and inference.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/bayesian-fusion\/\">Actively learning a Bayesian matrix fusion model with deep side information<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on July 01, 2024.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/finmem-iclr\/"}},"id":"https:\/\/suchow.io\/finmem-iclr","published":"2024-05-01T00:00:00-04:00","updated":"2024-05-01T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>Workshop version 
of FinMem, a performance-enhanced LLM trading agent with layered memory and character design, presented at the LLMAgents Workshop at ICLR 2024.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/finmem-iclr\/\">FinMem: A performance-enhanced LLM trading agent with layered memory and character design<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on May 01, 2024.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/fincon\/"}},"id":"https:\/\/suchow.io\/fincon","published":"2024-01-16T00:00:00-05:00","updated":"2024-01-16T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>Large language models (LLMs) have demonstrated notable potential in conducting complex tasks and are increasingly utilized in various financial applications. However, high-quality sequential financial investment decision-making remains challenging. These tasks require multiple interactions with a volatile environment for every decision, demanding sufficient intelligence to maximize returns and manage risks. Although LLMs have been used to develop agent systems that surpass human teams and yield impressive investment returns, opportunities to enhance multi-source information synthesis and optimize decision-making outcomes through timely experience refinement remain unexplored. Here, we introduce FinCon, an LLM-based multi-agent framework tailored for diverse financial tasks. Inspired by effective real-world investment firm organizational structures, FinCon utilizes a manager-analyst communication hierarchy. This structure allows for synchronized cross-functional agent collaboration towards unified goals through natural language interactions and equips each agent with greater memory capacity than humans. 
Additionally, a risk-control component in FinCon enhances decision quality by episodically initiating a self-critiquing mechanism to update systematic investment beliefs. The conceptualized beliefs serve as verbal reinforcement for the future agent\u2019s behavior and can be selectively propagated to the appropriate node that requires knowledge updates. This feature significantly improves performance while reducing unnecessary peer-to-peer communication costs. Moreover, FinCon demonstrates strong generalization capabilities in various financial tasks, including stock trading and portfolio management.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/fincon\/\">FinCon: A synthesized LLM multi-agent system with conceptual verbal reinforcement for enhanced financial decision making<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on January 16, 2024.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/finmem\/"}},"id":"https:\/\/suchow.io\/finmem","published":"2024-01-15T00:00:00-05:00","updated":"2024-01-15T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>We introduce FINMEM, a novel Large Language Models (LLM)-based agent framework for financial trading, designed to address the need for automated systems that can transform real-time data into executable decisions. FINMEM comprises three core modules: Profile for customizing agent characteristics, Memory for hierarchical financial data assimilation, and Decision-making for converting insights into investment choices. The Memory module, which mimics human traders\u2019 cognitive structure, offers interpretability and real-time tuning while handling the critical timing of various information types. 
It employs a layered approach to process and prioritize data based on its timeliness and relevance, ensuring that the most recent and impactful information is given appropriate weight in decision-making. FINMEM\u2019s adjustable cognitive span allows retention of critical information beyond human limits, enabling it to balance historical patterns with current market dynamics. This framework facilitates self-evolution of professional knowledge, agile reactions to investment cues, and continuous refinement of trading decisions in financial environments. When compared against advanced algorithmic agents using a large-scale real-world financial dataset, FINMEM demonstrates superior performance across classic metrics like Cumulative Return and Sharpe ratio. Further tuning of the agent\u2019s perceptual span and character setting enhances its trading performance, positioning FINMEM as a cutting-edge solution for automated trading.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/finmem\/\">FinMem: A performance-enhanced LLM trading agent with layered memory and character design<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on January 15, 2024.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/hicss-responsible-ai\/"}},"id":"https:\/\/suchow.io\/hicss-responsible-ai","published":"2024-01-05T00:00:00-05:00","updated":"2024-01-05T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>An exploration of public opinion on responsible AI using cultural consensus theory, examining how shared beliefs about AI governance and ethics emerge across populations.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/hicss-responsible-ai\/\">Exploring public opinion on responsible AI through the lens of cultural consensus theory<\/a> was originally published by Jordan Suchow at <a 
href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on January 05, 2024.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/face-patent-continuation\/"}},"id":"https:\/\/suchow.io\/face-patent-continuation","published":"2023-10-01T00:00:00-04:00","updated":"2023-10-01T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>A continuation patent (U.S. Patent No. 11,727,717) covering data-driven, photorealistic social face-trait encoding, prediction, and manipulation using deep neural networks. This is a continuation of U.S. Patent No. 11,250,245.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/face-patent-continuation\/\">Data-driven, photorealistic social face-trait encoding, prediction, and manipulation using deep neural networks (continuation)<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on October 01, 2023.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/folk-theories\/"}},"id":"https:\/\/suchow.io\/folk-theories","published":"2023-08-01T00:00:00-04:00","updated":"2023-08-01T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>An examination of how folk theories of sociotechnical systems shape the design and operation of digital platforms, with implications for platform governance and user behavior.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/folk-theories\/\">The design and operation of digital platforms under folk theories of sociotechnical systems<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on August 01, 
2023.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/predicting-judgments\/"}},"id":"https:\/\/suchow.io\/predicting-judgments","published":"2023-07-15T00:00:00-04:00","updated":"2023-07-15T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>Deep neural network representations of entities can serve as inputs to computational models of human mental representations to predict people\u2019s behavioral and physiological responses to those entities. Though increasingly successful in their predictive capabilities, the implicit notion of \u201chuman\u201d that they rely upon often glosses over individual-level differences in beliefs, attitudes, and associations, as well as group-level cultural constructs. In this paper, we model shared representations of food healthiness by aligning learned word representations with the consensus among a group of respondents. To do so, we extend Cultural Consensus Theory to include latent constructs structured as fine-tuned word representations. 
We then apply the model to a dataset of people\u2019s judgments of food healthiness. We show that our method creates a robust mapping between learned word representations and culturally constructed representations that guide consumer behavior.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/predicting-judgments\/\">Predicting judgments of food healthiness with deep latent-construct cultural consensus theory<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on July 15, 2023.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/scaling-up-behavioral-studies\/"}},"id":"https:\/\/suchow.io\/scaling-up-behavioral-studies","published":"2023-07-01T00:00:00-04:00","updated":"2023-07-01T00:00:00-04:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>A century of experiments on human visual memory has catalogued the many determinants of what people remember about their visual environments. 
In a massive experimental study of visual memory, Huang leverages mobile gaming to collect a dataset of 35 million behavioural responses that reveals how the mechanisms of visual spatial memory fit together.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/scaling-up-behavioral-studies\/\">Scaling up behavioural studies of visual memory<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on July 01, 2023.<\/p>"},{"title":{"@attributes":{"type":"html"}},"link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/suchow.io\/psa-covid-dataset\/"}},"id":"https:\/\/suchow.io\/psa-covid-dataset","published":"2023-01-15T00:00:00-05:00","updated":"2023-01-15T00:00:00-05:00","author":{"name":"Jordan Suchow","uri":"https:\/\/suchow.io","email":"jws@stevens.edu"},"content":"<p>This paper presents the dataset from the Psychological Science Accelerator\u2019s COVID-19 rapid-response projects, a global collaborative effort to study psychological and behavioral responses to the pandemic.<\/p>\n\n  <p><a href=\"https:\/\/suchow.io\/psa-covid-dataset\/\">The Psychological Science Accelerator's COVID-19 rapid-response dataset<\/a> was originally published by Jordan Suchow at <a href=\"https:\/\/suchow.io\">Jordan William Suchow<\/a> on January 15, 2023.<\/p>"}]}