2012
began to explore the potential for cooperation in an emerging field then still referred to as 'Computational Philology.' Eventually, the 'Arbeitsstelle Computerphilologie', one of the first institutions of its kind in Germany, was established in the School of Humanities.
Understanding Digital Humanities, 2012
2017
The structure of the Digital Humanities master’s program at the University of Stuttgart is characterized by a large proportion of classes related to natural language processing. In this paper, we discuss the motivation for this design and the associated challenges that students and teachers face. To provide background, we also summarize our underlying perspective on Digital Humanities. Our discussion is driven by a qualitative analysis of a survey distributed to the students of the program.
Grey Room 73, 2018
Bottom: Gruppo N. Rilievo otticodinamico (Optico-dynamic relief), 1962. As displayed in the 1962 Arte programmata exhibition at the Olivetti Showroom in Milan. Frame enlargement from Enzo Monachesi, dir., Arte programmata (1963). Opposite: Gruppo N. Interferenza geometrica (Geometric interference), 1962. As displayed in the 1962 Arte programmata exhibition at the Olivetti Showroom in Milan. Frame enlargement from Enzo Monachesi, dir., Arte programmata (1963).
This paper describes the intense software filtering that has allowed the arXiv e-print repository to sort and process large numbers of submissions with minimal human intervention, making it one of the most important and influential open access repositories to date. The paper narrates arXiv's transformation, through sophisticated sorting and filtering algorithms that decrease human workload, from a small mailing list used by a few hundred researchers to a site that processes thousands of papers per month. However, there are significant negative consequences for authors who have been filtered out of arXiv's main categories. There is thus a continued need to check and balance arXiv's boundaries, grounded in the essential tension between stability and innovation.
This research paper aims to reflect on the meaning of computation in architecture. We approach the subject by comparing key discussions related to the theme of cyberspace during the middle and end of the 20th century with significant theoretical discussions during the “new computer age” of the nineties. The discursive atmosphere of the epoch was highly influenced by philosophical thinkers like Gilles Deleuze and Félix Guattari and their reflections on materiality, complexity, and topology. Deleuze's 1988 work on Leibniz introduced a new idea of the object and of formal continuity in architecture, which has also played a significant role in the use of computers. This was particularly important with regard to the appropriation of the simulation and animation techniques available in 3D software packages, which became the main medium for choreographing kinematics and forces to conceive forms. In contrast, the introduction of the computer also triggered more sophisticated discussions about machines and society and how computers influence the way we understand our world. The Massachusetts Institute of Technology (MIT) was an important research environment and a pioneer in computational systems. In 1960, J. C. R. Licklider proposed a man-machine symbiosis in which the cognitive processes of both would be tightly coupled. Nicholas Negroponte, founder of the Architecture Machine Group at MIT (1968-1972), examined CAD as an intelligent system that augments both the object and the process, and speculated on an architecture without architects. In 1976, Cedric Price, through his “Generator Project”, proposed an architecture with agency that could learn to perceive human senses and even get bored. If earlier discussions in computing were focused on the relationships between machine-machine, machine-human, and machine-environment interactions, why did the nineties adopt digital tools with such vigour, as passive design agents of formal expressionism?
This is the report of a panel discussion held in connection with the special session on computational methods in dialectology at Methods XIII: Methods in Dialectology on 5 August 2008 at the University of Leeds. We scheduled this panel discussion in order to reflect on what the introduction of computational methods has meant to our subfield of linguistics, dialectology (in alternative divisions of linguistic subfields also known as variationist linguistics), and on whether the dialectologists' experience is typical of such introductions in other humanities studies. We emphasize that we approached the question as working scientists and scholars in the humanities rather than as methodology experts or as historians or philosophers of science; that is, we wished to reflect on how the introduction of computational methods has gone in our own field in order to conduct our own future research more effectively, or alternatively, to suggest to colleagues in neighbouring disciplines which aspects of computational studies have been successful, which have not, and which might have been introduced more effectively. Since we explicitly wished not only to reflect on how things have gone in dialectology but also to compare our experiences with others', we invited panellists with broad experience in linguistics and other fields. We introduce the chair and panellists briefly. John Nerbonne chaired the panel discussion. He works on dialectology, but also on grammar and on applications such as language learning, information extraction, and information access. He works in Groningen and is past president of the Association for Computational Linguistics (2002). David Robey is an expert on medieval Italian literature, professor at Reading, and Programme Director of the ICT in Arts and Humanities Research Programme of the (British) Arts and Humanities Research Council. He is also president of the Association for Literary and Linguistic Computing.
Paul Heggarty is an expert on the Quechua language family and also works on Romance and Germanic, studying both synchronic variation and diachronic developments; he is one of a growing number of researchers applying phylogenetic inference (and software developed in biology for this purpose) to historical linguistics. He is at Cambridge. Roeland van Hout is an expert on sociolinguistics, phonetics, and second language acquisition. He is co-author (with Toni Rietveld) of the world's best book on applying statistics to linguistics (Harald Baayen has just come out (2008) with a contender, but 15 years at #1 is impressive). He is at Nijmegen, and he has served on the Dutch research council's Humanities board for the last five years. In inviting the panellists, we suggested that they deliver some introductory remarks structured in a way that responds, inter alia, to the following questions: Are there elements of the computational work in language variation that should be made known to other scholars in the humanities? Are there other computational lines of work in the humanities that are not used in studying language variation but which should be of interest? Is the primary common interest the computational tools, or are there research issues which might benefit from a plurality of perspectives? We emphasized, however, that the panel speakers should feel free to respond to the question of how the disciplines might interact in more creative ways, especially if they felt that the questions above were put too narrowly. Each panellist presented some ideas before the discussion started. Paul Heggarty summarised some basic methodological issues in applying computational techniques to language data, reviewing typical criticisms and how best to answer them.
Literary and Linguistic Computing, 2005
This article was written on the occasion of the thirtieth anniversary of the Department for Literary and Documentary Data Processing in Tübingen. It gives an overview of humanities computing developments since the formation of this research department. The paper is divided into three parts. First, the experiences in humanities computing are reviewed. For this purpose the author points out various aspects of the development and exploitation of scholarly materials using computers, considering some of the current work to create new tools for research. This section is followed by a discussion of some of the key challenges of this century with which humanities computing, and the scholarship of which it is a part, are faced. Finally, the author summarizes what, in his opinion, will be the key roles of humanities computing in the future. * From the minutes of the 80th Colloquium on the Application of Electronic Data Processing in the Humanities at the University of Tübingen, of the 18th.
2017
This panel reports on the open, shareable, and reproducible workflow methodology for digital humanities research developed by the 4Humanities.org "WhatEvery1Says" (WE1S) project. WE1S is topic modeling a large corpus of articles related to the humanities in newspapers, magazines, and other media sources in the U.S., U.K., and Canada from 1981 on. While the panel presents WE1S's conceptual goals and prototype experiments in using outcomes in humanities advocacy, its focus is on the technical and interpretive workflow developed by the project for humanities-oriented data work. WE1S's manifest system for data provenance and workflow management, its virtual workspace manager for integrated, containerized data manipulation and processing, and its interpretation protocol for how humans read topic models suggest a generalizable open approach based not on particular technologies and methods but on annotated methods. Moreover, there is a philosophical fit between such an ap...
2016
2012
Once upon a time, a very long time ago in the very recent history of computers, in 1967, an Arts graduate of this University was wandering the corridors of the Physics building. The Arts graduate was looking for someone of whom to ask a simple question about computing and texts. The Physics building then housed the University's computer resources, two, and soon afterwards three, large computers (what are now referred to as 'mainframe computers'), ideally suited for scientific applications requiring complex and lengthy mathematical calculations with numerical data. The Arts graduate had worked as a programmer for a commercial firm, using a mainframe to do commercial applications. Typically, at that time, these still involved numerical calculations, such as those associated with a large pay-roll, or with stock control of many branches of a large company, but they differed from scientific applications in two ways. First, the calculations were simple but the amount of data ...
Reviews in History, 2018
Over the past years, there has been a lot of debate around the nature of scholarship in the area of Humanities Computing or, more recently, Digital Humanities (DH); more specifically, there have been several attempts to define it and identify its disciplinary characteristics.(1) Despite disagreements in terms of its definition, though, the field has now reached a stage where it is being increasingly institutionalised, with centres and practitioners all over the world and courses teaching some of its core principles and methods. Yet, even though DH-related scholarship has been practised since the mid-20th century, only recently has interest focused on the history of the subject, important events, or principal figures. As the authors of the book Computation and the Humanities: Towards an Oral History of Digital Humanities argue, we still know very little about the origins of the field: 'Indeed, our understanding of the history of the field can, at the present time, be best described as a shattered mosaic of uncertain but intricate design.' (p. 10) We are informed that documenting the history of DH is an essential part of understanding its diversity and reach, as well as how it might progress in the future; besides, recording and studying key moments from the past has long been an established practice in other academic areas, such as the History of Science. Considering the above points, this is a good time for retrospection on the development of the field and the achievements of its founding members, as well as the challenges they met. However, in the first chapter, it is clearly argued that given the complexities involved in defining the field, including its inherently interdisciplinary nature, different scholarly attitudes, and international character, it may be more appropriate to think of 'histories' rather than 'a history' of DH.
At this point, it should be mentioned that this book is based on work done for the project 'Hidden Histories: Computing and the Humanities [2]'. The project focuses on how computing technologies have been applied to humanistic work since 1949, collecting evidence that demonstrates the context within which early Humanities
In this article, originally published in 2001, I sketched out a brief (and by no means impartial) outline of the debate on Humanities Computing from an Italian and, so to speak, “Southern-European” perspective. Since the discussion involves countless areas (ranging from theoretical research to applications, from education to the digitization of resources, etc.) I selected, from among the many possible arguments, one which seemed to be a point of convergence of the practical and theoretical efforts of the discipline: the curriculum. I am still convinced that behind this topic an underground river is flowing and nourishing the most delicate questions of Digital Humanities.
Semantic Metadata, Humanist Computing, and Digital Humanities
At its beginnings, Humanities Computing was characterized by a primary interest in methodological issues and their epistemological background. Subsequently, Humanities Computing practice has been predominantly driven by technological developments, and the main concern has shifted from content processing to the representation of documentary sources in digital form. The Digital Humanities turn has brought artistic and literary practice in direct digital form more to the fore, as opposed to a supposedly commonplace application of computational methods to scholarly research. As an example of a way back to the original motivations of applied computation in the humanities, a formal model of the interpretive process is proposed here, whose implementation may be achieved through the application of data-processing procedures typical of so-called artificial adaptive systems.
Literary and Linguistic Computing, 2005
At the University of Groningen we have emphasized a simple view of humanities computing as computing in service of the humanities. This means that we seek to answer scholarly questions in linguistics, history, and art history by using the computer, exploiting especially its ability to process large amounts of data and the transparency of its processing. We have shied away from questions of digital culture, avoided overemphasis on pedagogical applications of computers, and eschewed visions of scientific revolution, including, in particular, the revolutionary idea that humanities computing is a discipline, preferring to think of it instead as a federation of disciplines whose practitioners find it opportune to collaborate on some common problems. We have discovered that our ability to deal with large amounts of data marks the distinctive contributions we can make to humanities scholarship.
In: Petr Macek / Jana Horakova / Martin Flasar (eds.): Umeni a nova media, Brno: Masarykova univerzita, 2011, 160-178.
2015
The digital electronic computer was developed as a technology for automating the large-scale calculation work that since the time of the French Revolution had been performed cooperatively, in an advanced form of division of labor ('human computers'). The resulting digital electronic computer was little more than a large-scale automatic calculator. Its development involved various scientists for whom the new kind of calculator was scientific equipment of essential importance in their own work (cryptology, weapons design), as well as various engineers in a supporting role. Technologically, this turned out to be close to an aberration of which little survived, apart from batch-computing such as payroll calculation, tax calculation, compiling, and of course certain scientific calculations.