
Liviu Gaita
Address: Bucharest, Bucuresti, Romania
Papers by Liviu Gaita
This progress in information technology has subjected the current paradigm of scientific knowledge to revision pressures unprecedented from Karl Popper to the present day. The new debate has crystallized, almost as a public spectacle, around the confrontation between Noam Chomsky, the illustrious American linguist and political scientist, and Peter Norvig, director of research at Google. This confrontation of ideas began in 2011, at the Brains, Minds, and Machines symposium hosted by the Massachusetts Institute of Technology, as a discussion of „how one can obtain artificial intelligence” and, within a few months, through articles and relatively elaborate interventions [Norvig, 2012], became a debate about „what human knowledge is”. Essentially, it is about knowledge understood as an explanatory model, simple enough for every link to be logically comprehensible (Chomsky), and, alternatively, knowledge as an informational model, a „black box” capable of offering increasingly better predictions based on historical data and internal statistical search algorithms (Norvig). The question is: does the progress of knowledge still depend on models that are elegant in their simplicity and logically accessible – as a whole and in detail – models that are continually revised to approximate the available data as well as possible? Or are we entering a new era of statistical informational models, capable of continuously processing enormous quantities of data – historical or real-time – and providing classifications, predictions and so on, but without underlying explanations: we find out how much and when, with associated probabilities, but the question „why?” loses its relevance.
In the applied medical sciences, the most significant reflection of this pressure to change the paradigm relating historical data, formalized knowledge and the practice of decisions and predictions is Evidence Based Medicine (EBM). It is not a new concept – its origins can be traced (at least) to the comparative clinical study on six groups of patients through which the British physician James Lind, of the Royal Navy, established in 1747 which foods – lemons and oranges – could prevent and treat scurvy. Isolating vitamin C and clarifying the pathogenesis of scurvy came almost 200 years later, in 1932, but the prophylactic treatment had already been generalized by the Royal Navy as early as 1787, based on the evidence collected by Lind. The reference definition of EBM was settled in 1996:
„the conscientious, explicit and judicious use of the best evidence in formulating decisions about the treatment of individual patients” [Sackett, 1996]. Best evidence here means proof (as certain as possible), not explanations (as plausible as possible). In the last few decades, the scale of medical research and the increasing pressure for the maximum but responsible use of its results have established EBM as a dominant technique for collecting, organizing, evaluating and using medical information; EBM procedures are now standardized at national and international levels, with specialized associations and regulation and certification bodies facilitating the development of working methods [Mayer, 2010].
Closely correlated with EBM, one direction of accelerated progress in applying new information technologies in medicine is that of prediction models capable of continuous improvement, through what is suggestively called „learning” or „training”. Artificial neural networks are one of the forms intensely studied and tested in this context, as part of the research surrounding „artificial intelligence”. Promising results have been reported with increasing frequency in applying neural networks to diagnosis [Ganesan, 2010], [Jiang, 2010], [Barwad, 2012], [Amato, 2013], but also to evaluating prognostic and predictive factors [Lundin, 1999]. The number of such factors under evaluation has risen exponentially lately, especially due to the progress in genomics and molecular biology, creating a need for new information instruments capable of processing such a large data volume. Prediction based on such information models fed with huge databases is already transitioning into current medical practice [Silva, 2015], [Prigg, 2015]: at Beth Israel Deaconess Hospital in Boston, USA, Dr. Steven Horng recently announced the operational phase of an information instrument that formulates a diagnosis, indicates recommended therapeutic procedures and estimates the prognosis for every patient in intensive care. Dr. Horng, a physician specializing in both emergency medicine and biomedical informatics, reports that the accuracy with which the program predicts, for example, the imminence of a patient's death within the next 30 days exceeds 96%. The patient's recent and historical clinical and paraclinical data are fed into an information model built on similar historical data collected over the last 30 years from more than 250,000 patients. The database is augmented continuously by collecting information, every 3 minutes, from the physiological monitors installed for all patients admitted to the hospital's ICU.
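As a purely illustrative sketch (not the Beth Israel Deaconess system, whose inputs and architecture are not described here), the following Python fragment shows the general shape of such a model: a small feed-forward neural network trained on synthetic, hypothetical clinical features (age, heart rate, blood pressure, creatinine, lactate) to estimate the probability of death within 30 days. All names and data are invented for demonstration.

# Illustrative sketch only: a small feed-forward neural network trained on
# synthetic "clinical" features to predict 30-day mortality. The features
# and data are invented; this is not any published hospital model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors: age, heart rate, systolic BP, creatinine, lactate
X = np.column_stack([
    rng.normal(65, 15, n),    # age (years)
    rng.normal(90, 20, n),    # heart rate (bpm)
    rng.normal(120, 25, n),   # systolic blood pressure (mmHg)
    rng.normal(1.2, 0.6, n),  # creatinine (mg/dL)
    rng.normal(2.0, 1.5, n),  # lactate (mmol/L)
])
# Synthetic outcome: risk rises with age, lactate and creatinine
logit = 0.04 * (X[:, 0] - 65) + 0.8 * (X[:, 4] - 2.0) + 0.5 * (X[:, 3] - 1.2) - 2.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

prob = model.predict_proba(scaler.transform(X_test))[:, 1]
print("AUC on held-out data:", round(roc_auc_score(y_test, prob), 3))

In a real system, the synthetic matrix would be replaced by the continuously collected monitor and laboratory data, and calibration of the predicted probabilities would matter at least as much as raw discrimination.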
In the last 30 years, technological progress has brought about a true revolution in medical imaging [Dougherty, 2009]. On one hand, minimally invasive investigation techniques have become accessible to a very large number of patients, in both human and veterinary medicine: computerized tomography, magnetic resonance imaging (nuclear magnetic resonance), confocal reflection microscopy. On the other hand, the increased performance of information instruments facilitates new approaches to extracting useful information from medical images [Bubnov, 2014], [Lennon, 2015]. The digital format, now practically universal both for the results of new investigation techniques and for classical microscopy and radiology, facilitates processing for the extraction of useful information, the storing and accessing of images, as well as comparison, morphometric and statistical analysis, identification of regions of interest/lesions, etc. [Rangayyan, 2005].
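A minimal sketch of the kind of processing mentioned here – automatic identification of regions of interest and simple morphometry – could look as follows in Python with scikit-image; the image is synthetic and the thresholding choice (Otsu) is an assumption, not the method of any particular cited study.

# Minimal sketch, not the author's pipeline: identification of regions of
# interest in a synthetic grayscale image by Otsu thresholding and
# connected-component labeling, followed by simple morphometry.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Synthetic "micrograph": dark background with a few brighter blobs
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx, r in [(60, 70, 20), (150, 180, 30), (200, 60, 15)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] += 0.5

# Threshold and label connected regions
mask = img > threshold_otsu(img)
labels = label(mask)

# Basic morphometric descriptors per region of interest
for region in regionprops(labels):
    print(f"ROI {region.label}: area={region.area} px, "
          f"eccentricity={region.eccentricity:.2f}")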
The road towards semi-automated and even automated procedures that can be used directly in research, screening, pre-diagnosis or diagnosis is open and is beginning to be intensely exploited, including by the commercial component of medical activity. Using automated and semi-automated procedures also leads to an exponential increase in the volume of collected data, which in turn demands information approaches such as neural networks for processing, or such as EBM for testing and application. It is a spiral of progress that is presently accelerating, and in which the human professional, the expert, can be centrifugally marginalized towards positions in which he only contributes punctually to ample, interdependent processes that he no longer controls. It is possible that we are witnessing the dawn of a new historical era, in which the virtual reality continuously created by humans becomes as significant and autonomous as objective reality. Our attempt to know and change reality in its entirety for our own purposes will inevitably have to adapt to this new situation.
Simon Rosenfeld, of the National Cancer Institute (USA), considers that oncology and oncological research are at a critical crossroads with the dynamics of nonlinear systems and with research on swarm intelligence, in the contemporary attempt to offer a new, unifying vision of the way in which systems as diverse as „biological networks, communities of social insects, robotic communities, microbial communities, communities of somatic cells, (…) social networks, and (…) many other systems” [Rosenfeld, 2015] appear and function. Rosenfeld's unifying vision marks an essential link between carcinogenesis, fractal analysis – developed mainly as a modeling instrument for nonlinear dynamic systems – and artificial neural networks – one of the best documented examples of man-made swarm intelligence.
Fractal analysis yields measurements – numbers that quantify the non-periodic complexity so characteristic of biological structures at all size scales on which life manifests itself. This research investigated the possibility of including fractal analysis applied to histopathological or cytopathological microscopic images among the instruments used for diagnosing tumours in animals. I then tested the introduction of the fractal dimension among oncological prognostic factors. For this purpose, I developed statistical models and artificial neural network models of survival duration, including the fractal dimension in the set of predictors from which these models estimate the prognosis for cancer patients.
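A minimal sketch of the measurement this paragraph refers to – an estimate of the fractal dimension of a binary (thresholded) structure by box counting – is given below; the test object is a synthetic random walk, not a histopathological image, and the exact box sizes and fitting procedure are illustrative assumptions.

# Illustrative box-counting estimate of the fractal dimension of a 2D
# binary structure; the test object is a synthetic random-walk trace.
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the Minkowski-Bouligand (box-counting) dimension of a 2D binary mask."""
    counts = []
    for s in sizes:
        # Number of s x s boxes containing at least one foreground pixel
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    # Slope of log N(s) versus log(1/s) gives the dimension estimate
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Synthetic binary structure: a 2D random walk rasterized on a 512 x 512 grid
rng = np.random.default_rng(2)
pos = np.cumsum(rng.choice([-1, 1], size=(20000, 2)), axis=0)
pos -= pos.min(axis=0)
pos = (pos * 511 // max(pos.max(), 1)).astype(int)
mask = np.zeros((512, 512), dtype=bool)
mask[pos[:, 0], pos[:, 1]] = True

print(f"Estimated fractal dimension: {box_counting_dimension(mask):.2f}")

In the prognostic models described above, the resulting value would simply be appended to the other predictors (clinical, histological, molecular) from which survival duration is estimated.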