2000
... order to gain access to local services and data. It is apparent that mobile network agents are going to undergo considerable development and become extensively used. The networked world is going to see many of these objects.
SODA, 1998
We define a new type of search problem called "mutual search", where k players arbitrarily spread over n nodes are required to locate each other by sending "Anybody at node i?" query messages (for example, processes in a computer network). If the messages are not delivered in the order they were sent (for example when the communication delay time is arbitrary) then two players require at least n-1 messages. In an asynchronous network, where the messages are delivered in the order they were sent, 0.88n messages suffice. In a synchronous network 0.586n messages suffice and 0.536n messages are required in the worst case. We exhibit a simple randomized algorithm with expected worst-case cost of 0.5n messages, and a deterministic algorithm for k ≥ 2 players with a cost well below n for all k = o(√n). The graph-theoretic framework we formulate for expressing and analyzing algorithms for this problem may be of independent interest.
Computers & Chemistry, 2000
Classical information theory concerns itself with communication through a noisy channel and how much one can infer about the channel input from a knowledge of the channel output. Because the channel is noisy, the input and output are only related statistically, and the rate of information transmission is a statistical concept with little meaning for the individual symbol used in transmission. Here we develop a more intuitive notion of information that is concerned with asking the right questions, that is, with finding those questions whose answers convey the most information. We call this confirmatory information. In the first part of the paper we develop the general theory, show how it relates to classical information theory, and show how, in the special case of search problems, it allows us to quantify the efficacy of information transmission regarding individual events. That is, confirmatory information measures how well a search for items having certain observable properties retrieves items having some unobserved property of interest. Thus confirmatory information facilitates a useful analysis of search problems and contrasts with classical information theory, which quantifies the efficiency of information transmission but is indifferent to the nature of the particular information being transmitted. The last part of the paper presents several examples where confirmatory information is used to quantify protein structural properties in a search setting.
Abstract—Searching the Internet for some object characterised by its attributes in the form of data, such as a hotel in a certain city whose price is below some level, is one of our most common activities when we access the Web. We discuss this problem in a general setting and compute the average amount of time and the energy it takes to find an object in an infinitely large search space. We consider the use of N search agents which act concurrently, covering both the case where the search agent knows which way it needs to go to find the object and the case where the search agent is perfectly ignorant and may even head progressively away from the object being sought. We show that under mild conditions regarding the randomness of the search and the use of a time-out, the search agent will always find the object despite the fact that the search space is infinite. We obtain a formula for the average search time and the average energy expended by N search agents acting concurrently and independently of each other. We see that the time-out itself can be used to minimise both the search time and the amount of energy that is consumed to find an object. A simple approximate formula is derived for the number of search agents that can help us guarantee that an object is found in a given time, and we discuss how the competition between search agents and other agents that try to hide the data object can be used by opposing parties to guarantee their own success.
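As a toy illustration of why a time-out makes even a blind search terminate, here is a minimal sketch. Everything concrete in it is an assumption made for illustration, not the paper's model: a one-dimensional lattice in place of a general search space, a symmetric random walk for the perfectly ignorant searcher, and a fixed time-out after which the searcher restarts at the source.

```python
import random

def search_with_timeout(D, timeout, p_right=0.5, seed=0):
    """Random-walk searcher on the non-negative integers.

    The object sits at position D. After `timeout` steps without
    success the searcher is recalled to the source (position 0) and
    restarts. Returns the total number of steps until the object is
    found. All modelling choices here are illustrative sketches,
    not the paper's exact model.
    """
    rng = random.Random(seed)
    total = 0
    while True:
        pos = 0
        for _ in range(timeout):
            pos += 1 if rng.random() < p_right else -1
            pos = max(pos, 0)          # reflect the walk at the source
            total += 1
            if pos == D:
                return total

print(search_with_timeout(D=5, timeout=50))
```

Because each restarted excursion reaches position D with some fixed positive probability, the searcher finds the object with probability one even though the lattice is infinite, which is the qualitative point of the abstract.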
Journal of the ACM, 1999
We introduce a search problem called "mutual search" where k agents, arbitrarily distributed over n sites, are required to locate one another by posing queries of the form "Anybody at site i?". We ask for the least number of queries that is necessary and sufficient. For the case of two agents using deterministic protocols we obtain the following worst-case results: In an oblivious setting (where all pre-planned queries are executed) there are no savings: n−1 queries are required and are sufficient. In a nonoblivious setting we can exploit the paradigm of "no news is also news" to obtain significant savings: in the synchronous case 0.586n queries suffice and 0.536n queries are required; in the asynchronous case 0.896n queries suffice and a fortiori 0.536n queries are required; for o(√n) agents using a deterministic protocol fewer than n queries suffice; there is a simple randomized protocol for two agents with worst-case expected 0.5n queries, and all randomized protocols require at least 0.125n worst-case expected queries. The graph-theoretic framework we formulate for expressing and analyzing algorithms for this problem may be of independent interest.
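The trivial n−1 upper bound can be illustrated with a naive synchronous protocol of our own construction (to be clear, this is not the paper's 0.586n scheme): the agent at site s stays silent for s*n rounds and then sweeps all other sites, so the lower-numbered agent always completes its sweep before the higher one sends anything.

```python
def naive_mutual_search(n, a, b):
    """Toy synchronous protocol for two agents at sites a and b
    (1-indexed) among n sites. The agent at site s waits s*n rounds,
    then queries every other site in increasing order, one query per
    round. The lower agent finishes its whole sweep (n-1 queries, by
    round lo*n + n - 1 <= hi*n) before the higher agent starts, so
    only the lower agent's queries are ever sent. Returns the number
    of queries posed before contact; it is always at most n-1."""
    lo, hi = sorted((a, b))
    queries = 0
    for site in (s for s in range(1, n + 1) if s != lo):
        queries += 1
        if site == hi:
            return queries
    raise RuntimeError("unreachable: hi is always among the queried sites")

print(naive_mutual_search(10, 3, 7))  # the agent at site 3 finds site 7
```

The point of the paper is that, unlike this baseline, a nonoblivious protocol can infer information from silence ("no news is also news") and beat n−1 by a constant factor.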
Modern information systems store a great deal of structured data; on the other hand, they also store huge amounts of data whose structure is unclear. For such loosely structured data, full-text search algorithms have limits. The wide range of information sources, together with the vaguely defined structure of the resources, is why this area can be described as search under uncertainty. For these reasons, a methodology for searching information sources under uncertain conditions is proposed. The paper identifies the fundamental sources of uncertainty in this problem domain. Subsequently, a model is proposed that represents the search methodology over resources of static character, dynamic character, and dynamic character with authorization. An expert system with a knowledge base composed of IF-THEN rules is then proposed. The model deals with uncertainties on several levels, which the paper also identifies.
International Journal of Game Theory, 2001
Consider a search game with an immobile hider in a graph. A Chinese postman tour is a closed trajectory which visits all the points of the graph and has minimal length. We show that encircling the Chinese postman tour in a random direction is an optimal search strategy if and only if the graph is weakly Eulerian (i.e., it consists of several Eulerian curves connected in a treelike structure).
In this paper the authors propose a new method of intelligent search using the vague set theory of Gau and Buehrer.
2009
One or more searchers must capture an invisible evader hiding in the nodes of a graph. We study this version of the graph search problem under additional restrictions, such as monotonicity and connectedness. We emphasize that we study node search, i.e., the capture of a node-located evader; this problem has so far received much less attention than edge search, i.e., the capture of an edge-located evader.
Information Processing Letters, 1977
Test, 1994
The statistical approach to search is a subject which interlaces with many other fields. This work continues a series of papers which study these relationships, with special emphasis on geometric problems in non-sequential search. A target T is sought using a series of tests X1, X2, .... From each test there is an observation Yi = f(Xi, θ), where θ is an unknown parameter and T = T(θ). Spacing theories and the theory of Voronoi regions can be used to represent the solutions to Bayes and entropy-based formulations of the search problem by partitioning the search field into consistent regions.
2012
Despite the existence of elegant algorithms for solving complex problems, exhaustive search has retained its significance, since many real-life problems exhibit no regular structure and exhaustive search is the only possible solution. The advent of high-performance computing, whether via multicore processors or distributed processors, emphasizes the possibility of exhaustive search by multiple search agents. Here we analyse the performance of exhaustive search when it is conducted by multiple search agents. Several strategies for cooperation between the search agents are evaluated. We find that the performance of the search improves with the level of cooperation. The same search performance can be achieved with homogeneous and heterogeneous search agents, provided that the lengths of the subregions allocated to individual search agents follow the differences in the speeds of the heterogeneous agents.
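A minimal sketch of the allocation idea for heterogeneous agents (our own illustration; the paper's exact allocation rule may differ): split the search region so that each agent's share is proportional to its speed, so that all agents finish their subregions at roughly the same time.

```python
def partition_by_speed(total, speeds):
    """Split an exhaustive-search region of `total` items among agents
    in proportion to their speeds. Largest-remainder rounding makes
    the integer shares sum exactly to `total`. Illustrative sketch,
    not the paper's algorithm."""
    weight = sum(speeds)
    raw = [total * s / weight for s in speeds]
    shares = [int(r) for r in raw]
    # Hand out the leftover items to the largest fractional parts.
    leftovers = sorted(range(len(raw)),
                       key=lambda i: raw[i] - shares[i], reverse=True)
    for i in leftovers[: total - sum(shares)]:
        shares[i] += 1
    return shares

# Each agent's share / speed is equal, so they finish together.
print(partition_by_speed(1000, [1, 2, 5]))  # -> [125, 250, 625]
```

With equal speeds this degenerates to an even split, recovering the homogeneous case mentioned in the abstract.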
IEEE Transactions on Computers, 1992
... finite maximum arcs. Overall, the incremental algorithm saves around 48% of excess time, and neither algorithm fails in compacting the examples. However, since this is still an increase over no foresight, it is better to use the incremental foresight algorithm only when the DPS fails.
Distributed systems are populated by a large number of heterogeneous entities that join and leave the system dynamically. These entities act as clients and providers and interact with each other in order to obtain a resource or to achieve a goal. To facilitate collaboration between entities, the system should provide mechanisms to manage information about which entities or resources are available in the system at a given moment, as well as how to locate them efficiently. However, this is not an easy task in open and dynamic environments, where the available resources change and global information is not always available. In this paper, we present a comprehensive vision of search in distributed environments. This review considers not only approaches from the Peer-to-Peer area, but also approaches from three further areas: Service-Oriented Environments, Multi-Agent Systems, and Complex Networks. In these areas, the search for resources, services, or entities plays a key role in the proper performance of the systems built on them. The aim of this analysis is to compare approaches from these areas, taking into account the underlying system structure and the algorithms or strategies that participate in the search process.
N searchers are sent out by a source in order to locate a fixed object which is at a finite distance D, but the search space is infinite and D is in general unknown. Each of the searchers has a finite random lifetime and may be subject to destruction or failures; it moves independently of the other searchers, and at intermediate locations some partial random information may be available about which way to go. When a searcher is destroyed or disabled, or when it "dies naturally", after some time the source becomes aware of this and sends out another searcher, which proceeds similarly to the one that it replaces. The search ends when one of the searchers finds the object being sought. We use N coupled Brownian motions to derive a closed-form expression for the average search time as a function of D, which depends on the parameters of the problem: the number of searchers, the average lifetime of searchers, the routing uncertainty, and the failure or destruction rate of searchers. We also examine the cost in terms of the total energy that is expended in the search.
2011
This paper surveys search theory with an emphasis on the contributions of the 2010 Nobel Memorial Prize winners, Peter Diamond, Dale Mortensen and Christopher Pissarides.
Mathematical Methods of Operations Research, 2008
A target is hidden in one of several possible locations, and the objective is to find the target as fast as possible. One common measure of effectiveness for the search process is the expected time of the search. This type of search optimization problem has been addressed and solved in the literature for the case where the searcher has imperfect sensitivity (possible false negative results) but perfect specificity (no false positive detections). In this paper, which is motivated by recent military and homeland security search situations, we extend the results to the case where the search is subject to false positive detections. Keywords: discrete search, imperfect specificity, uniformly optimal.
Biological Information, 2013
This paper provides a general framework for understanding targeted search. It begins by defining the search matrix, which makes explicit the sources of information that can affect search progress. The search matrix enables a search to be represented as a probability measure on the original search space. This representation facilitates tracking the information cost incurred by successful search (success being defined as finding the target). To categorize such costs, various information and efficiency measures are defined, notably, active information. Conservation of information characterizes these costs and is precisely formulated via two theorems, one restricted (proved in previous work of ours), the other general (proved for the first time here). The restricted version assumes a uniform probability search baseline, the general, an arbitrary probability search baseline. When a search with probability q of success displaces a baseline search with probability p of success where q > p, conservation of information states that raising the probability of successful search by a factor of q/p(>1) incurs an information cost of at least log(q/p). Conservation of information shows that information, like money, obeys strict accounting principles.
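The log(q/p) bound quoted above lends itself to a one-line computation. The numbers below are our own toy values, not the paper's: a blind uniform search over 2^20 items (p = 2^-20) displaced by a search that succeeds half the time (q = 0.5).

```python
import math

def active_information(q, p):
    """Information cost bound from the abstract: displacing a baseline
    search with success probability p by a search with probability
    q > p incurs an information cost of at least log2(q/p) bits."""
    if not 0 < p < q <= 1:
        raise ValueError("need 0 < p < q <= 1")
    return math.log2(q / p)

# Raising the success probability from 2**-20 to 0.5 must be paid
# for with at least 19 bits of information.
print(active_information(0.5, 2 ** -20))  # -> 19.0
```

This is the "strict accounting" the abstract refers to: the improvement factor q/p translates directly into a minimum information cost, just as a gain in money must be accounted for somewhere.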
Proceedings of the 2012 ACM symposium on Principles of distributed computing - PODC '12, 2012
We generalize the classical cow-path problem [7, 14, 38, 39] into a question that is relevant for collective foraging in animal groups. Specifically, we consider a setting in which k identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary and the goal is to find it as fast as possible as a function of both k and D, where D is the distance between the central location and the target. This is biologically motivated by cooperative, central place foraging such as performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. Our focus is on trying to find what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps agents must be highly dispersed, making communication difficult. Furthermore, if agents do not commence the search in synchrony then even initial communication is problematic. This holds, in particular, with respect to the question of whether the agents can communicate and conclude their total number, k. It turns out that knowledge of k by the individual agents is crucial for performance. Indeed, it is a straightforward observation that the time required for finding the treasure is Ω(D + D²/k), and we show in this paper that this bound can be matched if the agents have knowledge of k up to some constant approximation. We present an almost tight bound for the competitive penalty that must be paid, in the running time, if agents have no information about k. Specifically, on the negative side, we show that in such a case, there is no algorithm whose competitiveness is O(log k). On the other hand, we show that for every constant ε > 0, there exists a rather simple uniform search algorithm which is O(log^(1+ε) k)-competitive.
In addition, we give a lower bound for the setting in which agents are given some estimation of k. As a special case, this lower bound implies that for any constant ε > 0, if each agent is given a (one-sided) k^ε-approximation to k, then the competitiveness is Ω(log k). Informally, our results imply that the agents can potentially perform well without any knowledge of their total number k; however, to further improve, they must be given a relatively good approximation of k. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.
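The Ω(D + D²/k) bound can be made concrete with an idealized sketch (our own illustration, not an algorithm from the paper): enumerate grid cells in order of distance from the nest and deal them round-robin to the k agents, so the treasure's finder inspects about 1/k of the roughly πD² nearby cells after walking distance D.

```python
import math

def multi_agent_search_steps(D, k):
    """Idealized collective search on the integer grid (illustrative
    sketch only). Cells are enumerated by distance from the origin
    and dealt round-robin to k agents, who inspect their cells in
    that order. A treasure in a cell at distance D is found after at
    most (number of cells within distance D) / k inspections plus
    the D steps of travel, matching the Omega(D + D**2/k) bound up
    to constant factors."""
    cells_within = sum(1 for x in range(-D, D + 1)
                         for y in range(-D, D + 1)
                         if math.hypot(x, y) <= D)
    return D + math.ceil(cells_within / k)

print(multi_agent_search_steps(D=10, k=4))
```

Note that this dealing scheme implicitly assumes every agent knows k, which is exactly the knowledge whose necessity the paper quantifies.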
Mathematical and Computer Modelling, 1995
The primary function of most advanced traveler information systems (ATIS) involves the ability to plan optimal routes. Although the route planning ability of ATIS can be facilitated using centralized computing resources, most ATIS currently under development use in-vehicle computational resources. The primary advantage of this type of approach is fault tolerance; vehicles can continue to plan routes even in the absence of a centralized computing center. However, there are significant challenges associated with planning routes using in-vehicle electronics. These challenges result from trying to keep the cost of the in-vehicle electronics to a reasonable level. Thus, the route planning algorithm must be both space and time efficient to compensate for the limited amount of memory and computational resources available. This paper describes a new heuristic search algorithm that can be used for in-vehicle route planning. Certain types of heuristic search problems may contain an identifiable subgoal. This subgoal can be used to break a search into two parts, thus reducing search complexity. However, it is often the case that instead of one subgoal, there will be many possible subgoals, not all of which lie on an optimal path to the eventual goal. For this case, the search cannot simply be broken into two parts. However, the possibility of reducing the search complexity still exists. Previously, Chakrabarti et al. [1] developed a search algorithm called Algorithm I which exploits islands to improve search efficiency. An island, as defined by Chakrabarti et al. [1], is a possible subgoal. Previously, this algorithm had only been analyzed theoretically. In this paper, some experimental results comparing Algorithm I to A* are presented. Algorithm I has also been generalized to cases where more than one possible subgoal can appear on an optimal path.
Two new heuristic island search algorithms have been created from this generalization and are shown to provide even further improvement over Algorithm I and A*. The use of possible subgoals can make any type of search more efficient, not just A*. Modifications are discussed which describe how to incorporate possible subgoal knowledge into IDA* search.