
A parallel learning algorithm for Bayesian inference networks

2002, IEEE Transactions on Knowledge and Data …

Abstract

This research presents a novel distributed learning algorithm for Bayesian inference networks, designed to alleviate the knowledge engineering bottleneck associated with manual network construction. Leveraging under-utilized computing resources, the approach employs the Minimum Description Length (MDL) principle to formulate a serial search algorithm, which is then parallelized through an asynchronous distributed search technique known as nagging. Empirical results demonstrate significant improvements in learning performance and computational efficiency, enabling the learning of large Bayesian networks with up to 150 nodes across multiple workstations.
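To make the MDL-based scoring concrete, the sketch below shows how the Minimum Description Length principle can score candidate network structures over discrete data: the score trades off the cost of encoding a structure's parameters against the negative log-likelihood of the data under that structure, and a search procedure (serial or, as in the paper, parallelized) would minimize it over candidate DAGs. This is a generic illustration, not the paper's implementation; the function name, data layout, and exact encoding-cost term are assumptions.

```python
import math

def mdl_score(data, structure, arities):
    """Generic MDL score of a candidate DAG (lower is better).

    data:      list of dicts mapping variable name -> observed value
    structure: dict mapping each node -> list of its parent nodes
    arities:   dict mapping each node -> number of discrete states

    Score = parameter description length + negative log-likelihood,
    a common (assumed) form of the MDL criterion for Bayesian networks.
    """
    n = len(data)
    score = 0.0
    for node, parents in structure.items():
        # Free parameters in this node's conditional probability table:
        # one multinomial per parent configuration.
        q = 1
        for p in parents:
            q *= arities[p]
        k = q * (arities[node] - 1)
        # Parameter encoding cost: (k / 2) * log2(N) bits.
        score += 0.5 * k * math.log2(n)
        # Negative log-likelihood from maximum-likelihood counts.
        counts = {}
        for row in data:
            key = tuple(row[p] for p in parents)
            counts.setdefault(key, {}).setdefault(row[node], 0)
            counts[key][row[node]] += 1
        for dist in counts.values():
            total = sum(dist.values())
            for c in dist.values():
                score -= c * math.log2(c / total)
    return score

# Toy example: B perfectly copies A, so the structure A -> B
# should score lower (better) than the fully independent model.
data = [{"A": 0, "B": 0}] * 8 + [{"A": 1, "B": 1}] * 8
arities = {"A": 2, "B": 2}
dependent = {"A": [], "B": ["A"]}
independent = {"A": [], "B": []}
print(mdl_score(data, dependent, arities))    # lower score
print(mdl_score(data, independent, arities))  # higher score
```

A structure search would evaluate many such candidates; the paper's nagging technique distributes that evaluation asynchronously across workstations, with nagging processes re-exploring portions of the search space claimed by the primary search.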