Cooper and Herskovits describe a Bayesian framework and an algorithm for computing the probability of network structures given a database of cases. The Cooper and Herskovits paper does a good job of summarizing earlier work on constructing networks by Chow and Liu; Spirtes, Glymour, and Scheines; Pearl and Verma; and others.
Because they take a Bayesian approach, Cooper and Herskovits must assume a prior over the space of all possible network structures. They assume this prior to be uniform, though other possibilities are mentioned. Click here for additional discussion of the Cooper and Herskovits paper.
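The core of the Cooper and Herskovits framework is a closed-form expression for the marginal likelihood of the data given a structure: for each node i with r_i values, the contribution over each parent configuration j is (r_i - 1)! / (N_ij + r_i - 1)! times the product over values k of N_ijk!, where the N_ijk are counts from the database. The sketch below computes the log of this term for a single node; it is my own illustrative rendering, not their code, and assumes discrete variables coded 0..r-1, with the function name `k2_log_score` and the dict-of-cases data layout invented for the example.

```python
import math
from collections import Counter
from itertools import product

def k2_log_score(data, child, parents, r):
    """Log of the Cooper-Herskovits marginal-likelihood term for one node.

    data    : list of cases, each a dict mapping variable name -> value
    child   : name of the node being scored
    parents : list of parent variable names (possibly empty)
    r       : dict mapping each variable name -> number of possible values
    """
    lg = math.lgamma  # lgamma(n) = log((n-1)!), avoids huge factorials
    # N_ijk: count of cases with parent configuration j and child value k.
    counts = Counter((tuple(case[p] for p in parents), case[child])
                     for case in data)
    score = 0.0
    r_i = r[child]
    # Enumerate every parent configuration j (a single empty one if no parents).
    configs = product(*(range(r[p]) for p in parents)) if parents else [()]
    for j in configs:
        n_ij = sum(counts[(j, k)] for k in range(r_i))
        # log[ (r_i - 1)! / (N_ij + r_i - 1)! ]
        score += lg(r_i) - lg(n_ij + r_i)
        # log[ prod_k N_ijk! ]
        for k in range(r_i):
            score += lg(counts[(j, k)] + 1)
    return score
```

Summing this term over all nodes (plus the log prior, a constant under their uniform-prior assumption) ranks candidate structures; their K2 search adds parents greedily one node at a time.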
Lam and Bacchus point out that the uniform prior would result in choosing a more complex model even if that model is only slightly more accurate. Empirical and theoretical results in machine learning suggest that less complex models are often better at generalizing to unseen cases when data is sparse. Allowing arbitrarily complex models may result in overfitting the model to the data.
Lam and Bacchus suggest as an alternative using the minimum description length (MDL) principle to bias the choice of model toward simpler ones. The minimum description length principle counsels that the best model of a collection of data items is the model that minimizes the sum of (a) the length of the encoding of the model, and (b) the length of the encoding of the data given the model, both of which can be measured in bits. Lam and Bacchus show that the encoding length of the data is a monotonically increasing function of the (Kullback-Leibler) cross-entropy between the distribution defined by the model and the true distribution, and they use this fact to guide the search for a network of minimum description length. Click here for additional discussion of the Lam and Bacchus paper.
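The two-part MDL trade-off can be made concrete with a small sketch. This is a generic instantiation, not Lam and Bacchus's exact encoding scheme: it charges (log2 N)/2 bits per free parameter for the model, and the negative log2-likelihood of the data under maximum-likelihood parameters for the data term. The function name `mdl_score` and the data layout are assumptions for the example.

```python
import math
from collections import Counter

def mdl_score(data, structure, r):
    """Total description length in bits of a network plus the data; smaller is better.

    data      : list of cases, each a dict mapping variable name -> value
    structure : dict mapping each variable to its list of parents
    r         : dict mapping each variable to its number of possible values
    """
    n = len(data)
    model_bits = 0.0
    data_bits = 0.0
    for child, parents in structure.items():
        q = 1
        for p in parents:
            q *= r[p]  # number of parent configurations
        # Model cost: (log2 n)/2 bits per free parameter of this node's
        # conditional probability table (q rows, r_child - 1 free entries each).
        model_bits += 0.5 * math.log2(n) * q * (r[child] - 1)
        # Data cost: code length under maximum-likelihood CPT estimates.
        joint = Counter((tuple(case[p] for p in parents), case[child])
                        for case in data)
        marg = Counter(tuple(case[p] for p in parents) for case in data)
        for case in data:
            j = tuple(case[p] for p in parents)
            data_bits -= math.log2(joint[(j, case[child])] / marg[j])
    return model_bits + data_bits
```

Adding an edge enlarges q for some node, so the model term grows; the edge is kept only if the shorter data encoding (equivalently, the reduced cross-entropy) more than pays for it. This is exactly the bias toward simpler networks that the uniform structure prior lacks.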
Buntine has developed a system that takes as input a set of priors over a set of possible models expressed as Bayesian networks and then constructs a computer program that learns from a database.