Dataset schema (one line per column):
content: string (lengths 633 to 9.91k)
label: string (7 classes)
category: string (7 classes)
dataset: string (1 value)
node_id: int64 (range 0 to 2.71k)
split: string (3 values)
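Read as typed records, the six columns above map onto a small helper. Below is a minimal parsing sketch in Python, assuming each row arrives as a plain dict keyed by the column names; the CoraRow class and parse_row function are illustrative names, not part of the dump.

```python
from dataclasses import dataclass

@dataclass
class CoraRow:
    content: str   # neighbor abstracts + target abstract + the question
    label: int     # category ID, 0-6
    category: str  # human-readable name for the label
    dataset: str   # source corpus name ("cora" throughout this dump)
    node_id: int   # node index in the citation graph
    split: str     # one of "train", "val", "test"

def parse_row(raw: dict) -> CoraRow:
    """Coerce one raw row into a typed record."""
    return CoraRow(
        content=raw["content"],
        label=int(raw["label"]),
        category=raw["category"],
        dataset=raw["dataset"],
        # Viewer exports sometimes render int64 with thousands separators.
        node_id=int(str(raw["node_id"]).replace(",", "")),
        split=raw["split"],
    )
```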
1-hop neighbor's text information: Exploratory Learning in the Game of GO: Initial Results. : This paper considers the importance of exploration to game-playing programs which learn by playing against opponents. The central question is whether a learning program should play the move which offers the best chance of winning the present game, or if it should play the move which has the best chance of providing useful information for future games. An approach to addressing this question is developed using probability theory, and then implemented in two different learning methods. Initial experiments in the game of Go suggest that a program which takes exploration into account can learn better against a knowledgeable opponent than a program which does not. 1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction: that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. 1-hop neighbor's text information: Associative reinforcement learning: A generate and test algorithm. : An agent that must learn to act in the world by trial and error faces the reinforcement learning problem, which is quite different from standard concept learning. Although good algorithms exist for this problem in the general case, they are often quite inefficient and do not exhibit generalization. One strategy is to find restricted classes of action policies that can be learned more efficiently. This paper pursues that strategy by developing an algorithm that performs an on-line search through the space of action mappings, expressed as Boolean formulae. The algorithm is compared with existing methods in empirical trials and is shown to have very good performance. Target text information: Learning functions in k-DNF from reinforcement. : An agent that must learn to act in the world by trial and error faces the reinforcement learning problem, which is quite different from standard concept learning. Although good algorithms exist for this problem in the general case, they are often quite inefficient and do not exhibit generalization. One strategy is to find restricted classes of action policies that can be learned more efficiently. This paper pursues that strategy by developing algorithms that can efficiently learn action maps that are expressible in k-DNF. The algorithms are compared with existing methods in empirical trials and are shown to have very good performance. I provide the content of the target node and its neighbors' information.
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
1701
test
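Each record's question spells out the same fixed ID-to-name mapping, while the answer fields write the names with spaces (the record above pairs label 5 with "Reinforcement Learning"). Below is a small consistency check under that observation; the function name is illustrative:

```python
# Category IDs exactly as listed in every record's question.
CATEGORIES = {
    0: "Rule_Learning",
    1: "Neural_Networks",
    2: "Case_Based",
    3: "Genetic_Algorithms",
    4: "Theory",
    5: "Reinforcement_Learning",
    6: "Probabilistic_Methods",
}

def label_matches_category(label: int, category: str) -> bool:
    # Answer fields use spaces ("Reinforcement Learning"); the question
    # uses underscores, so normalize before comparing.
    return CATEGORIES[label] == category.replace(" ", "_")
```

For the record above, label_matches_category(5, "Reinforcement Learning") returns True.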
1-hop neighbor's text information: Generalization in reinforcement learning: Safely approximating the value function. : To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization. 1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction: that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. 1-hop neighbor's text information: Reinforcement Learning for Job-Shop Scheduling, : We apply reinforcement learning methods to learn domain-specific heuristics for job shop scheduling. A repair-based scheduler starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule. The temporal difference algorithm TD(λ) is applied to train a neural network to learn a heuristic evaluation function over states. This evaluation function is used by a one-step lookahead search procedure to find good solutions to new scheduling problems. We evaluate this approach on synthetic problems and on problems from a NASA space shuttle payload processing task. The evaluation function is trained on problems involving a small number of jobs and then tested on larger problems. The TD scheduler performs better than the best known existing algorithm for this task: Zweben's iterative repair method based on simulated annealing. The results suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems. Target text information: Value Function Approximations and Job-Shop Scheduling: We report a successful application of TD(λ) with value function approximation to the task of job-shop scheduling. Our scheduling problems are based on the problem of scheduling payload processing steps for the NASA space shuttle program.
The value function is approximated by a 2-layer feedforward network of sigmoid units. A one-step lookahead greedy algorithm using the learned evaluation function outperforms the best existing algorithm for this task, which is an iterative repair method incorporating simulated annealing. To understand the reasons for this performance improvement, this paper introduces several measurements of the learning process and discusses several hypotheses suggested by these measurements. We conclude that the use of value function approximation is not a source of difficulty for our method, and in fact, it may explain the success of the method independent of the use of value iteration. Additional experiments are required to discriminate among our hypotheses. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
170
test
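The content field of each record is one flat string: neighbor abstracts introduced by the marker "1-hop neighbor's text information:", the target abstract after "Target text information:", and finally the question. Below is a minimal splitting sketch under that assumption; split_content is an illustrative helper, and the trailing question is left attached to the target portion:

```python
NEIGHBOR_MARK = "1-hop neighbor's text information:"
TARGET_MARK = "Target text information:"

def split_content(content: str) -> tuple[list[str], str]:
    """Split a row's content into neighbor abstracts and the target portion."""
    head, _, target_part = content.partition(TARGET_MARK)
    neighbors = [part.strip() for part in head.split(NEIGHBOR_MARK) if part.strip()]
    return neighbors, target_part.strip()
```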
1-hop neighbor's text information: "Mass Reconstruction with a Neural Network", : A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W ! q q, where W -bosons are produced in pp reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using "intelligent" variables in instances when the amount of training instances is limited. 1-hop neighbor's text information: "Finding Gluon Jets with a Neural trigger", : Using a neural network classifier we are able to separate gluon from quark jets originating from Monte Carlo generated e + e events with 85 90% accuracy. PACS numbers: 13.65.+i, 12.38Qk, 13.87.Fh 1-hop neighbor's text information: "Self-organizing Networks for Extracting Jet Features", : Self-organizing neural networks are briefly reviewed and compared with supervised learning algorithms like back-propagation. The power of self-organization networks is in their capability of displaying typical features in a transparent manner. This is successfully demonstrated with two applications from hadronic jet physics; hadronization model discrimination and separation of b,c and light quarks. Target text information: "Using Neural Networks to Identify Jets", Nucl. : A neural network method for identifying the ancestor of a hadron jet is presented. The idea is to find an efficient mapping between certain observed hadronic kinematical variables and the quark/gluon identity. This is done with a neuronic expansion in terms of a network of sigmoidal functions using a gradient descent procedure, where the errors are back-propagated through the network. With this method we are able to separate gluon from quark jets originating from Monte Carlo generated e + e events with ~ 85% accuracy. The result is independent on the MC model used. This approach for isolating the gluon jet is then used to study the so-called string effect. In addition, heavy quarks (b and c) in e + e reactions can be identified on the 50% level by just observing the hadrons. In particular we are able to separate b-quarks with an efficiency and purity, which is comparable with what is expected from vertex detectors. We also speculate on how the neural network method can be used to disentangle different hadronization schemes by compressing the dimensionality of the state space of hadrons. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2576
test
1-hop neighbor's text information: "The third generation of neural network models," : The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neuro biology. 1-hop neighbor's text information: "On the effect of analog noise on discrete-time analog computations", : We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the 1-hop neighbor's text information: "Neural networks with quadratic VC dimension," : This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles a long-standing open question, namely whether the well-known O(w log w) bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid gen eralization are discussed. Target text information: Vapnik-Chervonenkis dimension of neural nets. The Handbook of Brain Theory and Neural Networks (M. : Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedforward networks. However, recurrent networks are also widely used in learning applications, in particular when time is a relevant parameter. This paper provides lower and upper bounds for the VC dimension of such networks. Several types of activation functions are discussed, including threshold, polynomial, piecewise-polynomial and sigmoidal functions. The bounds depend on two independent parameters: the number w of weights in the network, and the length k of the input sequence. In contrast, for feedforward networks, VC dimension bounds can be expressed as a function of w only. An important difference between recurrent and feedforward nets is that a fixed recurrent net can receive inputs of arbitrary length. Therefore we are particularly interested in the case k w. Ignoring multiplicative constants, the main results say roughly the following: For architectures with activation = any fixed nonlinear polynomial, the VC dimension is wk. For architectures with activation = any fixed piecewise polynomial, the VC dimension is between wk and w 2 k. For architectures with activation = H (threshold nets), the VC dimension is between w log(k=w) and minfwk log wk; w 2 +w log wkg. Forthe standard sigmoid (x) = 1=(1 + e x ), the VC dimension is between wk and w 4 k 2 . 
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
43
test
1-hop neighbor's text information: Co-evolving soccer softbot team coordination with genetic programming. In RoboCup-97: The first robot world cup soccer games and conferences. : Genetic Programming is a promising new method for automatically generating functions and algorithms through natural selection. In contrast to other learning methods, Genetic Programming's automatic programming makes it a natural approach for developing algorithmic robot behaviors. In this paper we present an overview of how we apply Genetic Programming to behavior-based team coordination in the RoboCup Soccer Server domain. The result is not just a hand-coded soccer algorithm, but a team of softbots which have learned on their own how to play a reasonable game of soccer. 1-hop neighbor's text information: Incremental self-improvement for lifetime multi-agent reinforcement learning. : Previous approaches to multi-agent reinforcement learning are either very limited or heuristic by nature. The main reason is: each agent's or "animat's" environment continually changes because the other learning animats keep changing. Traditional reinforcement learning algorithms cannot properly deal with this. Their convergence theorems require repeatable trials and strong (typically Markovian) assumptions about the environment. In this paper, however, we use a novel, general, sound method for multiple reinforcement learning "animats", each living a single life with limited computational resources in an unrestricted, changing environment. The method is called "incremental self-improvement" (IS; Schmidhuber, 1994). IS properly takes into account that whatever some animat learns at some point may affect learning conditions for other animats or for itself at any later point. The learning algorithm of an IS-based animat is embedded in its own policy: the animat can not only improve its performance, but in principle also improve the way it improves, etc. At certain times in the animat's life, IS uses reinforcement/time ratios to estimate from a single training example (namely the entire life so far) which previously learned things are still useful, and selectively keeps them but gets rid of those that start appearing harmful. IS is based on an efficient, stack-based backtracking procedure which is guaranteed to make each animat's learning history a history of long-term reinforcement accelerations. Experiments demonstrate IS' effectiveness. In one experiment, IS learns a sequence of more and more complex function approximation problems. In another, a multi-agent system consisting of three co-evolving, IS-based animats chasing each other learns interesting, stochastic predator and prey strategies. 1-hop neighbor's text information: Markov games as a framework for multi-agent reinforcement learning. : In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
Target text information: Team-Partitioned, Opaque-Transition Reinforcement Learning: In this paper, we present a novel multi-agent learning paradigm called team-partitioned, opaque-transition reinforcement learning (TPOT-RL). TPOT-RL introduces the concept of using action-dependent features to generalize the state space. In our work, we use a learned action-dependent feature space. TPOT-RL is an effective technique to allow a team of agents to learn to cooperate towards the achievement of a specific goal. It is an adaptation of traditional RL methods that is applicable in complex, non-Markovian, multi-agent domains with large state spaces and limited training opportunities. Multi-agent scenarios are opaque-transition, as team members are not always in full communication with one another and adversaries may affect the environment. Hence, each learner cannot rely on having knowledge of future state transitions after acting in the world. TPOT-RL enables teams of agents to learn effective policies with very few training examples even in the face of a large state space with large amounts of hidden state. The main responsible features are: dividing the learning task among team members, using a very coarse, action-dependent feature space, and allowing agents to gather reinforcement directly from observation of the environment. TPOT-RL is fully implemented and has been tested in the robotic soccer domain, a complex, multi-agent framework. This paper presents the algorithmic details of TPOT-RL as well as empirical results demonstrating the effectiveness of the developed multi-agent learning approach with learned features. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
87
test
1-hop neighbor's text information: ON MCMC METHODS IN BAYESIAN REGRESSION ANALYSIS AND MODEL SELECTION: The objective of statistical data analysis is not only to describe the behaviour of a system, but also to propose, construct (and then to check) a model of observed processes. Bayesian methodology offers one of possible approaches to estimation of unknown components of the model (its parameters or functional components) in a framework of a chosen model type. However, in many instances the evaluation of Bayes posterior distribution (which is basal for Bayesian solutions) is difficult and practically intractable (even with the help of numerical approximations). In such cases the Bayesian analysis may be performed with the help of intensive simulation techniques called the `Markov chain Monte Carlo'. The present paper reviews the best known approaches to MCMC generation. It deals with several typical situations of data analysis and model construction where MCMC methods have been successfully applied. Special attention is devoted to the problem of selection of optimal regression model constructed from regression splines or from other functional units. Target text information: Monte Carlo approach to Bayesian regression modeling. In: Computer Intensive Methods in Control and Signal Processing. : In the framework of a functional response model (i.e. a regression model, or a feedforward neural network) an estimator of a nonlinear response function is constructed from a set of functional units. The parameters defining these functional units are estimated using the Bayesian approach. A sample representing the Bayesian posterior distribution is obtained by applying the Markov chain Monte Carlo procedure, namely the combination of Gibbs and Metropolis-Hastings algorithms. The method is described for histogram, B-spline and radial basis function estimators of a response function. In general, the proposed approach is suitable for finding Bayes-optimal values of parameters in a complicated parameter space. We illustrate the method on numerical examples. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1338
test
1-hop neighbor's text information: An investigation of marker-passing algorithms for analogue retrieval. : If analogy and case-based reasoning systems are to scale up to very large case bases, it is important to analyze the various methods used for retrieving analogues to identify the features of the problem for which they are appropriate. This paper reports on one such analysis, a comparison of retrieval by marker passing or spreading activation in a semantic network with Knowledge-Directed Spreading Activation, a method developed to be well-suited for retrieving semantically distant analogues from a large knowledge base. The analysis has two complementary components: (1) a theoretical model of the retrieval time based on a number of problem characteristics, and (2) experiments showing how the retrieval time of the approaches varies with the knowledge base size. These two components, taken together, suggest that KDSA is more likely than SA to be able to scale up to retrieval in large knowledge bases. 1-hop neighbor's text information: Case-based reasoning: Foundational issues, methodological variations, and system approaches. : 10 resources, Alan Schultz for installing a WWW server and providing knowledge on CGI scripts, and John Grefenstette for his comments on an earlier version of this paper. 1-hop neighbor's text information: A Memory Model for Case Retrieval by Activation Passing. : Target text information: Preparing Case Retrieval Nets for Distributed Processing: In this paper, we discuss two approaches of applying the memory model of Case Retrieval Nets to applications where a distributed processing of information is required. For this, we distinguish two types of such applications, namely (a) the case of distributed case libraries and (b) the case of distributed cases. While a solution to the former is straightforward, the latter requires an extension to Case Retrieval Nets which provides a kind of partitioning of the entire net structure. This extended model even allows for a concurrent implementation of the retrieval process or for the use of collaborative agents for retrieval. Keywords: Case-based reasoning, case retrieval, memory structures, distributed processing. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
410
test
1-hop neighbor's text information: Query by Committee, : We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms. 1-hop neighbor's text information: More Efficient Windowing: Windowing has been proposed as a procedure for efficient memory use in the ID3 decision tree learning algorithm. However, previous work has shown that windowing may often lead to a decrease in performance. In this work, we try to argue that separate-and-conquer rule learning algorithms are more appropriate for windowing than divide-and-conquer algorithms, because they learn rules independently and are less susceptible to changes in class distributions. In particular, we will present a new windowing algorithm that achieves additional gains in efficiency by exploiting this property of separate-and-conquer algorithms. While the presented algorithm is only suitable for redundant, noise-free data sets, we will also briefly discuss the problem of noisy data in windowing and present some preliminary ideas how it might be solved with an extension of the algorithm introduced in this paper. 1-hop neighbor's text information: Learning Trees and Rules with Set-valued Features. : In most learning systems examples are represented as fixed-length "feature vectors", the components of which are either real numbers or nominal values. We propose an extension of the feature-vector representation that allows the value of a feature to be a set of strings; for instance, to represent a small white and black dog with the nominal features size and species and the set-valued feature color, one might use a feature vector with size=small, species=canis-familiaris and color={white,black}. Since we make no assumptions about the number of possible set elements, this extension of the traditional feature-vector representation is closely connected to Blum's "infinite attribute" representation. We argue that many decision tree and rule learning algorithms can be easily extended to set-valued features. We also show by example that many real-world learning problems can be efficiently and naturally represented with set-valued features; in particular, text categorization problems and problems that arise in propositionalizing first-order representations lend themselves to set-valued features. Target text information: Heterogeneous uncertainty sampling for supervised learning. : Uncertainty sampling methods iteratively request class labels for training instances whose classes are uncertain despite the previous labeled instances. These methods can greatly reduce the number of instances that an expert need label. One problem with this approach is that the classifier best suited for an application may be too expensive to train or use during the selection of instances.
We test the use of one classifier (a highly efficient probabilistic one) to select examples for training another (the C4.5 rule induction program). Despite being chosen by this heterogeneous approach, the uncertainty samples yielded classifiers with lower error rates than random samples ten times larger. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1208
test
1-hop neighbor's text information: A theory of inferred causation. : This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation. 1-hop neighbor's text information: A statistical semantics for causation. : We propose a model-theoretic definition of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a complete characterization of the conditions under which such a distinction is possible. Finally, we provide a proof-theoretical procedure for inductive causation and show that, for a large class of data and structures, effective algorithms exist that uncover the direction of causal influences as defined above. 1-hop neighbor's text information: Causal inference, path analysis, and recursive structural equations models. : [Lipid Research Clinic Program 84] Lipid Research Clinic Program. The Lipid Research Clinics Coronary Primary Prevention Trial results, parts I and II. Journal of the American Medical Association, 251(3):351-374, January 1984. [Pearl 93] Judea Pearl. Aspects of graphical models connected with causality. Technical Report R-195-LL, Cognitive Systems Laboratory, UCLA, June 1993. Submitted to Biometrika (June 1993). Short version in Proceedings of the 49th Session of the International Statistical Institute: Invited papers, Florence, Italy, August 1993, Tome LV, Book 1, pp. 391-401. Target text information: Experiments with a regression-based causal induction algorithm. EKSL memo number 94-33, : Covariance information can help an algorithm search for predictive causal models and estimate the strengths of causal relationships. This information should not be discarded after conditional independence constraints are identified, as is usual in contemporary causal induction algorithms. Our fbd algorithm combines covariance information with an effective heuristic to build predictive causal models. We demonstrate that fbd is accurate and efficient. In one experiment we assess fbd's ability to find the best predictors for variables; in another we compare its performance, using many measures, with Pearl and Verma's ic algorithm. And although fbd is based on multiple linear regression, we cite evidence that it performs well on problems that are very difficult for regression algorithms. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1942
test
1-hop neighbor's text information: A machine learning library in C++. : We present MLC++, a library of C++ classes and tools for supervised Machine Learning. While MLC++ provides general learning algorithms that can be used by end users, the main objective is to provide researchers and experts with a wide variety of tools that can accelerate algorithm development, increase software reliability, provide comparison tools, and display information visually. More than just a collection of existing algorithms, MLC++ is an attempt to extract commonalities of algorithms and decompose them for a unified view that is simple, coherent, and extensible. In this paper we discuss the problems MLC++ aims to solve, the design of MLC++, and the current functionality. 1-hop neighbor's text information: Feature subset selection as search with probabilistic estimates. : Irrelevant features and weakly relevant features may reduce the comprehensibility and accuracy of concepts induced by supervised learning algorithms. We formulate the search for a feature subset as an abstract search problem with probabilistic estimates. Searching a space using an evaluation function that is a random variable requires trading off accuracy of estimates for increased state exploration. We show how recent feature subset selection algorithms in the machine learning literature fit into this search problem as simple hill climbing approaches, and conduct a small experiment using a best-first search technique. 1-hop neighbor's text information: Visualizing the simple Bayesian classifier. In KDD Workshop on Issues in the Integration of Data Mining and Data Visualization. : The simple Bayesian classifier (SBC), sometimes called Naive-Bayes, is built based on a conditional independence model of each attribute given the class. The model was previously shown to be surprisingly robust to obvious violations of this independence assumption, yielding accurate classification models even when there are clear conditional dependencies. The SBC can serve as an excellent tool for initial exploratory data analysis when coupled with a visualizer that makes its structure comprehensible. We describe such a visual representation of the SBC model that has been successfully implemented. We describe the requirements we had for such a visualization and the design decisions we made to satisfy them. Target text information: Feature subset selection using the wrapper method: Overfitting and dynamic search space. : In the wrapper approach to feature subset selection, a search for an optimal set of features is made using the induction algorithm as a black box. The estimated future performance of the algorithm is the heuristic guiding the search. Statistical methods for feature subset selection including forward selection, backward elimination, and their stepwise variants can be viewed as simple hill-climbing techniques in the space of feature subsets. We utilize best-first search to find a good feature subset and discuss overfitting problems that may be associated with searching too many feature subsets. We introduce compound operators that dynamically change the topology of the search space to better utilize the information available from the evaluation of feature subsets. We show that compound operators unify previous approaches that deal with relevant and irrelevant features. The improved feature subset selection yields significant improvements for real-world datasets when using the ID3 and the Naive-Bayes induction algorithms.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1641
test
1-hop neighbor's text information: 'Machine learning in prognosis of the femoral neck fracture recovery', : We compare the performance of several machine learning algorithms in the problem of prognostics of the femoral neck fracture recovery: the K-nearest neighbours algorithm, the semi-naive Bayesian classifier, backpropagation with weight elimination learning of the multilayered neural networks, the LFC (lookahead feature construction) algorithm, and the Assistant-I and Assistant-R algorithms for top down induction of decision trees using information gain and RELIEFF as search heuristics, respectively. We compare the prognostic accuracy and the explanation ability of different classifiers. Among the different algorithms the semi-naive Bayesian classifier and Assistant-R seem to be the most appropriate. We analyze the combination of decisions of several classifiers for solving prediction problems and show that the combined classifier improves both performance and the explanation ability. 1-hop neighbor's text information: Using sampling and queries to extract rules from trained neural networks. : Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M-of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches. 1-hop neighbor's text information: Knowledge Integration and Rule Extraction in Neural Networks Ph.D. Proposal: Target text information: Learning symbolic rules using artificial neural networks. : A distinct advantage of symbolic learning algorithms over artificial neural networks is that typically the concept representations they form are more easily understood by humans. One approach to understanding the representations formed by neural networks is to extract symbolic rules from trained networks. In this paper we describe and investigate an approach for extracting rules from networks that uses (1) the NofM extraction algorithm, and (2) the network training method of soft weight-sharing. Previously, the NofM algorithm had been successfully applied only to knowledge-based neural networks. Our experiments demonstrate that our extracted rules generalize better than rules learned using the C4.5 system. In addition to being accurate, our extracted rules are also reasonably comprehensible. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1491
val
1-hop neighbor's text information: The EM algorithm for mixtures of factor analyzers. : Technical Report CRG-TR-96-1 May 21, 1996 (revised Feb 27, 1997) Abstract Factor analysis, a statistical method for modeling the covariance structure of high dimensional data using a small number of latent variables, can be extended by allowing different local factor models in different regions of the input space. This results in a model which concurrently performs clustering and dimensionality reduction, and can be thought of as a reduced dimension mixture of Gaussians. We present an exact Expectation-Maximization algorithm for fitting the parameters of this mixture of factor analyzers. 1-hop neighbor's text information: TK (1994). Fast non-linear dimension reduction. : We present a fast algorithm for non-linear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distortion than those built by five layer auto-associative networks. The local linear algorithm is also more than an order of magnitude faster to train. 1-hop neighbor's text information: "Using generative models for handwritten digit recognition", : Target text information: "Recognizing handwritten digits using mixtures of linear models", : We construct a mixture of locally linear generative models of a collection of pixel-based images of digits, and use them for recognition. Different models of a given digit are used to capture different styles of writing, and new images are classified by evaluating their log-likelihoods under each model. We use an EM-based algorithm in which the M-step is computationally straightforward principal components analysis (PCA). Incorporating tangent-plane information [12] about expected local deformations only requires adding tangent vectors into the sample covariance matrices for the PCA, and it demonstrably improves performance. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
932
val
1-hop neighbor's text information: Multiagent reinforcement learning: Theoretical framework and an algorithm. : In this paper, we adopt general-sum stochastic games as a framework for multiagent reinforcement learning. Our work extends previous work by Littman on zero-sum stochastic games to a broader framework. We design a multiagent Q-learning method under this framework, and prove that it converges to a Nash equilibrium under specified conditions. This algorithm is useful for finding the optimal strategy when there exists a unique Nash equilibrium in the game. When there exist multiple Nash equilibria in the game, this algorithm should be combined with other learning techniques to find optimal strategies. 1-hop neighbor's text information: Integrating motor schemas and reinforcement learning. : Clay is an evolutionary architecture for autonomous robots that integrates motor schema-based control and reinforcement learning. Robots utilizing Clay benefit from the real-time performance of motor schemas in continuous and dynamic environments while taking advantage of adaptive reinforcement learning. Clay coordinates assemblages (groups of motor schemas) using embedded reinforcement learning modules. The coordination modules activate specific assemblages based on the presently perceived situation. Learning occurs as the robot selects assemblages and samples a reinforcement signal over time. Experiments in a robot soccer simulation illustrate the performance and utility of the system. Target text information: Learning roles: Behavioral diversity in robot teams. : This paper describes research investigating behavioral specialization in learning robot teams. Each agent is provided a common set of skills (motor schema-based behavioral assemblages) from which it builds a task-achieving strategy using reinforcement learning. The agents learn individually to activate particular behavioral assemblages given their current situation and a reward signal. The experiments, conducted in robot soccer simulations, evaluate the agents in terms of performance, policy convergence, and behavioral diversity. The results show that in many cases, robots will automatically diversify by choosing heterogeneous behaviors. The degree of diversification and the performance of the team depend on the reward structure. When the entire team is jointly rewarded or penalized (global reinforcement), teams tend towards heterogeneous behavior. When agents are provided feedback individually (local reinforcement), they converge to identical policies. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
1418
train
1-hop neighbor's text information: Using qualitative relationships for bounding probability distributions. : We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well. 1-hop neighbor's text information: "Bucket elimination: A unifying framework for probabilistic inference," : Probabilistic inference algorithms for finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility and for updating belief are reformulated as an elimination-type algorithm called bucket elimination. This emphasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining conditioning and elimination within this framework. Bounds on complexity are given for all the algorithms as a function of the problem's structure. 1-hop neighbor's text information: Operations for learning with graphical models. : This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. Target text information: Robust analysis of Bayesian networks with finitely generated convex sets of distributions. : This paper presents exact solutions and convergent approximations for inferences in Bayesian networks associated with finitely generated convex sets of distributions. Robust Bayesian inference is the calculation of bounds on posterior values given perturbations in a probabilistic model. The paper presents exact inference algorithms and analyzes the circumstances where exact inference becomes intractable. Two classes of algorithms for numeric approximations are developed through transformations on the original model. The first transformation reduces the robust inference problem to the estimation of probabilistic parameters in a Bayesian network.
The second transformation uses Lavine's bracketing algorithm to generate a sequence of maximization problems in a Bayesian network. The analysis is extended to the ε-contaminated, the lower density bounded, the belief function, the sub-sigma, the density bounded, the total variation and the density ratio classes of distributions. © 1996 Carnegie Mellon University I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1124
test
1-hop neighbor's text information: A theory of inferred causation. : This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation. 1-hop neighbor's text information: On estimation of a probability density function and mode. : To apply the algorithm for classification we assign each class a separate set of codebook Gaussians. Each set is only trained with patterns from a single class. After having trained the codebook Gaussians, each set provides an estimate of the probability function of one class; just as with Parzen window estimation, we take as the estimate of the pattern distribution the average of all Gaussians in the set. Classification of a pattern may now be done by calculating the probability of each class at the respective sample point, and assigning to the pattern the class with the highest probability. Hence the whole codebook plays a role in the classification of patterns. This is not the case with regular classification schemes using codebooks. We have tested the classification scheme on several classification tasks including the two spiral problem. We compared our algorithm to various other classification algorithms and it came out second; the best algorithm for the applications is the Parzen window estimation. However, the computing time and memory for Parzen window estimation are excessive when compared to our algorithm, and hence, in practical situations, our algorithm is to be preferred. We have developed a fast algorithm which combines attractive properties of both Parzen window estimation and vector quantization. The scale parameter is tuned adaptively and, therefore, is not set in an ad hoc manner. It allows a classification strategy in which all the codebook vectors are taken into account. This yields better results than the standard vector quantization techniques. An interesting topic for further research is to use radially non-symmetric Gaussians. 1-hop neighbor's text information: Massively parallel case-based reasoning with probabilistic similarity metrics. In Topics in Case-Based Reasoning, : We propose a probabilistic case-space metric for the case matching and case adaptation tasks. Central to our approach is a probability propagation algorithm adopted from Bayesian reasoning systems, which allows our case-based reasoning system to perform theoretically sound probabilistic reasoning. The same probability propagation mechanism actually offers a uniform solution to both the case matching and case adaptation problems. We also show how the algorithm can be implemented as a connectionist network, where efficient massively parallel case retrieval is an inherent property of the system. We argue that using this kind of an approach, the difficult problem of case indexing can be completely avoided. Pp. 144-154 in Topics in Case-Based Reasoning, edited by Stefan Wess, Klaus-Dieter Althoff and Michael M. Richter.
Volume 837, Lecture Target text information: MDL learning of probabilistic neural networks for discrete problem domains. : Given a problem, a case-based reasoning (CBR) system will search its case memory and use the stored cases to find the solution, possibly modifying retrieved cases to adapt to the required input specifications. In discrete domains CBR reasoning can be based on a rigorous Bayesian probability propagation algorithm. Such a Bayesian CBR system can be implemented as a probabilistic feedforward neural network with one of the layers representing the cases. In this paper we introduce a Minimum Description Length (MDL) based learning algorithm to obtain the proper network structure with the associated conditional probabilities. This algorithm together with the resulting neural network implementation provide a massively parallel architecture for solving the efficiency bottleneck in case-based reasoning. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1565
test
1-hop neighbor's text information: Applying machine learning to agricultural data. : Many techniques have been developed for learning rules and relationships automatically from diverse data sets, to simplify the often tedious and error-prone process of acquiring knowledge from empirical data. While these techniques are plausible, theoretically well-founded, and perform well on more or less artificial test data sets, they depend on their ability to make sense of real-world data. This paper describes a project that is applying a range of machine learning strategies to problems in agriculture and horticulture. We briefly survey some of the techniques emerging from machine learning research, describe a software workbench for experimenting with a variety of techniques on real-world data sets, and describe a case study of dairy herd management in which culling rules were inferred from a mediumsized database of herd information. 1-hop neighbor's text information: "Learning structural descriptions from examples." : Target text information: R.S. EMERALD: An Integrated System of Machine Learning and Discovery Programs to Support Education and Experimental Research. Reports of the Machine Learning and Inference Laboratory, MLI 93-10, Machine Learning and Inference Laboratory, : With the rapid expansion of machine learning methods and applications, there is a strong need for computer-based interactive tools that support education in this area. The EMERALD system was developed to provide hands-on experience and an interactive demonstration of several machine learning and discovery capabilities for students in AI and cognitive science, and for AI professionals. The current version of EMERALD integrates five programs that exhibit different types of machine learning and discovery: learning rules from examples, determining structural descriptions of object classes, inventing conceptual clusterings of entities, predicting sequences of objects, and discovering equations characterizing collections of quantitative and qualitative data. EMERALD extensively uses color graphic capabilities, voice synthesis, and a natural language representation of the knowledge acquired by the learning programs. Each program is presented as a "learning robot," which has its own "personality," expressed by its icon, its voice, the comments it generates during the learning process, and the results of learning presented as natural language text and/or voice output. Users learn about the capabilities of each "robot" both by being challenged to perform some learning tasks themselves, and by creating their own similar tasks to challenge the "robot." EMERALD is an extension of ILLIAN, an initial, much smaller version that toured eight major US Museums of Science, and was seen by over half a million visitors. EMERALD's architecture allows it to incorporate new programs and new capabilities. The system runs on SUN workstations, and is available to universities and educational institutions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,427
test
1-hop neighbor's text information: Coordinating Reactive Behaviors keywords: reactive systems, planning and learning: Combining reactivity with planning has been proposed as a means of compensating for potentially slow response times of planners while still making progress toward long-term goals. The demands of rapid response and the complexity of many environments make it difficult to decompose, tune and coordinate reactive behaviors while ensuring consistency. Neural networks can address the tuning problem, but are less useful for decomposition and coordination. We hypothesize that interacting reactions can be decomposed into separate behaviors resident in separate networks and that the interaction can be coordinated through the tuning mechanism and a higher-level controller. To explore these issues, we have implemented a neural network architecture as the reactive component of a two-layer control system for a simulated race car. By varying the architecture, we test whether decomposing reactivity into separate behaviors leads to superior overall performance, coordination and learning convergence. 1-hop neighbor's text information: "Induction of Decision Trees," : 1-hop neighbor's text information: Combining neural and symbolic learning to revise probabilistic rule bases. : This paper describes Rapture, a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a probabilistic rule base, and it uses ID3's information-gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods. Target text information: A Framework for Combining Symbolic and Neural Learning. In: Artificial Intelligence and Neural Networks: Steps Toward Principled Integration. Honavar, : Technical Report 1123, Computer Sciences Department, University of Wisconsin - Madison, Nov. 1992 ABSTRACT This article describes an approach to combining symbolic and connectionist approaches to machine learning. A three-stage framework is presented and the research of several groups is reviewed with respect to this framework. The first stage involves the insertion of symbolic knowledge into neural networks, the second addresses the refinement of this prior knowledge in its neural representation, while the third concerns the extraction of the refined symbolic knowledge. Experimental results and open research issues are discussed. A shorter version of this paper will appear in Machine Learning. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,025
test
1-hop neighbor's text information: PAC-learning recursive logic programs: Efficient algorithms. : We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "base case" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs. 1-hop neighbor's text information: Inverse Entailment and Progol. : This paper firstly provides a re-appraisal of the development of techniques for inverting deduction, secondly introduces Mode-Directed Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches and thirdly describes an implementation of MDIE in the Progol system. Progol is implemented in C and available by anonymous ftp. The re-assessment of previous techniques in terms of inverse entailment leads to new results for learning from positive data and inverting implication between pairs of clauses. Target text information: Which Hypotheses Can Be Found with Inverse Entailment? -Extended Abstract: In this paper we give a completeness theorem for the inductive inference rule inverse entailment, proposed by Muggleton. Our main result is that a hypothesis clause H can be derived from an example E under a background theory B with inverse entailment iff H subsumes E relative to B in Plotkin's sense. The theory B can be any clausal theory, and the example E can be any clause which is neither a tautology nor implied by B. The derived hypothesis H is a clause which is not always definite. In order to prove the result we give a declarative semantics for arbitrary consistent clausal theories, and show that SB-resolution, which was originally introduced by Plotkin, is a complete procedural semantics. The completeness is shown as an extension of the completeness theorem of SLD-resolution. We also show that every hypothesis H derived with saturant generalization, proposed by Rouveirol, must subsume E w.r.t. B in Buntine's sense. Moreover we show that saturant generalization can be obtained from inverse entailment by suitably restricting its usage. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
322
train
1-hop neighbor's text information: Incremental induction of decision trees. : Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. Target text information: What DaimlerBenz has learned as an industrial partner from the machine learning project Statlog. Working Notes for Applying Machine Learning in Practice: : The author of this paper was co-ordinator of the Machine Learning project StatLog during 1990-1993. This project was supported financially by the European Community. The main aim of StatLog was to evaluate different learning algorithms using real industrial and commercial applications. As an industrial partner and contributor, Daimler-Benz has introduced different applications to StatLog, among them fault diagnosis, letter and digit recognition, credit-scoring and prediction of the number of registered trucks. We have learned a lot of lessons from this project which have affected our application-oriented research in the field of Machine Learning (ML) in Daimler-Benz. In particular, we have found that more research is necessary to prepare ML algorithms to handle real industrial and commercial applications. In this paper we briefly describe the Daimler-Benz applications in StatLog, discuss shortcomings of the applied ML algorithms, and finally outline the fields where we think further research is necessary. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,602
train
1-hop neighbor's text information: On Computing the Largest Fraction of Missing Information for the EM Algorithm and the Worst: We address the problem of computing the largest fraction of missing information for the EM algorithm and the worst linear function for data augmentation. These are the largest eigenvalue and its associated eigenvector for the Jacobian of the EM operator at a maximum likelihood estimate, which are important for assessing convergence in iterative simulation. An estimate of the largest fraction of missing information is available from the EM iterates; this is often adequate since only a few figures of accuracy are needed. In some instances the EM iteration also gives an estimate of the worst linear function. We show that the power method for eigencomputation can be used to compute efficient and accurate estimates of both quantities. Unlike eigenvalue decomposition, the power method computes only the largest eigenvalue and eigenvector of a matrix; it can take advantage of a good eigenvector estimate as an initial value and it can be terminated after only a few figures of accuracy are obtained. Moreover, the matrix products needed in the power method can be computed by extrapolation, obviating the need to form the Jacobian of the EM operator. We give results of simulation studies on multivariate normal data showing that, as the data dimension increases, this approach becomes more efficient than methods that use a finite-difference approximation to the Jacobian, which is the only general-purpose alternative available. * Funded by National Institutes of Health Small Business Innovation Research Grant 5R44CA65147-03, and by Office of Naval Research contracts N00014-96-1-0192 and N00014-96-1-0330. We are indebted to Tim Hesterberg, Jim Schimert, Doug Clarkson, Anne Greenbaum, and Adrian Raftery for comments and discussion that helped advance this research and improve this paper. 1-hop neighbor's text information: Training algorithms for hidden Markov models using entropy-based distance functions. : We present new algorithms for parameter estimation of HMMs. By adapting a framework used for supervised learning, we construct iterative algorithms that maximize the likelihood of the observations while also attempting to stay close to the current estimated parameters. We use a bound on the relative entropy between the two HMMs as a distance measure between them. The result is new iterative training algorithms which are similar to the EM (Baum-Welch) algorithm for training HMMs. The proposed algorithms are composed of a step similar to the expectation step of Baum-Welch and a new update of the parameters which replaces the maximization (re-estimation) step. The algorithm takes only negligibly more time per iteration and an approximated version uses the same expectation step as Baum-Welch. We evaluate experimentally the new algorithms on synthetic and natural speech pronunciation data. For sparse models, i.e., models with a relatively small number of non-zero parameters, the proposed algorithms require significantly fewer iterations. 1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's).
Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. Target text information: On convergence properties of the EM algorithm for Gaussian mixtures. : We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,161
val
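Since the target abstract above analyzes EM for Gaussian mixtures, a minimal EM loop makes the E-step/M-step alternation concrete. This is a generic one-dimensional sketch under standard assumptions (known component count k, no variance floor), not the paper's convergence analysis.

import numpy as np

def em_gmm_1d(x, k, iters=50, seed=0):
    # EM for a 1-D Gaussian mixture: the E-step computes responsibilities,
    # the M-step re-estimates weights, means, and variances.
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: r[i, j] proportional to w[j] * N(x[i]; mu[j], var[j])
        d = x[:, None] - mu[None, :]
        log_r = np.log(w) - 0.5 * (np.log(2 * np.pi * var) + d ** 2 / var)
        log_r -= log_r.max(axis=1, keepdims=True)   # numerical stabilization
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood re-estimates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
    return w, mu, var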
1-hop neighbor's text information: A defect in Dempster-Shafer theory. : By analyzing the relationships among chance, weight of evidence and degree of belief, it is shown that the assertion "chances are special cases of belief functions" and the assertion "Dempster's rule can be used to combine belief functions based on distinct bodies of evidence" together lead to an inconsistency in Dempster-Shafer theory. To solve this problem, some fundamental postulates of the theory must be rejected. A new approach for uncertainty management is introduced, which shares many intuitive ideas with D-S theory, while avoiding this problem. 1-hop neighbor's text information: Non-axiomatic reasoning system (version 2.2). : NARS uses a new form of term logic, or an extended syllogism, in which several types of uncertainties can be represented and processed, and in which deduction, induction, abduction, and revision are carried out in a unified format. The system works in an asynchronously parallel way. The memory of the system is dynamically organized, and can also be interpreted as a network. 1-hop neighbor's text information: Reference classes and multiple inheritances. : The reference class problem in probability theory and the multiple inheritances (extensions) problem in non-monotonic logics can be referred to as special cases of conflicting beliefs. The current solution accepted in the two domains is the specificity priority principle. By analyzing an example, several factors (ignored by the principle) are found to be relevant to the priority of a reference class. A new approach, Non-Axiomatic Reasoning System (NARS), is discussed, where these factors are all taken into account. It is argued that the solution provided by NARS is better than the solutions provided by probability theory and non-monotonic logics. Target text information: From inheritance relation to non-axiomatic logic. : At the beginning of the paper, three binary term logics are defined. The first is based only on an inheritance relation. The second and the third suggest a novel way to process extension and intension, and they also have interesting relations with Aristotle's syllogistic logic. Based on the three simple systems, a Non-Axiomatic Logic is defined. It has a term-oriented language and an experience-grounded semantics. It can uniformly represent and process randomness, fuzziness, and ignorance. It can also uniformly carry out deduction, abduction, induction, and revision. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,064
test
1-hop neighbor's text information: Integrated Architectures for Learning, Planning and Reacting Based on Approximating Dynamic Programming, : This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments. 1-hop neighbor's text information: Learning to use selective attention and short-term memory in sequential tasks. : This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or memory-based) learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates only task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [Ron et al., 1994], Parti-game [Moore, 1993], G-algorithm [Chapman and Kaelbling, 1991], and Variable Resolution Dynamic Programming [Moore, 1991]. It builds on Utile Suffix Memory [McCallum, 1995c], which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with simulated eye movements. The environment has hidden state, time pressure, stochasticity, over 21,000 world states and over 2,500 percepts. From this environment and sensory system, the agent uses a utile distinction test to build a tree that represents depth-three memory where necessary, and has just 143 internal states, far fewer than the 2500^3 states that would have resulted from a fixed-sized history-window approach. 1-hop neighbor's text information: Learning to Act using Real-Time Dynamic Programming. : * The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article.
Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526). Target text information: Category: Control, Navigation and Planning. Key words: Reinforcement learning, Exploration, Hidden state. Prefer oral presentation.: This paper presents Fringe Exploration, a technique for efficient exploration in partially observable domains. The key idea (applicable to many exploration techniques) is to keep statistics in the space of possible short-term memories, instead of in the agent's current state space. Experimental results in a partially observable maze and in a difficult driving task with visual routines show dramatic performance improvements. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
149
test
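For readers unfamiliar with the Q-learning that Dyna-Q and the target above build on, here is the tabular update in isolation. The env interface (reset, step, actions) is a hypothetical minimal one, and this plain epsilon-greedy version includes none of the fringe-exploration statistics described in the abstract.

import random
from collections import defaultdict

def q_learning(env, episodes, alpha=0.1, gamma=0.95, eps=0.1):
    # Tabular Q-learning with epsilon-greedy exploration.
    # Assumed (hypothetical) interface: env.reset() -> state,
    # env.step(a) -> (next_state, reward, done), env.actions -> list.
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            best_next = 0.0 if done else max(Q[(s2, act)] for act in env.actions)
            # One-step temporal-difference update toward r + gamma * max_a' Q(s', a')
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q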
1-hop neighbor's text information: Learning Concept Classification Rules Using Genetic Algorithms. : In this paper, we explore the use of genetic algorithms (GAs) as a key element in the design and implementation of robust concept learning systems. We describe and evaluate a GA-based system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The use of GAs is motivated by recent studies showing the effects of various forms of bias built into different concept learning systems, resulting in systems that perform well on certain concept classes (generally, those well matched to the biases) and poorly on others. By incorporating a GA as the underlying adaptive search mechanism, we are able to construct a concept learning system that has a simple, unified architecture with several important features. First, the system is surprisingly robust even with minimal bias. Second, the system can be easily extended to incorporate traditional forms of bias found in other concept learning systems. Finally, the architecture of the system encourages explicit representation of such biases and, as a result, provides for an important additional feature: the ability to dynamically adjust system bias. The viability of this approach is illustrated by comparing the performance of GABIL with that of four other more traditional concept learners (AQ14, C4.5, ID5R, and IACL) on a variety of target concepts. We conclude with some observations about the merits of this approach and about possible extensions. Target text information: A simpler look at consistency. : One of the major goals of most early concept learners was to find hypotheses that were perfectly consistent with the training data. It was believed that this goal would indirectly achieve a high degree of predictive accuracy on a set of test data. Later research has partially disproved this belief. However, the issue of consistency has not yet been resolved completely. We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,938
test
1-hop neighbor's text information: Combining Inductive Learning with Prior Knowledge and Reasoning. : Much effort has been devoted to understanding learning and reasoning in artificial intelligence. However, very few models attempt to integrate these two complementary processes. Rather, there is a vast body of research in machine learning, often focusing on inductive learning from examples, quite isolated from the work on reasoning in artificial intelligence. Though these two processes may be different, they are very much interrelated. The ability to reason about a domain of knowledge is often based on rules about that domain, which must be learned somehow. And the ability to reason can often be used to acquire new knowledge, or learn. This paper introduces an Incremental Learning Algorithm (ILA) that attempts to combine inductive learning with prior knowledge and reasoning. ILA has many important characteristics useful for such a combination, including: 1) incremental, self-organizing learning, 2) nonuniform learning, 3) inherent non-monotonicity, 4) extensional and intensional capabilities, and 5) low-order polynomial complexity. The paper describes ILA, gives simulation results for several applications, and discusses each of the above characteristics in detail. 1-hop neighbor's text information: Martinez (1993). Using Precepts to Augment Training Set Learning. : are used in turn to approximate A. Empirical studies show that good results can be achieved with TSL [8, 11]. However, TSL has several drawbacks. Training set learners (e.g., backpropagation) are typically slow as they may require many passes over the training set. Also, there is no guarantee that, given an arbitrary training set, the system will find enough good critical features to get a reasonable approximation of A. Moreover, the number of features to be searched is exponential in the number of inputs, and TSL becomes computationally expensive [1]. Finally, the scarcity of interesting positive theoretical results suggests the difficulty of learning without sufficient a priori knowledge. The goal of learning systems is to generalize. Generalization is commonly based on the set of critical features the system has available. Training set learners typically extract critical features from a random set of examples. While this approach is attractive, it suffers from the exponential growth of the number of features to be searched. We propose to extend it by endowing the system with some a priori knowledge, in the form of precepts. Advantages of the augmented system are speedup, improved generalization, and greater parsimony. This paper presents a precept-driven learning algorithm. Its main features include: 1) distributed implementation, 2) bounded learning and execution times, and 3) ability to handle both correct and incorrect precepts. Results of simulations on real-world data demonstrate promise. This paper presents precept-driven learning (PDL). PDL is intended to overcome some of TSL's weaknesses. In PDL, the training set is augmented by a small set of precepts. A pair p = (i, o) in I × O is called an example. A precept is an example in which some of the i-entries (inputs) are set to the special value don't-care. An input whose value is not don't-care is said to be asserted; a don't-care input has no effect on the value of the output. The use of the special value don't-care is therefore a shorthand. A pair containing don't-care inputs represents as many examples as the product of the sizes of the input domains of its don't-care inputs.
1-hop neighbor's text information: A hybrid nearest-neighbor and nearest-hyperrectangle algorithm. : Algorithms based on Nested Generalized Exemplar (NGE) theory (Salzberg, 1991) classify new data points by computing their distance to the nearest "generalized exemplar" (i.e., either a point or an axis-parallel rectangle). They combine the distance-based character of nearest neighbor (NN) classifiers with the axis-parallel rectangle representation employed in many rule-learning systems. An implementation of NGE was compared to the k-nearest neighbor (kNN) algorithm in 11 domains and found to be significantly inferior to kNN in 9 of them. Several modifications of NGE were studied to understand the cause of its poor performance. These show that its performance can be substantially improved by preventing NGE from creating overlapping rectangles, while still allowing complete nesting of rectangles. Performance can be further improved by modifying the distance metric to allow weights on each of the features (Salzberg, 1991). Best results were obtained in this study when the weights were computed using mutual information between the features and the output class. The best version of NGE developed is a batch algorithm (BNGE FW MI) that has no user-tunable parameters. BNGE FW MI's performance is comparable to the first-nearest neighbor algorithm (also incorporating feature weights). However, the k-nearest neighbor algorithm is still significantly superior to BNGE FW MI in 7 of the 11 domains, and inferior to it in only 2. We conclude that, even with our improvements, the NGE approach is very sensitive to the shape of the decision boundaries in classification problems. In domains where the decision boundaries are axis-parallel, the NGE approach can produce excellent generalization with interpretable hypotheses. In all domains tested, NGE algorithms require much less memory to store generalized exemplars than is required by NN algorithms. Target text information: An Efficient Metric for Heterogeneous Inductive Learning Applications in the Attribute-Value Language. : Many inductive learning problems can be expressed in the classical attribute-value language. In order to learn and to generalize, learning systems often rely on some measure of similarity between their current knowledge base and new information. The attribute-value language defines a heterogeneous multidimensional input space, where some attributes are nominal and others linear. Defining similarity, or proximity, of two points in such input spaces is non-trivial. We discuss two representative homogeneous metrics and show examples of why they are limited to their own domains. We then address the issues raised by the design of a heterogeneous metric for inductive learning systems. In particular, we discuss the need for normalization and the impact of don't-care values. We propose a heterogeneous metric and evaluate it empirically on a simplified version of ILA. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,894
val
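The target's heterogeneous metric motivates a concrete sketch. The function below mixes overlap distance on nominal attributes with range-normalized difference on linear ones, and lets don't-care values contribute zero distance; these are plausible design choices in the spirit of the abstract, not the paper's exact metric.

def heterogeneous_distance(x, y, nominal, ranges, dont_care="?"):
    # x, y: attribute vectors; nominal: set of nominal attribute indices;
    # ranges[i]: max - min of linear attribute i (for normalization).
    total = 0.0
    for i, (a, b) in enumerate(zip(x, y)):
        if a == dont_care or b == dont_care:
            d = 0.0                      # one design choice: don't-care matches anything
        elif i in nominal:
            d = 0.0 if a == b else 1.0   # overlap distance on nominal attributes
        else:
            d = abs(a - b) / ranges[i]   # range-normalized difference on linear ones
        total += d * d
    return total ** 0.5

Treating a don't-care value as a perfect match is only one option; charging a fixed penalty instead biases retrieval toward fully specified cases.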
1-hop neighbor's text information: DISTRIBUTED GENETIC ALGORITHMS FOR PARTITIONING UNIFORM GRIDS: 1-hop neighbor's text information: Genetic Algorithms as Multi-Coordinators in Large-Scale Optimization: We present high-level, decomposition-based algorithms for large-scale block-angular optimization problems containing integer variables, and demonstrate their effectiveness in the solution of large-scale graph partitioning problems. These algorithms combine the subproblem-coordination paradigm (and lower bounds) of price-directive decomposition methods with knapsack and genetic approaches to the utilization of "building blocks" of partial solutions. Even for graph partitioning problems requiring billions of variables in a standard 0-1 formulation, this approach produces high-quality solutions (as measured by deviations from an easily computed lower bound), and substantially outperforms widely-used graph partitioning techniques based on heuristics and spectral methods. 1-hop neighbor's text information: Optimal and asymptotically optimal equi-partition of rectangular domains via stripe decomposition. : We present an efficient method for the partitioning of rectangular domains into equi-area sub-domains of minimum total perimeter. For a variety of applications in parallel computation, this corresponds to a load-balanced distribution of tasks that minimizes interprocessor communication. Our method is based on utilizing, to the maximum extent possible, a set of optimal shapes for sub-domains. We prove that for a large class of these problems, we can construct solutions whose relative distance from a computable lower bound converges to zero as the problem size tends to infinity. PERIX-GA, a genetic algorithm employing this approach, has successfully solved to optimality million-variable instances of the perimeter-minimization problem and for a one-billion-variable problem has generated a solution within 0.32% of the lower bound. We report on the results of an implementation on a CM-5 supercomputer and make comparisons with other existing codes. Target text information: Fast equi-partitioning of rectangular domains using stripe decomposition. : This paper presents a fast algorithm that provides optimal or near-optimal solutions to the minimum perimeter problem on a rectangular grid. The minimum perimeter problem is to partition a grid of size M × N into P equal-area regions while minimizing the total perimeter of the regions. The approach taken here is to divide the grid into stripes that can be filled completely with an integer number of regions. This striping method gives rise to a knapsack integer program that can be efficiently solved by existing codes. The solution of the knapsack problem is then used to generate the grid region assignments. An implementation of the algorithm partitioned a 1000 × 1000 grid into 1000 regions to a provably optimal solution in less than one second. With sufficient memory to hold the M × N grid array, extremely large minimum perimeter problems can be solved easily. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,776
train
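The "easily computed lower bound" these partitioning abstracts measure against can be sketched directly: a connected grid region of area A has perimeter at least 2*ceil(2*sqrt(A)), so P equal-area regions cost at least P times that. This is a generic bound of the kind the papers compare against, stated under the assumption that M*N is divisible by P; the papers' own bounds may be tighter.

import math

def perimeter_lower_bound(M, N, P):
    # Total-perimeter lower bound for partitioning an M x N grid into
    # P equal-area regions: each region of area A = M*N/P has perimeter
    # at least 2 * ceil(2 * sqrt(A)), attained by near-square shapes.
    A = (M * N) / P
    return P * 2 * math.ceil(2 * math.sqrt(A))

For example, perimeter_lower_bound(1000, 1000, 1000) gives the benchmark against which the 1000 × 1000, P = 1000 instance above would be judged.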
1-hop neighbor's text information: (1997a) Bayesian time series: Models and computations for the analysis of time series in the physical sciences. In Maximum Entropy and Bayesian Methods 15, : This article discusses developments in Bayesian time series modelling and analysis relevant to studies of time series in the physical and engineering sciences. With illustrations and references, we discuss: Bayesian inference and computation in various state-space models, with examples in analysing quasi-periodic series; isolation and modelling of various components of error in time series; decompositions of time series into significant latent subseries; nonlinear time series models based on mixtures of auto-regressions; problems with errors and uncertainties in the timing of observations; and the development of non-linear models based on stochastic deformations of time scales. 1-hop neighbor's text information: Exploratory Modelling of Multiple Non-stationary Time Series: Latent Process Structure & Decompositions," in Modelling Longitudinal and Spatially Correlated Data, : We describe and illustrate Bayesian approaches to modelling and analysis of multiple non-stationary time series. This begins with univariate models for collections of related time series assumedly driven by underlying but unobservable processes, referred to as dynamic latent factor processes. We focus on models in which the factor processes, and hence the observed time series, are modelled by time-varying autoregressions capable of flexibly representing ranges of observed non-stationary characteristics. We highlight concepts and new methods of time series decomposition to infer characteristics of latent components in time series, and relate univariate decomposition analyses to underlying multivariate dynamic factor structure. Our motivating application is in analysis of multiple EEG traces from an ongoing EEG study at Duke. In this study, individuals undergoing ECT therapy generate multiple EEG traces at various scalp locations, and physiological interest lies in identifying dependencies and dissimilarities across series. In addition to the multivariate and non-stationary aspects of the series, this area provides illustration of the new results about decomposition of time series into latent, physically interpretable components; this is illustrated in data analysis of one EEG data set. The paper also discusses current and future research directions. * This research was supported in part by the National Science Foundation under grant DMS-9311071. The EEG data and context arose from discussions with Dr Andrew Krystal, of Duke University Medical Center, with whom continued interactions have been most valuable. Address for correspondence: Institute of Statistics and Decision Sciences, Duke University, Durham, NC 27708-0251 U.S.A. (http://www.stat.duke.edu) Target text information: (1997b) Modelling and robustness issues in Bayesian time series analysis (with discussion). In Bayesian Robustness 2, : Some areas of recent development and current interest in time series are noted, with some discussion of Bayesian modelling efforts motivated by substantial practical problems. The areas include non-linear auto-regressive time series modelling, measurement error structures in state-space modelling of time series, and issues of timing uncertainties and time deformations. Some discussion of the needs and opportunities for work on non/semi-parametric models and robustness issues is given in each context.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,008
test
1-hop neighbor's text information: Graphical Models in Applied Multivariate Statistics. : 1-hop neighbor's text information: Operations for learning with graphical models. : This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. 1-hop neighbor's text information: Exploiting tractable substructures in intractable networks. : We develop a refined mean field approximation for inference and learning in probabilistic neural networks. Our mean field theory, unlike most, does not assume that the units behave as independent degrees of freedom; instead, it exploits in a principled way the existence of large substructures that are computationally tractable. To illustrate the advantages of this framework, we show how to incorporate weak higher order interactions into a first-order hidden Markov model, treating the corrections (but not the first order structure) within mean field theory. Target text information: Belief Networks, Hidden Markov Models, and Markov Random Fields: a Unifying View: The use of graphs to represent independence structure in multivariate probability models has been pursued in a relatively independent fashion across a wide variety of research disciplines since the beginning of this century. This paper provides a brief overview of the current status of such research with particular attention to recent developments which have served to unify such seemingly disparate topics as probabilistic expert systems, statistical physics, image analysis, genetics, decoding of error-correcting codes, Kalman filters, and speech recognition with Markov models. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
134
test
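All of the formalisms the survey above unifies share one core operation: factorizing a joint distribution into local terms. For the directed case, a dictionary-based sketch follows; the encoding of parents and conditional probability tables here is hypothetical, chosen only for brevity.

def joint_prob(assignment, parents, cpt):
    # P(x) = product over variables v of P(x_v | x_parents(v)).
    # assignment: {var: value}; parents: {var: tuple of parent vars};
    # cpt[var]: {(value, parent_values_tuple): probability}.
    p = 1.0
    for v, val in assignment.items():
        pa_vals = tuple(assignment[u] for u in parents[v])
        p *= cpt[v][(val, pa_vals)]
    return p

Belief networks and HMMs (chain-structured graphs) instantiate this directed product; undirected Markov random fields use a normalized product of clique potentials instead.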
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: "Genetic and Non-Genetic Operators in Alecsys," : It is well known that standard learning classifier systems, when applied to many different domains, exhibit a number of problems; payoff oscillation, a difficult-to-regulate interplay between the reward system and the background genetic algorithm (GA), rule-chain instability, and default-hierarchy instability are only a few. ALECSYS is a parallel version of a standard learning classifier system (CS), and as such suffers from these same problems. In this paper we propose some innovative solutions to some of these problems. We introduce the following original features: Mutespec, a new genetic operator used to specialize potentially useful classifiers; Energy, a quantity introduced to measure global convergence in order to apply the genetic algorithm only when the system is close to a steady state; and dynamical adjustment of the classifier set cardinality, in order to speed up the performance phase of the algorithm. We present simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source. Target text information: On the Relations Between Search and Evolutionary Algorithms: Technical Report: CSRP-96-7 March 1996 Abstract Evolutionary algorithms are powerful techniques for optimisation whose operation principles are inspired by natural selection and genetics. In this paper we discuss the relation between evolutionary techniques and numerical and classical search methods, and we show that all these methods are instances of a single more general search strategy, which we call the `evolutionary computation cookbook'. By combining the features of classical and evolutionary methods in different ways, new instances of this general strategy can be generated, i.e. new evolutionary (or classical) algorithms can be designed. One such algorithm, GA fl, is described. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
532
test
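As a reference point for the "evolutionary computation cookbook" discussion above, a plain generational GA over bitstrings looks like this. The operator choices (fitness-proportional selection, one-point crossover, bitwise mutation) are one standard instantiation, not any specific algorithm from the papers above.

import random

def genetic_algorithm(fitness, length, pop_size=50, gens=100, p_mut=0.01, seed=0):
    # Generational GA over bitstrings. Assumes fitness(ind) > 0 so that
    # fitness-proportional selection weights are valid, and length >= 2.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        scores = [fitness(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            a, b = rng.choices(pop, weights=scores, k=2)  # select two parents
            cut = rng.randrange(1, length)                # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)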
1-hop neighbor's text information: Supervised learning from incomplete data via an EM approach. : Real-world learning tasks may involve high-dimensional data sets with arbitrary patterns of missing data. In this paper we present a framework based on maximum likelihood density estimation for learning from such data sets. We use mixture models for the density estimates and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977) in deriving a learning algorithm: EM is used both for the estimation of mixture components and for coping with missing data. The resulting algorithm is applicable to a wide range of supervised as well as unsupervised learning problems. Results from a classification benchmark, the iris data set, are presented. 1-hop neighbor's text information: Using Temporal-Difference Reinforcement Learning to Improve Decision-Theoretic Utilities for Diagnosis: Probability theory represents and manipulates uncertainties, but cannot tell us how to behave. For that we need utility theory, which assigns values to the usefulness of different states, and decision theory, which concerns optimal rational decisions. There are many methods for probability modeling, but few for learning utility and decision models. We use reinforcement learning to find the optimal sequence of questions in a diagnosis situation while maintaining a high accuracy. Automated diagnosis on a heart-disease domain is used to demonstrate that temporal-difference learning can improve diagnosis. On the Cleveland heart-disease database our results are better than those reported from all previous methods. 1-hop neighbor's text information: "Active Learning with Statistical Models," : For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. The authors were also funded by the McDonnell-Pew Foundation, ATR Human Information Processing Laboratories, Siemens Corporate Research, NSF grant CDA-9309300 and by grant N00014-94-1-0777 from the Office of Naval Research. Michael I. Jordan is an NSF Presidential Young Investigator. A version of this paper appears in G. Tesauro, D. Touretzky, and J. Alspector, eds., Advances in Neural Information Processing Systems 7. Morgan Kaufmann, San Francisco, CA (1995). Target text information: A mixture model system for medical and machine diagnosis. : Diagnosis is the process of identifying the disorders of a machine or a patient by considering its history, symptoms and other signs. Starting from possible initial information, new information is requested in a sequential manner and the diagnosis is made more precise. It is thus a missing data problem since not everything is known.
We model the joint probability distribution of the data from a case database with mixture models. Model parameters are estimated by the EM algorithm, which gives the additional benefit that missing data in the database itself can also be handled correctly. Requests for new information to refine the diagnosis are made using the maximum utility principle from decision theory. Since the system is based on machine learning, it is domain-independent. An example using a heart disease database is presented. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
45
val
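The "requests for new information" step above can be made concrete with a myopic value-of-information rule. The sketch below greedily asks the question with the largest expected reduction in diagnosis entropy, an information-gain stand-in for the maximum-utility principle the abstract actually invokes; the likelihood interface is hypothetical.

import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_question(posterior, questions, answers, likelihood):
    # posterior: {diagnosis: probability}; answers[q]: possible answers to q;
    # likelihood(q, a, d): P(answer a to question q | diagnosis d).
    h0 = entropy(posterior.values())
    best, best_gain = None, -1.0
    for q in questions:
        expected_h = 0.0
        for a in answers[q]:
            joint = {d: likelihood(q, a, d) * p for d, p in posterior.items()}
            pa = sum(joint.values())   # predictive probability of answer a
            if pa > 0:
                expected_h += pa * entropy(v / pa for v in joint.values())
        if h0 - expected_h > best_gain:
            best, best_gain = q, h0 - expected_h
    return best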
1-hop neighbor's text information: Supervised and unsupervised discretization of continuous features. : Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify defining characteristics of the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised discretization method, to entropy-based and purity-based methods, which are supervised algorithms. We found that the performance of the Naive-Bayes algorithm significantly improved when features were discretized using an entropy-based method. In fact, over the 16 tested datasets, the discretized version of Naive-Bayes slightly outperformed C4.5 on average. We also show that in some cases, the performance of the C4.5 induction algorithm significantly improved if features were discretized in advance; in our experiments, the performance never significantly degraded, an interesting phenomenon considering the fact that C4.5 is capable of locally discretizing features. 1-hop neighbor's text information: Constructive Induction from Data in AQ17-DCI: Further Experiments, Reports of the Machine Learning and Inference Laboratory, : 1-hop neighbor's text information: Inferential Theory of Learning: Developing Foundations for Multistrategy Learning, in Machine Learning: A Multistrategy Approach, Vol. IV, R.S. : The development of multistrategy learning systems should be based on a clear understanding of the roles and the applicability conditions of different learning strategies. To this end, this chapter introduces the Inferential Theory of Learning that provides a conceptual framework for explaining logical capabilities of learning strategies, i.e., their competence. Viewing learning as a process of modifying the learner's knowledge by exploring the learner's experience, the theory postulates that any such process can be described as a search in a knowledge space, triggered by the learner's experience and guided by learning goals. The search operators are instantiations of knowledge transmutations, which are generic patterns of knowledge change. Transmutations may employ any basic type of inference: deduction, induction, or analogy. Several fundamental knowledge transmutations are described in a novel and general way, such as generalization, abstraction, explanation and similization, and their counterparts, specialization, concretion, prediction and dissimilization, respectively. Generalization enlarges the reference set of a description (the set of entities that are being described). Abstraction reduces the amount of detail about the reference set. Explanation generates premises that explain (or imply) the given properties of the reference set. Similization transfers knowledge from one reference set to a similar reference set. Using concepts of the theory, a multistrategy task-adaptive learning (MTL) methodology is outlined, and illustrated by an example. MTL dynamically adapts strategies to the learning task, defined by the input information, the learner's background knowledge, and the learning goal. It aims at synergistically integrating a whole range of inferential learning strategies, such as empirical generalization, constructive induction, deductive generalization, explanation, prediction, abstraction, and similization.
Target text information: Machine Learning and Inference: Constructive induction divides the problem of learning an inductive hypothesis into two intertwined searches: one for the best representation space, and two for the best hypothesis in that space. In data-driven constructive induction (DCI), a learning system searches for a better representation space by analyzing the input examples (data). The presented data-driven constructive induction method combines an AQ-type learning algorithm with two classes of representation space improvement operators: constructors and destructors. The implemented system, AQ17-DCI, has been experimentally applied to a GNP prediction problem using a World Bank database. The results show that decision rules learned by AQ17-DCI outperformed the rules learned in the original representation space both in predictive accuracy and rule simplicity. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
817
test
1-hop neighbor's text information: Static Data Association with a Terrain-Based Prior Density: Target text information: Selection of Distance Metrics and Feature Subsets for k-Nearest Neighbor Classifiers. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,180
test
1-hop neighbor's text information: Selection of Relevant Features in Machine Learning. : In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a general framework that we use to compare different methods. We close with some challenges for future work in this area. 1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. 1-hop neighbor's text information: Tibshirani (1994) Combining Estimates in Regression and Classification, : We consider the problem of how to combine a collection of general regression fit vectors in order to obtain a better predictive model. The individual fits may be from subset linear regression, ridge regression, or something more complex like a neural network. We develop a general framework for this problem and examine a recent cross-validation-based proposal called "stacking" in this context. Combination methods based on the bootstrap and analytic methods are also derived and compared in a number of examples, including best subsets regression and regression trees. Finally, we apply these ideas to classification problems where the estimated combination weights can yield insight into the structure of the problem. Target text information: A Method of Combining Multiple Probabilistic Classifiers through Soft Competition on Different Feature Sets: A novel method is proposed for combining multiple probabilistic classifiers on different feature sets. In order to achieve improved classification performance, a generalized finite mixture model is proposed as a linear combination scheme and implemented based on radial basis function networks.
In the linear combination scheme, soft competition on different feature sets serves as an automatic feature-ranking mechanism, so that the different feature sets can always be used simultaneously, in an optimal way, to determine the linear combination weights. For training the linear combination scheme, a learning algorithm is developed based on the Expectation-Maximization (EM) algorithm. The proposed method has been applied to a typical real-world problem, viz. speaker identification, in which different feature sets often need to be considered simultaneously for robustness. Simulation results show that the proposed method yields good performance in speaker identification. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
233
val
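The soft-competition record above rests on an EM-style update for linear combination weights. Below is a minimal sketch of that idea, assuming K fixed, pre-trained classifiers whose probabilities for the true labels are collected into a matrix; the RBF networks and per-feature-set competition of the actual paper are omitted.

```python
import numpy as np

def em_combination_weights(probs, n_iter=100):
    """EM for the mixture weights of K fixed probabilistic classifiers.

    probs: array of shape (N, K); probs[i, k] is classifier k's
    probability for the true label of example i.
    Returns non-negative weights that sum to one.
    """
    n, k = probs.shape
    w = np.full(k, 1.0 / k)  # start from a uniform combination
    for _ in range(n_iter):
        joint = probs * w                                 # E-step: unnormalised posteriors
        resp = joint / joint.sum(axis=1, keepdims=True)   # responsibilities per example
        w = resp.mean(axis=0)                             # M-step: average responsibility
    return w
```

Classifiers that are consistently confident on the true labels absorb responsibility from the others, which is the "soft competition" effect in miniature.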
1-hop neighbor's text information: MML and Bayesianism: similarities and differences. : Tech Report 207 Department of Computer Science, Monash University, Clayton, Vic. 3168, Australia Abstract: This paper continues the introduction to minimum encoding inductive inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We describe the message length estimates used in Wallace's Minimum Message Length (MML) inference and Rissanen's Minimum Description Length (MDL) inference. The differences in the message length estimates of the two approaches are explained. The implications of these differences for applications are discussed. Target text information: Causal Discovery via MML: Automating the learning of causal models from sample data is a key step toward incorporating machine learning in the automation of decision-making and reasoning under uncertainty. This paper presents a Bayesian approach to the discovery of causal models, using a Minimum Message Length (MML) method. We have developed encoding and search methods for discovering linear causal models. The initial experimental results presented in this paper show that the MML induction approach can recover causal models from generated data which are quite accurate reflections of the original models; our results compare favorably with those of the TETRAD II program of Spirtes et al. [25] even when their algorithm is supplied with prior temporal information and MML is not. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
432
test
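To make the minimum-message-length idea in the record above concrete, here is a crude two-part code length in the MDL style; the (k/2) log n parameter cost is a textbook approximation and deliberately much simpler than the paper's MML encoding for linear causal models.

```python
import numpy as np

def two_part_score(log_likelihood, n_params, n_samples):
    """Two-part message length (in nats): parameter cost plus data cost.

    Lower is better; the model whose total message is shortest wins.
    Real MML encodings state parameters to a precision tuned to the
    data, which the rough (k/2) * log(n) term below only approximates.
    """
    return 0.5 * n_params * np.log(n_samples) - log_likelihood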
1-hop neighbor's text information: Rigorous learning curve bounds from statistical mechanics. : In this paper we introduce and investigate a mathematically rigorous theory of learning curves that is based on ideas from statistical mechanics. The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior (functional form) of learning curves. This behavior can often exhibit dramatic properties such as phase transitions, as well as power law asymptotics not explained by the VC theory. The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes. We illustrate our results with many concrete examples of learning curve bounds derived from our theory. 1-hop neighbor's text information: A bound on the error of Cross Validation using the approximation and estimation rates, with consequences for the training-test split. : We give an analysis of the generalization error of cross validation in terms of two natural measures of the difficulty of the problem under consideration: the approximation rate (the accuracy to which the target function can be ideally approximated as a function of the number of hypothesis parameters), and the estimation rate (the deviation between the training and generalization errors as a function of the number of hypothesis parameters). The approximation rate captures the complexity of the target function with respect to the hypothesis model, and the estimation rate captures the extent to which the hypothesis model suffers from overfitting. Using these two measures, we give a rigorous and general bound on the error of cross validation. The bound clearly shows the tradeoffs involved with making γ, the fraction of data saved for testing, too large or too small. By optimizing the bound with respect to γ, we then argue (through a combination of formal analysis, plotting, and controlled experimentation) that the following qualitative properties of cross validation behavior should be quite robust to significant changes in the underlying model selection problem: 1-hop neighbor's text information: Toward efficient agnostic learning. : In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. Target text information: Algorithmic stability and sanity-check bounds for leave-one-out cross-validation.
: In this paper we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. The name sanity-check refers to the fact that although we often expect the leave-one-out estimate to perform considerably better than the training error estimate, we are here only seeking assurance that its performance will not be considerably worse. Perhaps surprisingly, such assurance has been given only for limited cases in the prior literature on cross-validation. Any nontrivial bound on the error of leave-one-out must rely on some notion of algorithmic stability. Previous bounds relied on the rather strong notion of hypothesis stability, whose application was primarily limited to nearest-neighbor and other local algorithms. Here we introduce the new and weaker notion of error stability, and apply it to obtain sanity-check bounds for leave-one-out for other classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. We also provide lower bounds demonstrating the necessity of some form of error stability for proving bounds on the error of the leave-one-out estimate, and the fact that for training error minimization algorithms, in the worst case such bounds must still depend on the Vapnik-Chervonenkis dimension of the hypothesis class. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
2,695
test
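The leave-one-out estimate discussed in the record above is easy to state in code. A minimal sketch for the 1-nearest-neighbour rule (one of the "local" algorithms covered by hypothesis stability) follows; the Euclidean metric is an assumption.

```python
import numpy as np

def loo_error_1nn(X, y):
    """Leave-one-out error of the 1-nearest-neighbour classifier.

    Each point is classified by its nearest *other* point, so the
    estimate never trains on the example being tested.
    """
    n = len(y)
    mistakes = 0
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf  # exclude the held-out point itself
        mistakes += int(y[np.argmin(dist)] != y[i])
    return mistakes / n
```

For 1-NN the training error is trivially zero, which illustrates how differently the two estimates can behave and why relating them requires a stability notion.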
1-hop neighbor's text information: Identification and control of nonlinear systems using neural network models: Design and stability analysis. : Report 91-09-01 September 1991 (revised) May 1994 Target text information: Some topics in neural networks and control. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,560
test
1-hop neighbor's text information: Strongly typed genetic programming in evolving cooperation strategies. : 1-hop neighbor's text information: Type inheritance in strongly typed genetic programming. : This paper appears as chapter 18 of Kenneth E. Kinnear, Jr. and Peter J. Angeline, editors Advances in Genetic Programming 2, MIT Press, 1996. Abstract Genetic Programming (GP) is an automatic method for generating computer programs, which are stored as data structures and manipulated to evolve better programs. An extension restricting the search space is Strongly Typed Genetic Programming (STGP), which has, as a basic premise, the removal of closure by typing both the arguments and return values of functions, and by also typing the terminal set. A restriction of STGP is that there are only two levels of typing. We extend STGP by allowing a type hierarchy, which allows more than two levels of typing. Target text information: Clique detection via genetic programming. : Genetic programming is applied to the task of finding all of the cliques in a graph. Nodes in the graph are represented as tree structures, which are then manipulated to form candidate cliques. The intrinsic properties of clique detection complicate the design of a good fitness evaluation. We analyze those properties, and show that the clique detector is better at finding the maximum clique in the graph than at finding the set of all cliques. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,064
test
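The fitness-design difficulty mentioned in the clique-detection record above can be illustrated with a toy scoring function. This is a hypothetical fitness over node sets, not the paper's tree-based representation: it rewards candidate size, but only in proportion to the fraction of required edges actually present. The graph is assumed undirected and stored as symmetric adjacency sets.

```python
from itertools import combinations

def clique_fitness(nodes, adj):
    """Score a candidate node set against an adjacency dict of sets.

    A true clique of size m scores m; missing edges scale the score
    down, so near-cliques still provide a gradient for the search.
    """
    nodes = list(set(nodes))
    if len(nodes) < 2:
        return 0.0
    pairs = list(combinations(nodes, 2))
    present = sum(1 for u, v in pairs if v in adj[u])
    return len(nodes) * present / len(pairs)
```

A size-weighted score like this also hints at why such detectors drift toward the maximum clique rather than enumerating all cliques.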
1-hop neighbor's text information: Olfaction Metal Oxide Semiconductor Gas Sensors and Neural Networks: Target text information: "Gas Identification System using Graded Temperature Sensor and Neural Net Interpretation", : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,976
test
1-hop neighbor's text information: "Using Modeling Knowledge to Guide Design Space Search". : Automated search of a space of candidate designs seems an attractive way to improve the traditional engineering design process. To make this approach work, however, the automated design system must include both knowledge of the modeling limitations of the method used to evaluate candidate designs and also an effective way to use this knowledge to influence the search process. We suggest that a productive approach is to include this knowledge by implementing a set of model constraint functions which measure how much each modeling assumption is violated, and to influence the search by using the values of these model constraint functions as constraint inputs to a standard constrained nonlinear optimization numerical method. We test this idea in the domain of conceptual design of supersonic transport aircraft, and our experiments indicate that our model constraint communication strategy can decrease the cost of design space search by one or more orders of magnitude. Target text information: "Intelligent Gradient-Based Search of Incompletely Defined Design Spaces". : Gradient-based numerical optimization of complex engineering designs offers the promise of rapidly producing better designs. However, such methods generally assume that the objective function and constraint functions are continuous, smooth, and defined everywhere. Unfortunately, realistic simulators tend to violate these assumptions. We present a knowledge-based technique for intelligently computing gradients in the presence of such pathologies in the simulators, and show how this gradient computation method can be used as part of a gradient-based numerical optimization system. We tested the resulting system in the domain of conceptual design of supersonic transport aircraft, and found that using knowledge-based gradients can decrease the cost of design space search by one or more orders of magnitude. Acknowledgments: We thank our aircraft design expert, Gene Bouchard of Lockheed, for his invaluable assistance in this research. We also thank all members of the HPCD project, especially Tom Ellman, Keith Miyake, and Don Smith. This research was partially supported by NASA under grant NAG2-817 and is also part of the Rutgers-based HPCD (Hypercomputing and Design) project supported by the Advanced Research Projects Agency of the Department of Defense through contract ARPA-DABT 63-93-C-0064. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,077
test
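Both abstracts in the record above turn "how much each modeling assumption is violated" into numbers an optimizer can use. A minimal sketch of that coupling, assuming black-box objective and constraint functions and a simple quadratic penalty; the actual systems feed the constraints to a dedicated constrained-optimization routine rather than penalizing them this way.

```python
import numpy as np

def penalized(x, objective, constraints, mu=10.0):
    """Objective plus quadratic penalties on model-constraint violations.

    Each g in `constraints` returns how badly a modeling assumption is
    violated (values <= 0 mean satisfied), steering the search back
    toward designs the simulator can evaluate credibly.
    """
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + mu * violation

def numeric_gradient(f, x, eps=1e-6):
    """Central-difference gradient for a black-box simulator."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        h = np.zeros_like(x)
        h[i] = eps
        grad[i] = (f(x + h) - f(x - h)) / (2.0 * eps)
    return grad
```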
1-hop neighbor's text information: Modelling risk from a disease in time and space, : This paper combines existing models for longitudinal and spatial data in a hierarchical Bayesian framework, with particular emphasis on the role of time- and space-varying covariate effects. Data analysis is implemented via Markov chain Monte Carlo methods. The methodology is illustrated by a tentative re-analysis of Ohio lung cancer data 1968-88. Two approaches that adjust for unmeasured spatial covariates, particularly tobacco consumption, are described. The first includes random effects in the model to account for unobserved heterogeneity; the second adds a simple urbanization measure as a surrogate for smoking behaviour. The Ohio dataset has been of particular interest because of the suggestion that a nuclear facility in the southwest of the state may have caused increased levels of lung cancer there. However, we contend here that the data are inadequate for a proper investigation of this issue. Email: [email protected] Target text information: (1993) Bayesian inference for agricultural field experiments. : SUMMARY The paper describes Bayesian analysis for agricultural field experiments, a topic that has received very little previous attention, despite a vast frequentist literature. Adoption of the Bayesian paradigm simplifies the interpretation of the results, especially in ranking and selection. Also, complex formulations can be analyzed with comparative ease, using Markov chain Monte Carlo methods. A key ingredient in the approach is the need for spatial representations of the unobserved fertility patterns. This is discussed in detail. Problems caused by outliers and by jumps in fertility are tackled via hierarchical-t formulations that may find use in other contexts. The paper includes three analyses of variety trials for yield and one example involving binary data; none is entirely straightforward. Some comparisons with frequentist analyses are made. The datasets are available at http://www.stat.duke.edu/~higdon/trials/data.html. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,251
test
1-hop neighbor's text information: Regression shrinkage and selection via the lasso. : We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described. Target text information: A proposal for variable selection in the Cox model: We propose a new method for variable selection and estimation in Cox's proportional hazards model. Our proposal minimizes the log partial likelihood subject to the sum of the absolute values of the parameters being bounded by a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. The method is a variation of the "lasso" proposal of Tibshirani (1994), designed for the linear regression context. Simulations indicate that the lasso can be more accurate than stepwise selection in this setting. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
407
val
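The L1-constrained estimation in the record above has a well-known coordinate-descent solution for the least-squares case. The sketch below handles the linear-regression lasso only; the target paper's Cox partial-likelihood version needs additional machinery, and the columns of X are assumed non-degenerate.

```python
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward zero by t; this is what makes coefficients exactly zero."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise 0.5 * ||y - X b||^2 + lam * ||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]  # residual with feature j removed
            b[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return b
```

The soft-thresholding step is what zeroes out some coefficients exactly, which is the interpretability property both abstracts emphasise.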
1-hop neighbor's text information: Neuro-dynamic Programming. : 1-hop neighbor's text information: Correlated action effects in decision-theoretic regression. : Much recent research in decision theoretic planning has adopted Markov decision processes (MDPs) as the model of choice, and has attempted to make their solution more tractable by exploiting problem structure. One particular algorithm, structured policy construction achieves this by means of a decision theoretic analog of goal regression, using action descriptions based on Bayesian networks with tree-structured conditional probability tables. The algorithm as presented is not able to deal with actions with correlated effects. We describe a new decision theoretic regression operator that corrects this weakness. While conceptually straightforward, this extension requires a somewhat more complicated technical approach. 1-hop neighbor's text information: Approximating value trees in structured dynamic programming. : We propose and examine a method of approximate dynamic programming for Markov decision processes based on structured problem representations. We assume an MDP is represented using a dynamic Bayesian network, and construct value functions using decision trees as our function representation. The size of the representation is kept within acceptable limits by pruning these value trees so that leaves represent possible ranges of values, thus approximating the value functions produced during optimization. We propose a method for detecting convergence, prove errors bounds on the resulting approximately optimal value functions and policies, and describe some preliminary experi mental results. Target text information: Structured Reachability Analysis for Markov Decision Processes: Recent research in decision theoretic planning has focussed on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Using compact, structured representations of MDPs (e.g., Bayesian networks), our methods, which vary in the tradeoff between complexity and accuracy, produce structured descriptions of (estimated) reachable states that can be used to eliminate variables or variable values from the problem description, reducing the size of the MDP and making it easier to solve. One contribution of our work is the extension of ideas from GRAPHPLAN to deal with the distributed nature of action representations typically embodied within Bayes nets and the problem of correlated action effects. We also demonstrate that our algorithm can be made more complete by using k-ary constraints instead of binary constraints. Another contribution is the illustration of how the compact representation of reachability constraints can be exploited by several existing (exact and approximate) abstraction algorithms for MDPs. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
221
train
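The reachability computation in the record above is, at its core, a fixpoint over successor states. A minimal explicit-state sketch follows; the paper's contribution is doing this over compact Bayes-net representations rather than enumerated states, which this sketch deliberately ignores.

```python
def reachable_states(initial, successors):
    """Fixpoint reachability from a set of initial states.

    `successors(s)` yields every state any action can reach from s.
    States never returned here can be pruned from the MDP before
    solving, which is how reachability analysis shrinks the problem.
    """
    reached = set(initial)
    frontier = list(initial)
    while frontier:
        s = frontier.pop()
        for t in successors(s):
            if t not in reached:
                reached.add(t)
                frontier.append(t)
    return reached
```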
1-hop neighbor's text information: "Automated WYSIWYG Design of both the topology and component values of electrical circuits using genetic programming," : Genetic programming was used to evolve both the topology and sizing (numerical values) for each component of a low-distortion, low
3
Genetic Algorithms
cora
2,110
test
1-hop neighbor's text information: Monitoring in Embedded Agents: Finding good monitoring strategies is an important process in the design of any embedded agent. We describe the nature of the monitoring problem, point out what makes it difficult, and show that while periodic monitoring strategies are often the easiest to derive, they are not always the most appropriate. We demonstrate mathematically and empirically that for a wide class of problems, the so-called "cupcake problems", there exists a simple strategy, interval reduction, that outperforms periodic monitoring. We also show how features of the environment may influence the choice of the optimal strategy. The paper concludes with some thoughts about a monitoring strategy taxonomy, and what its defining features might be. 1-hop neighbor's text information: Stochastic: Random or probabilistic, but with some direction. For example, the arrival of people at: Simulated Annealing: Search technique where a single trial solution is modified at random. An energy is defined which represents how good the solution is. The goal is to find the best solution by minimising the energy. Changes which lead to a lower energy are always accepted; an increase is probabilistically accepted. The probability is given by exp(-ΔE/k_B T), where ΔE is the change in energy, k_B is a constant and T is the temperature. Initially the temperature is high, corresponding to a liquid or molten state where large changes are possible, and it is progressively reduced using a cooling schedule, allowing smaller changes until the system solidifies at a low energy solution. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. Target text information: Learning monitoring strategies: A difficult genetic programming application. : Finding optimal or at least good monitoring strategies is an important consideration when designing an agent. We have applied genetic programming to this task, with mixed results. Since the agent control language was kept purposefully general, the set of monitoring strategies constitutes only a small part of the overall space of possible behaviors. Because of this, it was often difficult for the genetic algorithm to evolve them, even though their performance was superior. These results raise questions as to how easy it will be for genetic programming to scale up as the areas it is applied to become more complex. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,083
test
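The simulated-annealing acceptance rule quoted in the record above translates directly into code. A minimal sketch, assuming a user-supplied `neighbour` proposal and a geometric cooling schedule (the glossary fragment does not specify one); the constant k_B is absorbed into T here.

```python
import math
import random

def anneal(energy, neighbour, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Minimise `energy` by simulated annealing.

    Downhill moves are always accepted; an increase dE is accepted
    with probability exp(-dE / T), and T shrinks each step.
    """
    x, e, t = x0, energy(x0), t0
    for _ in range(steps):
        cand = neighbour(x)
        de = energy(cand) - e
        if de <= 0 or random.random() < math.exp(-de / t):
            x, e = cand, e + de
        t *= cooling
    return x, e
```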
1-hop neighbor's text information: Markov chain Monte Carlo methods based on "slicing" the density function. : Technical Report No. 9722, Department of Statistics, University of Toronto Abstract. One way to sample from a distribution is to sample uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `slice' defined by the current vertical position. Variations on such `slice sampling' methods can easily be implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and may be more efficient than easily-constructed versions of the Metropolis algorithm. Slice sampling is therefore attractive in routine Markov chain Monte Carlo applications, and for use by software that automatically generates a Markov chain sampler from a model specification. One can also easily devise overrelaxed versions of slice sampling, which sometimes greatly improve sampling efficiency by suppressing random walk behaviour. Random walks can also be avoided in some slice sampling schemes that simultaneously update all variables. 1-hop neighbor's text information: Convergence Rates of Markov Chains. : In this paper, we analyse theoretical properties of the slice sampler. We find that the algorithm has extremely robust geometric ergodicity properties. For the case of just one auxiliary variable, we demonstrate that the algorithm is stochastically monotone, and deduce analytic bounds on the total variation distance from stationarity of the method using Foster-Lyapunov drift condition methodology. 1-hop neighbor's text information: On convergence rates of Gibbs samplers for uniform distributions. : We consider a Gibbs sampler applied to the uniform distribution on a bounded region R ⊆ R^d. We show that the convergence properties of the Gibbs sampler depend greatly on the smoothness of the boundary of R. Indeed, for sufficiently smooth boundaries the sampler is uniformly ergodic, while for jagged boundaries the sampler could fail to even be geometrically ergodic. Target text information: Auxiliary variable methods for Markov chain Monte Carlo with applications. : Suppose one wishes to sample from the density π(x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution π(u|x) can be defined, giving the joint distribution π(x, u) = π(x)π(u|x). An MCMC scheme which samples over this joint distribution can lead to substantial gains in efficiency compared to standard approaches. The revolutionary algorithm of Swendsen and Wang (1987) is one such example. In addition to reviewing the Swendsen-Wang algorithm and its generalizations, this paper introduces a new auxiliary variable method called partial decoupling. Two applications in Bayesian image analysis are considered. The first is a binary classification problem in which partial decoupling outperforms SW and single site Metropolis. The second is a PET reconstruction which uses the gray level prior of Geman and McClure (1987). A generalized Swendsen-Wang algorithm is developed for this problem, which reduces the computing time to the point that MCMC is a viable method of posterior exploration. I provide the content of the target node and its neighbors' information.
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
958
test
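The alternation described in the slice-sampling record above, a vertical uniform draw followed by a horizontal draw from the slice, fits in a few lines. This sketch uses a fixed initial bracket with shrinkage, a simplification of Neal's stepping-out procedure, so it is only exact when the slice lies inside the bracket.

```python
import random

def slice_sample(f, x, width=1.0, n=1000):
    """Univariate slice sampling for an unnormalised density f > 0.

    Alternates u ~ Uniform(0, f(x)) with uniform proposals from a
    horizontal interval, shrinking the interval after each rejection.
    """
    samples = []
    for _ in range(n):
        u = random.uniform(0.0, f(x))   # vertical: pick a slice height
        lo, hi = x - width, x + width   # crude initial bracket around x
        while True:                     # horizontal: sample within the slice
            x_new = random.uniform(lo, hi)
            if f(x_new) > u:
                x = x_new
                break
            if x_new < x:
                lo = x_new              # shrink toward the current point
            else:
                hi = x_new
        samples.append(x)
    return samples
```

Because the current point always stays inside the shrinking bracket, the inner loop terminates with probability one.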
1-hop neighbor's text information: 'A case study in machine learning', : This paper tries to identify rules and factors that are predictive for the outcome of international conflict management attempts. We use C4.5, an advanced Machine Learning algorithm, for generating decision trees and prediction rules from cases in the CONFMAN database. The results show that simple patterns and rules are often not only more understandable, but also more reliable than complex rules. Simple decision trees are able to improve the chances of correctly predicting the outcome of a conflict management attempt. This suggests that mediation is more repetitive than conflicts per se, where such results have not been achieved so far. 1-hop neighbor's text information: A common lisp hypermedia server. : A World-Wide Web (WWW) server was implemented in Common LISP in order to facilitate exploratory programming in the global hypermedia domain and to provide access to complex research programs, particularly artificial intelligence systems. The server was initially used to provide interfaces for document retrieval and for email servers. More advanced applications include interfaces to systems for inductive rule learning and natural-language question answering. Continuing research seeks to more fully generalize automatic form-processing techniques developed for email servers to operate seamlessly over the Web. The conclusions argue that presentation-based interfaces and more sophisticated form processing should be moved into the clients in order to reduce the load on servers and provide more advanced interaction models for users. Target text information: "Beyond Correlation: Bringing Artificial Intelligence to Event Data," International Interactions, : The Feature Vector Editor offers a user-extensible environment for exploratory data analysis. Several empirical studies have applied this environment to the SHERFACS International Conflict Management dataset. Current analysis techniques include boolean analysis, temporal analysis, and automatic rule learning. Implemented portably in ANSI Common Lisp and the Common Lisp Interface Manager (CLIM), the system features an advanced interface that makes it intuitive for people to manipulate data and discover significant relationships. The system encapsulates data within objects and defines generic protocols that mediate all interactions between data, users and analysis algorithms. Generic data protocols make possible rapid integration of new datasets and new analytical algorithms with heterogeneous data formats. More sophisticated research reformulates SHERFACS conflict codings as machine-parsable narratives suitable for processing into semantic representations by the RELATUS Natural Language System. Experiments with 244 SHERFACS cases demonstrated the feasibility of building knowledge bases from synthetic texts exceeding 600 pages. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
2,344
test
1-hop neighbor's text information: Global conditioning for probabilistic inference in belief networks. : In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully. 1-hop neighbor's text information: Logarithmic-time updates and queries in probabilistic networks. : Traditional databases commonly support efficient query and update procedures that operate in time which is sublinear in the size of the database. Our goal in this paper is to take a first step toward dynamic reasoning in probabilistic databases with comparable efficiency. We propose a dynamic data structure that supports efficient algorithms for updating and querying singly connected Bayesian networks. In the conventional algorithm, new evidence is absorbed in time O(1) and queries are processed in time O(N), where N is the size of the network. We propose an algorithm which, after a preprocessing phase, allows us to answer queries in time O(log N) at the expense of O(log N) time per evidence absorption. The usefulness of sub-linear processing time manifests itself in applications requiring (near) real-time response over large probabilistic databases. We briefly discuss a potential application of dynamic probabilistic reasoning in computational biology. Target text information: Logarithmic Time Parallel Bayesian Inference: I present a parallel algorithm for exact probabilistic inference in Bayesian networks. For polytree networks with n variables, the worst-case time complexity is O(log n) on a CREW PRAM (concurrent-read, exclusive-write parallel random-access machine) with n processors, for any constant number of evidence variables. For arbitrary networks, the time complexity is O(r^(3w) log n) for n processors, or O(w log n) for r^(3w) n processors, where r is the maximum range of any variable, and w is the induced width (the maximum clique size), after moralizing and triangulating the network. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
693
test
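For intuition about the record above: sequential inference in a chain-structured network is a left-to-right product of CPT matrices, and it is the associativity of those products that lets a PRAM rebalance them into a log-depth tree. The sketch below shows only the O(n) sequential version, assuming discrete variables with explicit CPT matrices; it is intuition for, not the algorithm of, the target paper.

```python
import numpy as np

def chain_marginal(prior, transitions):
    """Marginal of the last variable in a chain x1 -> x2 -> ... -> xn.

    prior: vector p(x1); transitions[k][i, j] = p(next=j | current=i)
    for the k-th edge. Sequentially this is O(n) matrix-vector
    products; pairing the matrix products in a balanced tree is what
    yields the O(log n) parallel time on enough processors.
    """
    p = np.asarray(prior, dtype=float)
    for T in transitions:
        p = p @ T
    return p
```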
1-hop neighbor's text information: The power of team exploration: Two robots can learn unlabeled directed graphs. : We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by wandering actively through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs. 1-hop neighbor's text information: Learning Algorithms with Applications to Robot Navigation and Protein Folding: 1-hop neighbor's text information: The Power of a Pebble: Exploring and Mapping Directed Graphs: Exploring and mapping an unknown environment is a fundamental problem, which is studied in a variety of contexts. Many works have focused on finding efficient solutions to restricted versions of the problem. In this paper, we consider a model that makes very limited assumptions on the environment and solve the mapping problem in this general setting. We model the environment by an unknown directed graph G, and consider the problem of a robot exploring and mapping G. We do not assume that the vertices of G are labeled, and thus the robot has no hope of succeeding unless it is given some means of distinguishing between vertices. For this reason we provide the robot with a pebble: a device that it can place on a vertex and use to identify the vertex later. In this paper we show: (1) If the robot knows an upper bound on the number of vertices then it can learn the graph efficiently with only one pebble. (2) If the robot does not know an upper bound on the number of vertices n, then Θ(log log n) pebbles are both necessary and sufficient. In both cases our algorithms are deterministic. Target text information: Exactly learning automata with small cover time. : We present algorithms for exactly learning unknown environments that can be described by deterministic finite automata. The learner performs a walk on the target automaton, where at each step it observes the output of the state it is at, and chooses a labeled edge to traverse to the next state. The learner has no means of a reset, and does not have access to a teacher that answers equivalence queries and gives the learner counterexamples to its hypotheses. We present two algorithms: The first is for the case in which the outputs observed by the learner are always correct, and the second is for the case in which the outputs might be corrupted by random noise. The running times of both algorithms are polynomial in the cover time of the underlying graph of the target automaton. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
929
test
1-hop neighbor's text information: Evolving sensors in environments of controlled complexity. : Sensors represent a crucial link between the evolutionary forces shaping a species' relationship with its environment, and the individual's cognitive abilities to behave and learn. We report on experiments using a new class of "latent energy environments" (LEE) models to define environments of carefully controlled complexity which allow us to state bounds for random and optimal behaviors that are independent of strategies for achieving the behaviors. Using LEE's analytic basis for defining environments, we then use neural networks (NNets) to model individuals and a steady-state genetic algorithm to model an evolutionary process shaping the NNets, in particular their sensors. Our experiments consider two types of "contact" and "ambient" sensors, and variants where the NNets are not allowed to learn, learn via error correction from internal prediction, and via reinforcement learning. We find that predictive learning, even when using a larger repertoire of the more sophisticated ambient sensors, provides no advantage over NNets unable to learn. However, reinforcement learning using a small number of crude contact sensors does provide a significant advantage. Our analysis of these results points to a tradeoff between the genetic "robustness" of sensors and their informativeness to a learning system. 1-hop neighbor's text information: Tracking the red queen: Measurements of adaptive progress in co-evolutionary simulations. : Co-evolution can give rise to the "Red Queen effect", where interacting populations alter each other's fitness landscapes. The Red Queen effect significantly complicates any measurement of co-evolutionary progress, introducing fitness ambiguities where improvements in performance of co-evolved individuals can appear as a decline or stasis in the usual measures of evolutionary progress. Unfortunately, no appropriate measures of fitness given the Red Queen effect have been developed in artificial life, theoretical biology, population dynamics, or evolutionary genetics. We propose a set of appropriate performance measures based on both genetic and behavioral data, and illustrate their use in a simulation of co-evolution between genetically specified continuous-time noisy recurrent neural networks which generate pursuit and evasion behaviors in autonomous agents. 1-hop neighbor's text information: The evolutionary cost of learning. : Traits that are acquired by members of an evolving population during their lifetime, through adaptive processes such as learning, can become genetically specified in later generations. Thus there is a change in the level of learning in the population over evolutionary time. This paper explores the idea that as well as the benefits to be gained from learning, there may also be costs to be paid for the ability to learn. It is these costs that supply the selection pressure for the genetic assimilation of acquired traits. Two models are presented that attempt to illustrate this assertion. The first uses Kauffman's NK fitness landscapes to show the effect that both explicit and implicit costs have on the assimilation of learnt traits. A characteristic `hump' is observed in the graph of the level of plasticity in the population showing that learning is first selected for and then against as evolution progresses. The second model is a practical example in which neural network controllers are evolved for a small mobile robot.
Results from this experiment also show the hump. Target text information: Guiding or Hiding: Explorations into the Effects of Learning on the Rate of Evolution.: Individual lifetime learning can `guide' an evolving population to areas of high fitness in genotype space through an evolutionary phenomenon known as the Baldwin effect (Baldwin, 1896; Hinton & Nowlan, 1987). It is the accepted wisdom that this guiding speeds up the rate of evolution. By highlighting another interaction between learning and evolution, that will be termed the Hiding effect, it will be argued here that this depends on the measure of evolutionary speed one adopts. The Hiding effect shows that learning can reduce the selection pressure between individuals by `hiding' their genetic differences. There is thus a trade-off between the Baldwin effect and the Hiding effect to determine learning's influence on evolution and two factors that contribute to this trade-off, the cost of learning and landscape epistasis, are investigated experimentally. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
173
test
1-hop neighbor's text information: On the usefulness of re-using diagnostic solutions. : Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class of diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework. 1-hop neighbor's text information: A comparative utility analysis of case-based reasoning and control-rule learning systems. : The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems. 1-hop neighbor's text information: Adapter: an integrated diagnostic system combining case-based and abductive reasoning. : The aim of this paper is to describe the ADAPtER system, a diagnostic architecture combining case-based reasoning with abductive reasoning and exploiting the adaptation of the solution of old episodes, in order to focus the reasoning process. Domain knowledge is represented via a logical model and basic mechanisms, based on abductive reasoning with consistency constraints, have been defined for solving complex diagnostic problems involving multiple faults. The model-based component has been supplemented with a case memory and adaptation mechanisms have been developed, in order to make the diagnostic system able to exploit past experience in solving new cases. A heuristic function is proposed, able to rank the solutions associated to retrieved cases with respect to the adaptation effort needed to transform such solutions into possible solutions for the current case. We will discuss some preliminary experiments showing the validity of the above heuristic and the convenience of solving a new case by adapting a retrieved solution rather than solving the new problem from scratch. Target text information: A utility-based approach to learning in a mixed Case-Based and Model-Based Reasoning architecture: Case-based reasoning (CBR) can be used as a form of "caching" solved problems to speed up later problem solving. Using "cached" cases brings additional costs with it due to retrieval time, case adaptation time and also storage space. Simply storing all cases will result in a situation in which retrieving and trying to adapt old cases will take more time (on average) than not caching at all.
This means that caching must be applied selectively to build a case memory that is actually useful. This is a form of the utility problem [4, 2]. The approach taken here is to construct a "cost model" of a system that can be used to predict the effect of changes to the system. In this paper we describe the utility problem associated with "caching" cases and the construction of a "cost model". We present experimental results that demonstrate that the model can be used to predict the effect of certain changes to the case memory. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
178
test
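The cost model described in the record above can be captured by one expected-value formula. A minimal sketch with hypothetical cost parameters (retrieval is always paid, adaptation is paid on a usable hit, from-scratch solving on a miss); the real model in the paper predicts how these quantities change as the case memory grows.

```python
def avg_solution_cost(p_hit, c_retrieve, c_adapt, c_scratch):
    """Expected cost per problem for a case-caching reasoner.

    Caching pays off only while this stays below the plain
    from-scratch cost; adding cases raises p_hit but can also raise
    c_retrieve, which is the utility problem in one line.
    """
    return c_retrieve + p_hit * c_adapt + (1.0 - p_hit) * c_scratch

# e.g. avg_solution_cost(0.6, 0.1, 0.5, 2.0) == 1.2, versus 2.0 from scratch.
```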
1-hop neighbor's text information: The Neural Network House: An overview. : Typical home comfort systems utilize only rudimentary forms of energy management and conservation. The most sophisticated technology in common use today is an automatic setback thermostat. Tremendous potential remains for improving the efficiency of electric and gas usage. However, home residents who are ignorant of the physics of energy utilization cannot design environmental control strategies, but neither can energy management experts who are ignorant of the behavior patterns of the inhabitants. Adaptive control seems the only alternative. We have begun building an adaptive control system that can infer appropriate rules of operation for home comfort systems based on the lifestyle of the inhabitants and energy conservation goals. Recent research has demonstrated the potential of neural networks for intelligent control. We are constructing a prototype control system in an actual residence using neural network reinforcement learning and prediction techniques. The residence is equipped with sensors to provide information about environmental conditions (e.g., temperatures, ambient lighting level, sound and motion in each room) and actuators to control the gas furnace, electric space heaters, gas hot water heater, lighting, motorized blinds, ceiling fans, and dampers in the heating ducts. This paper presents an overview of the project as it now stands. 1-hop neighbor's text information: Predicting sunspots and exchange rates with connectionist networks. : We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation. The ultimate goal is prediction accuracy. We analyze two time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. Weight-elimination also manages to extract some part of the dynamics of the notoriously noisy currency exchange rates and makes the network solution interpretable. Target text information: Comparison of neural net and conventional techniques for lighting control. : We compare two techniques for lighting control in an actual room equipped with seven banks of lights and photoresistors to detect the lighting level at four sensing points. Each bank of lights can be independently set to one of sixteen intensity levels. The task is to determine the device intensity levels that achieve a particular configuration of sensor readings. One technique we explored uses a neural network to approximate the mapping between sensor readings and device intensity levels. The other technique we examined uses a conventional feedback control loop. The neural network approach appears superior both in that it does not require experimentation on the fly (and hence fluctuating light intensity levels during settling, and lengthy settling times) and in that it can deal with complex interactions that conventional control techniques do not handle well. This comparison was performed as part of the "Adaptive House" project, which is described briefly. Further directions for control in the I provide the content of the target node and its neighbors' information. 
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,789
test
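The lighting-control record above contrasts a learned sensor-to-device mapping with a conventional feedback loop. As a rough illustration of the feedback alternative it criticizes, the sketch below assumes a linear room response; the mixing matrix B, the gain eta, and all numbers are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical linear room model: 4 sensor readings from 7 device levels.
# B is an assumed mixing matrix, NOT taken from the paper.
rng = np.random.default_rng(0)
B = rng.uniform(0.0, 1.0, size=(4, 7))

def sense(u):
    """Sensor readings produced by device intensity levels u (0..15)."""
    return B @ u

def feedback_control(target, steps=200, eta=0.05):
    """Naive proportional feedback: nudge device levels to shrink the
    sensor-space error. This on-line experimentation is what the paper
    says causes fluctuating light levels and slow settling."""
    u = np.zeros(7)
    for _ in range(steps):
        err = target - sense(u)
        u = np.clip(u + eta * B.T @ err, 0, 15)  # keep levels in range
    return u

target = sense(rng.integers(0, 16, size=7))  # a reachable sensor config
u = feedback_control(target)
print(np.round(sense(u) - target, 3))        # residual error per sensor
```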
1-hop neighbor's text information: Dirichlet mixtures: A method for improving detection of weak but significant protein sequence homology. : This paper presents the mathematical foundations of Dirichlet mixtures, which have been used to improve database search results for homologous sequences, when a variable number of sequences from a protein family or domain are known. We present a method for condensing the information in a protein database into a mixture of Dirichlet densities. These mixtures are designed to be combined with observed amino acid frequencies, to form estimates of expected amino acid probabilities at each position in a profile, hidden Markov model, or other statistical model. These estimates give a statistical model greater generalization capacity, such that remotely related family members can be more reliably recognized by the model. Dirichlet mixtures have been shown to outperform substitution matrices and other methods for computing these expected amino acid distributions in database search, resulting in fewer false positives and false negatives for the families tested. This paper corrects a previously published formula for estimating these expected probabilities, and contains complete derivations of the Dirichlet mixture formulas, methods for optimizing the mixtures to match particular databases, and suggestions for efficient implementation. 1-hop neighbor's text information: Hidden Markov models in computational biology: Applications to protein modeling. : Hidden Markov Models (HMMs) are applied to the problems of statistical modeling, database searching and multiple sequence alignment of protein families and protein domains. These methods are demonstrated on the globin family, the protein kinase catalytic domain, and the EF-hand calcium binding motif. In each case the parameters of an HMM are estimated from a training set of unaligned sequences. After the HMM is built, it is used to obtain a multiple alignment of all the training sequences. It is also used to search the SWISS-PROT 22 database for other sequences that are members of the given protein family, or contain the given domain. The HMM produces multiple alignments of good quality that agree closely with the alignments produced by programs that incorporate three-dimensional structural information. When employed in discrimination tests (by examining how closely the sequences in a database fit the globin, kinase and EF-hand HMMs), the HMM is able to distinguish members of these families from non-members with a high degree of accuracy. Both the HMM and PROFILESEARCH (a technique used to search for relationships between a protein sequence and multiply aligned sequences) perform better in these tests than PROSITE (a dictionary of sites and patterns in proteins). The HMM appears to have a slight advantage 1-hop neighbor's text information: The megaprior heuristic for discovering protein sequence patterns. : Several computer algorithms for discovering patterns in groups of protein sequences are in use that are based on fitting the parameters of a statistical model to a group of related sequences. These include hidden Markov model (HMM) algorithms for multiple sequence alignment, and the MEME and Gibbs sampler algorithms for discovering motifs. These algorithms are sometimes prone to producing models that are incorrect because two or more patterns have been combined. The statistical model produced in this situation is a convex combination (weighted average) of two or more different models.
This paper presents a solution to the problem of convex combinations in the form of a heuristic based on using extremely low variance Dirichlet mixture priors as part of the statistical model. This heuristic, which we call the megaprior heuristic, increases the strength (i.e., decreases the variance) of the prior in proportion to the size of the sequence dataset. This causes each column in the final model to strongly resemble the mean of a single component of the prior, regardless of the size of the dataset. We describe the cause of the convex combination problem, analyze it mathematically, motivate and describe the implementation of the megaprior heuristic, and show how it can effectively eliminate the problem of convex combinations in protein sequence pattern discovery. Target text information: Minimum-Risk Profiles of Protein Families Based on Statistical Decision Theory: Statistical decision theory provides a principled way to estimate amino acid frequencies in conserved positions of a protein family. The goal is to minimize the risk function, or the expected squared-error distance between the estimates and the true population frequencies. The minimum-risk estimates are obtained by adding an optimal number of pseudocounts to the observed data. Two formulas are presented, one for pseudocounts based on marginal amino acid frequencies and one for pseudocounts based on the observed data. Experimental results show that profiles constructed using minimal-risk estimates are more discriminating than those constructed using existing methods. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
554
val
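The minimum-risk profile record above turns on adding pseudocounts to observed amino acid counts. A minimal sketch of the generic pseudocount estimator is given below; the background frequencies q and the pseudocount weight b are placeholders, and the paper's formulas for choosing the optimal b are not reproduced.

```python
import numpy as np

def pseudocount_estimate(counts, q, b):
    """Estimate position-specific frequencies from observed counts by
    mixing in b pseudocounts distributed as the background q:
        p_i = (n_i + b * q_i) / (N + b)
    Larger b pulls the estimate toward the background; the paper derives
    the b that minimizes the expected squared-error risk."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    return (counts + b * q) / (n + b)

# Toy example over a 4-letter alphabet (stand-in for 20 amino acids).
q = np.array([0.25, 0.25, 0.25, 0.25])   # assumed background frequencies
counts = np.array([6, 1, 0, 1])          # observed column counts
print(pseudocount_estimate(counts, q, b=2.0))
```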
1-hop neighbor's text information: Smoothing spline ANOVA with component-wise Bayesian "confidence intervals". : We study a multivariate smoothing spline estimate of a function of several variables, based on an ANOVA decomposition as sums of main effect functions (of one variable), two-factor interaction functions (of two variables), etc. We derive the Bayesian "confidence intervals" for the components of this decomposition and demonstrate that, even with multiple smoothing parameters, they can be efficiently computed using the publicly available code RKPACK, which was originally designed just to compute the estimates. We carry out a small Monte Carlo study to see how closely the actual properties of these component-wise confidence intervals match their nominal confidence levels. Lastly, we analyze some lake acidity data as a function of calcium concentration, latitude, and longitude, using both polynomial and thin plate spline main effects in the same model. 1-hop neighbor's text information: Smoothing spline ANOVA for exponential families, with application to the Wisconsin Epidemiological Study of Diabetic Retinopathy. : 1-hop neighbor's text information: Adapting to unknown smoothness via wavelet shrinkage. : We attempt to recover a function of unknown smoothness from noisy, sampled data. We introduce a procedure, SureShrink, which suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein Unbiased Estimate of Risk (Sure) for threshold estimates. The computational effort of the overall procedure is order N log(N) as a function of the sample size N. SureShrink is smoothness-adaptive: if the unknown function contains jumps, the reconstruction (essentially) does also; if the unknown function has a smooth piece, the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness-adaptive: it is near-minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoothing methods - kernels, splines, and orthogonal series estimates - even with optimal choices of the smoothing parameter, would be unable to perform in a near-minimax way over many spaces in the Besov scale. Acknowledgements. The first author was supported at U.C. Berkeley by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from the AT&T Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12, and by a grant from the AT&T Foundation. Target text information: : TECHNICAL REPORT NO. 947 June 5, 1995 I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
767
test
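The wavelet-shrinkage record above rests on thresholding empirical wavelet coefficients. Below is a minimal sketch of the soft-thresholding rule; the sigma*sqrt(2 log N) "universal" threshold shown is a common default, not the paper's SURE-minimizing, level-dependent threshold.

```python
import numpy as np

def soft_threshold(w, t):
    """Shrink coefficients toward zero: sign(w) * max(|w| - t, 0)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Toy noisy coefficients; the noise level sigma is assumed known here.
rng = np.random.default_rng(1)
true = np.array([4.0, 0.0, 0.0, -3.0, 0.0, 0.0, 0.0, 2.5])
w = true + rng.normal(0, 0.5, size=true.size)
t = 0.5 * np.sqrt(2 * np.log(w.size))   # universal threshold sigma*sqrt(2 log N)
print(np.round(soft_threshold(w, t), 2))
```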
1-hop neighbor's text information: Design Issues Towards PREENS, a Parallel Research Execution Environment for Neural Systems. : PREENS, a Parallel Research Execution Environment for Neural Systems, is a distributed neurosimulator targeted on networks of workstations and transputer systems. As current applications of neural networks often contain large amounts of data, and as the neural networks involved in tasks such as vision are very large, high requirements on memory and computational resources are imposed on the target execution platforms. PREENS can be executed in a distributed environment, i.e. tools and neural network simulation programs can run on any machine connectable via TCP/IP. Using this approach, larger tasks and more data can be examined using an efficient coarse-grained parallelism. Furthermore, the design of PREENS allows neural networks to run on any high-performance MIMD machine such as a transputer system. In this paper, the different features and design concepts of PREENS are discussed. These can also be used for other applications, like image processing. Target text information: Segmentation and Classification of Combined Optical and Radar Imagery: An evaluation of the classification performance of a neural network for combined six-band Landsat-TM and one-band ERS-1/SAR PRI imagery from the same scene is carried out. From different combinations of the data - either raw, segmented, or filtered - training and test sets are created using the available ground truth polygons. The training sets are used for learning while the test sets are used for verification of the neural network. The different combinations are evaluated here. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
211
test
1-hop neighbor's text information: Kanazawa, Reasoning about Time and Probability, : 1-hop neighbor's text information: Fall diagnosis using dynamic belief networks. : The task is to monitor walking patterns and give early warning of falls using foot switch and mercury trigger sensors. We describe a dynamic belief network model for fall diagnosis which, given evidence from sensor observations, outputs beliefs about the current walking status and makes predictions regarding future falls. The model represents possible sensor error and is parametrised to allow customisation to the individual being monitored. 1-hop neighbor's text information: The data association problem when monitoring robot vehicles using dynamic belief networks. : We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a Dynamic Belief Network; it is used to reason under uncertainty about both the causes and consequences of the events being monitored. The basic dynamic construction of the network is data-driven. However the model construction process combines sensor data about events with externally provided information about agents' behaviour, and knowledge already contained within the model, to control the size and complexity of the network. This means that both the network structure within a time interval, and the amount of history and detail maintained, can vary over time. We illustrate the system with the example domain of monitoring robot vehicles and people in a restricted dynamic environment using light-beam sensor data. In addition to presenting a generic network structure for monitoring domains, we describe the use of more complex network structures which address two specific monitoring problems, sensor validation and the Data Association Problem. Target text information: A case study in dynamic belief networks: monitoring walking, fall prediction and detection. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,083
test
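The fall-monitoring records above all revolve around propagating beliefs through a dynamic belief network as sensor evidence arrives. A minimal sketch of the underlying forward-filtering recursion on a two-state walking-status variable follows; the transition and observation probabilities are invented toy values.

```python
import numpy as np

# States: 0 = walking normally, 1 = fall risk. All numbers are toy values.
T = np.array([[0.95, 0.05],   # T[i, j] = P(next state j | current state i)
              [0.30, 0.70]])
O = np.array([[0.90, 0.10],   # O[j, e] = P(sensor evidence e | state j)
              [0.20, 0.80]])

def filter_step(belief, evidence):
    """One forward step: predict with T, weight by the evidence likelihood,
    renormalize. This is the basic recursion a DBN monitor runs per tick."""
    predicted = T.T @ belief
    updated = predicted * O[:, evidence]
    return updated / updated.sum()

belief = np.array([0.99, 0.01])
for e in [0, 0, 1, 1]:          # a short stream of foot-switch readings
    belief = filter_step(belief, e)
    print(np.round(belief, 3))
```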
1-hop neighbor's text information: A collection of algorithms for belief networks. : Portions of this report have been published in the Proceedings of the Fifteenth Annual Symposium on Computer Applications in Medical Care (November, 1991). Target text information: : Figure 9: Results for various optimizations. Figure 10: Results with and without Markov boundary scoring. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
869
val
1-hop neighbor's text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. © Massachusetts Institute of Technology, 1994. This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation. Support for the A. I. Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. 1-hop neighbor's text information: A practical Bayesian framework for backpropagation networks. : A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. The Bayesian approach helps detect poor underlying assumptions in learning models.
For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a). This framework is due to Gull and Skilling (Gull, 1989a). 1-hop neighbor's text information: Rasmussen (1996). Evaluation of Gaussian Processes and Other Methods for Nonlinear Regression. : Target text information: Efficient implementation of Gaussian processes for interpolation. : Neural networks and Bayesian inference provide a useful framework within which to solve regression problems. However, their parameterization means that the Bayesian analysis of neural networks can be difficult. In this paper, we investigate a method for regression using Gaussian process priors which allows exact Bayesian analysis using matrix manipulations. We discuss the workings of the method in detail. We will also detail a range of mathematical and numerical techniques that are useful in applying Gaussian processes to general problems including efficient approximate matrix inversion methods developed by Skilling. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
979
test
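The Gaussian-process record above reduces Bayesian regression to matrix manipulations. The sketch below shows the exact predictive mean and variance with a squared-exponential kernel; the hyperparameters are arbitrary, and the efficient approximate inversions mentioned in the abstract are replaced by a plain Cholesky solve.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(x, y, xs, noise=1e-2):
    """Exact GP regression: mean K*ᵀ(K + σ²I)⁻¹ y and predictive variance."""
    K = rbf(x, x) + noise * np.eye(len(x))
    Ks = rbf(x, xs)
    L = np.linalg.cholesky(K)                     # K = L Lᵀ
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(xs, xs) - v.T @ v                   # posterior covariance
    return mean, np.diag(var)

x = np.linspace(0, 5, 8)
y = np.sin(x)
xs = np.array([1.5, 2.5, 3.5])
mean, var = gp_predict(x, y, xs)
print(np.round(mean, 3), np.round(var, 4))
```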
1-hop neighbor's text information: Boolean Functions Fitness Spaces: We investigate the distribution of performance of the Boolean functions of 3 Boolean inputs (particularly that of the parity functions), the always-on-6 and even-6 parity functions. We use enumeration, uniform Monte-Carlo random sampling and sampling random full trees. As expected, XOR dramatically changes the fitness distributions. In all cases, once some minimum size threshold has been exceeded, the distribution of performance is approximately independent of program length. However the distribution of the performance of full trees is different from that of asymmetric trees and varies with tree depth. We consider but reject testing the No Free Lunch (NFL) theorems on these functions. 1-hop neighbor's text information: Why ants are hard. : The problem of programming an artificial ant to follow the Santa Fe trail is used as an example program search space. Previously reported genetic programming, simulated annealing and hill climbing performance is shown not to be much better than random search on the Ant problem. Analysis of the program search space in terms of fixed length schema suggests it is highly deceptive and that for the simplest solutions large building blocks must be assembled before they have above average fitness. In some cases we show solutions cannot be assembled using a fixed representation from small building blocks of above average fitness. This suggests the Ant problem is difficult for Genetic Algorithms. 1-hop neighbor's text information: Fitness causes bloat in variable size representations. : We argue, based upon the numbers of representations of given length, that increase in representation length is inherent in using a fixed evaluation function with a discrete but variable length representation. Two examples of this are analysed, including the use of Price's Theorem. Both examples confirm the tendency for solutions to grow in size is caused by fitness-based selection. Target text information: Fitness causes bloat: Mutation. : In many cases program lengths increase (known as "bloat", "fluff" and increasing "structural complexity") during artificial evolution. We show bloat is not specific to genetic programming and suggest it is inherent in search techniques with discrete variable length representations using simple static evaluation functions. We investigate the bloating characteristics of three non-population and one population based search techniques using a novel mutation operator. An artificial ant following the Santa Fe trail problem is solved by simulated annealing, hill climbing, strict hill climbing and population based search using two variants of the new subtree-based mutation operator. As predicted, bloat is observed when using unbiased mutation and is absent in simulated annealing and both hill climbers when using the length-neutral mutation; however, bloat occurs with both mutations when using a population. We conclude that there are two causes of bloat. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,100
test
1-hop neighbor's text information: Abduction, experience, and goals: A model of everyday abductive explanation. : 1-hop neighbor's text information: Adapter: an integrated diagnostic system combining case-based and abductive reasoning. : The aim of this paper is to describe the ADAPtER system, a diagnostic architecture combining case-based reasoning with abductive reasoning and exploiting the adaptation of the solution of old episodes, in order to focus the reasoning process. Domain knowledge is represented via a logical model and basic mechanisms, based on abductive reasoning with consistency constraints, have been defined for solving complex diagnostic problems involving multiple faults. The model-based component has been supplemented with a case memory and adaptation mechanisms have been developed, in order to make the diagnostic system able to exploit past experience in solving new cases. A heuristic function is proposed, able to rank the solutions associated with retrieved cases with respect to the adaptation effort needed to transform such solutions into possible solutions for the current case. We will discuss some preliminary experiments showing the validity of the above heuristic and the convenience of solving a new case by adapting a retrieved solution rather than solving the new problem from scratch. 1-hop neighbor's text information: Goal-based explanation evaluation. : I would like to thank my dissertation advisor, Roger Schank, for his very valuable guidance on this research, and to thank the Cognitive Science reviewers for their helpful comments on a draft of this paper. The research described here was conducted primarily at Yale University, supported in part by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under contract N0014-85-K-0108 and by the Air Force Office of Scientific Research under contract F49620-88-C-0058. Target text information: Focusing Construction and Selection of Abductive Hypotheses. : Many abductive understanding systems explain novel situations by a chaining process that is neutral to explainer needs beyond generating some plausible explanation for the event being explained. This paper examines the relationship of standard models of abductive understanding to the case-based explanation model. In case-based explanation, construction and selection of abductive hypotheses are focused by specific explanations of prior episodes and by goal-based criteria reflecting current information needs. The case-based method is inspired by observations of human explanation of anomalous events during everyday understanding, and this paper focuses on the method's contributions to the problems of building good explanations in everyday domains. We identify five central issues, compare how those issues are addressed in traditional and case-based explanation models, and discuss motivations for using the case-based approach to facilitate generation of plausible and useful explanations in domains that are complex and imperfectly understood. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2,616
test
1-hop neighbor's text information: Classifiers: A theoretical and empirical study. : This paper describes how a competitive tree learning algorithm can be derived from first principles. The algorithm approximates the Bayesian decision theoretic solution to the learning task. Comparative experiments with the algorithm and the several mature AI and statistical families of tree learning algorithms currently in use show the derived Bayesian algorithm is consistently as good or better, although sometimes at computational cost. Using the same strategy, we can design algorithms for many other supervised and model learning tasks given just a probabilistic representation for the kind of knowledge to be learned. As an illustration, a second learning algorithm is derived for learning Bayesian networks from data. Implications to incremental learning and the use of multiple models are also discussed. 1-hop neighbor's text information: Learning Concept Classification Rules Using Genetic Algorithms. : In this paper, we explore the use of genetic algorithms (GAs) as a key element in the design and implementation of robust concept learning systems. We describe and evaluate a GA-based system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The use of GAs is motivated by recent studies showing the effects of various forms of bias built into different concept learning systems, resulting in systems that perform well on certain concept classes (generally, those well matched to the biases) and poorly on others. By incorporating a GA as the underlying adaptive search mechanism, we are able to construct a concept learning system that has a simple, unified architecture with several important features. First, the system is surprisingly robust even with minimal bias. Second, the system can be easily extended to incorporate traditional forms of bias found in other concept learning systems. Finally, the architecture of the system encourages explicit representation of such biases and, as a result, provides for an important additional feature: the ability to dynamically adjust system bias. The viability of this approach is illustrated by comparing the performance of GABIL with that of four other more traditional concept learners (AQ14, C4.5, ID5R, and IACL) on a variety of target concepts. We conclude with some observations about the merits of this approach and about possible extensions. Target text information: Is Consistency Harmful?: We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a new goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. I provide the content of the target node and its neighbors' information. 
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
152
train
1-hop neighbor's text information: "Exploratory Modelling of Multiple Non-stationary Time Series: Latent Process Structure & Decompositions," in Modelling Longitudinal and Spatially Correlated Data, : We describe and illustrate Bayesian approaches to modelling and analysis of multiple non-stationary time series. This begins with univariate models for collections of related time series assumedly driven by underlying but unobservable processes, referred to as dynamic latent factor processes. We focus on models in which the factor processes, and hence the observed time series, are modelled by time-varying autoregressions capable of flexibly representing ranges of observed non-stationary characteristics. We highlight concepts and new methods of time series decomposition to infer characteristics of latent components in time series, and relate univariate decomposition analyses to underlying multivariate dynamic factor structure. Our motivating application is in analysis of multiple EEG traces from an ongoing EEG study at Duke. In this study, individuals undergoing ECT therapy generate multiple EEG traces at various scalp locations, and physiological interest lies in identifying dependencies and dissimilarities across series. In addition to the multivariate and non-stationary aspects of the series, this area provides illustration of the new results about decomposition of time series into latent, physically interpretable components; this is illustrated in data analysis of one EEG data set. The paper also discusses current and future research directions. * This research was supported in part by the National Science Foundation under grant DMS-9311071. The EEG data and context arose from discussions with Dr Andrew Krystal, of Duke University Medical Center, with whom continued interactions have been most valuable. Address for correspondence: Institute of Statistics and Decision Sciences, Duke University, Durham, NC 27708-0251 U.S.A. (http://www.stat.duke.edu) 1-hop neighbor's text information: "Priors and Component Structures in Autoregressive Time Series," : New approaches to prior specification and structuring in autoregressive time series models are introduced and developed. We focus on defining classes of prior distributions for parameters and latent variables related to latent components of an autoregressive model for an observed time series. These new priors naturally permit the incorporation of both qualitative and quantitative prior information about the number and relative importance of physically meaningful components that represent low frequency trends, quasi-periodic sub-processes, and high frequency residual noise components of observed series. The class of priors also naturally incorporates uncertainty about model order, and hence leads in posterior analysis to model order assessment and resulting posterior and predictive inferences that incorporate full uncertainties about model order as well as model parameters. Analysis also formally incorporates uncertainty, and leads to inferences about, unknown initial values of the time series, as it does for predictions of future values. Posterior analysis involves easily implemented iterative simulation methods, developed and described here. One motivating applied field is climatology, where the evaluation of latent structure, especially quasi-periodic structure, is of critical importance in connection with issues of global climatic variability.
We explore analysis of data from the Southern Oscillation Index (SOI), one of several series that has been central in recent high-profile debates in the atmospheric sciences about recent apparent trends in climatic indicators. Target text information: "Bayesian Inference on Periodicities and Component Spectral Structure in Time Series," : Summary: We detail and illustrate time series analysis and spectral inference in autoregressive models with a focus on the underlying latent structure and time series decompositions. A novel class of priors on parameters of latent components leads to a new class of smoothness priors on autoregressive coefficients, provides for formal inference on model order, including very high order models, and leads to the incorporation of uncertainty about model order into summary inferences. The class of prior models also allows for subsets of unit roots, and hence leads to inference on sustained though stochastically time-varying periodicities in time series. Applications to analysis of the frequency composition of time series, in both time and spectral domains, are illustrated in a study of a time series from astronomy. This analysis demonstrates the impact and utility of the new class of priors in addressing model order uncertainty and in allowing for unit root structure. Time domain decomposition of a time series into estimated latent components provides an important alternative view of the component spectral characteristics of a series. In addition, our data analysis illustrates the utility of the smoothness prior and allowance for unit root structure in inference about spectral densities. In particular, the framework overcomes supposed problems in spectral estimation with autoregressive models using more traditional model fitting methods. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,004
test
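The time-series records above lean on decomposing an autoregression into latent quasi-periodic components. A minimal sketch of the standard device follows: the eigenvalues of the AR companion matrix give each component's modulus and frequency. The AR coefficients are arbitrary toy values.

```python
import numpy as np

def ar_components(phi):
    """Eigen-decompose the companion matrix of an AR(p) model.
    Complex eigenvalue pairs r*exp(±iω) correspond to quasi-periodic
    latent components with period 2π/ω; real eigenvalues give
    trend/noise components."""
    p = len(phi)
    C = np.zeros((p, p))
    C[0, :] = phi              # first row holds the AR coefficients
    C[1:, :-1] = np.eye(p - 1) # subdiagonal shifts the state vector
    for lam in np.linalg.eigvals(C):
        r, w = np.abs(lam), np.angle(lam)
        if w > 1e-9:           # report each conjugate pair once
            print(f"periodic: modulus={r:.3f} period={2*np.pi/w:.2f}")
        elif abs(w) <= 1e-9:
            print(f"real:     modulus={r:.3f}")

ar_components([1.5, -0.9, 0.1])  # toy AR(3) coefficients
```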
1-hop neighbor's text information: Experiments with a New Boosting Algorithm. : In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem. 1-hop neighbor's text information: "A Comparative Study of ID3 and Backpropagation for English Text-to-Speech Mapping," : The performance of the error backpropagation (BP) and ID3 learning algorithms was compared on the task of mapping English text to phonemes and stresses. Under the distributed output code developed by Sejnowski and Rosenberg, it is shown that BP consistently outperforms ID3 on this task by several percentage points. Three hypotheses explaining this difference were explored: (a) ID3 is overfitting the training data, (b) BP is able to share hidden units across several output units and hence can learn the output units better, and (c) BP captures statistical information that ID3 does not. We conclude that only hypothesis (c) is correct. By augmenting ID3 with a simple statistical learning procedure, the performance of BP can be approached but not matched. More complex statistical procedures can improve the performance of both BP and ID3 substantially. A study of the residual errors suggests that there is still substantial room for improvement in learning methods for text-to-speech mapping. Target text information: Achieving High-Accuracy Text-to-Speech with Machine Learning: In 1987, Sejnowski and Rosenberg developed their famous NETtalk system for English text-to-speech. This chapter describes a machine learning approach to text-to-speech that builds upon and extends the initial NETtalk work. Among the many extensions to the NETtalk system were the following: a different learning algorithm, a wider input "window", error-correcting output coding, a right-to-left scan of the word to be pronounced (with the results of each decision influencing subsequent decisions), and the addition of several useful input features. These changes yielded a system that performs much better than the original NETtalk system. After training on 19,002 words, the system achieves 93.7% correct pronunciation of individual phonemes and 64.8% correct pronunciation of whole words (where the pronunciation must exactly match the dictionary pronunciation to be correct). Based on the judgements of three human participants in a blind assessment study, our system was estimated to have a serious error rate of 16.7% (on whole words) compared to an error rate of 26.1% for the DECTalk3.0 rulebase. I provide the content of the target node and its neighbors' information.
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
456
test
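The text-to-speech record above extends NETtalk with error-correcting output coding. Below is a minimal sketch of ECOC decoding; the code matrix and bit predictions are invented, and in a real system each code bit would be predicted by its own learned binary classifier.

```python
import numpy as np

# Hypothetical 4-class code matrix: one row of code bits per class.
# In ECOC, each column is learned by a separate binary classifier.
code = np.array([[0, 0, 1, 1, 0, 1],
                 [0, 1, 0, 1, 1, 0],
                 [1, 0, 0, 0, 1, 1],
                 [1, 1, 1, 0, 0, 0]])

def ecoc_decode(bits):
    """Pick the class whose codeword is nearest in Hamming distance.
    A nonzero distance means some bit classifiers erred, which the
    code's redundancy can absorb."""
    dist = np.abs(code - bits).sum(axis=1)
    return int(np.argmin(dist)), dist

bits = np.array([0, 1, 0, 1, 0, 0])     # predicted bits, one flipped
cls, dist = ecoc_decode(bits)
print(cls, dist)                         # class 1 despite the bit error
```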
1-hop neighbor's text information: An incremental interactive algorithm for regular grammar inference. : We present an efficient incremental algorithm for learning deterministic finite state automata (DFA) from labeled examples and membership queries. This algorithm is an extension of Angluin's ID procedure to an incremental framework. The learning algorithm is intermittently provided with labeled examples and has access to a knowledgeable teacher capable of answering membership queries. The learner constructs an initial hypothesis from the given set of labeled examples and the teacher's responses to membership queries. If an additional example observed by the learner is inconsistent with the current hypothesis then the hypothesis is modified minimally to make it consistent with the new example. The update procedure ensures that the modified hypothesis is consistent with all examples observed thus far. The algorithm is guaranteed to converge to a minimum state DFA corresponding to the target when the set of examples observed by the learner includes a live complete set. We prove the convergence of this algorithm and analyze its time and space complexities. 1-hop neighbor's text information: Planning and Learning in an Adversarial Robotic Game: 1 This paper demonstrates the tandem use of a finite automata learning algorithm and a utility planner for an adversarial robotic domain. For many applications, robot agents need to predict the movement of objects in the environment and plan to avoid them. When the robot has no reasoning model of the object, machine learning techniques can be used to generate one. In our project, we learn a DFA model of an adversarial robot and use the automaton to predict the next move of the adversary. The robot agent plans a path to avoid the adversary at the predicted location while fulfilling the goal requirements. 1-hop neighbor's text information: Query, pacs and simple-pac learning. : We study a distribution dependent form of PAC learning that uses probability distributions related to Kolmogorov complexity. We relate the PACS model, defined by Denis, D'Halluin and Gilleron in [3], with the standard simple-PAC model and give a general technique that subsumes the results in [3] and [6]. Target text information: Learning dfa from simple examples. : We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an interesting variant of an open research question posed in Pitt's seminal paper: Are DFA's PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N ) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,093
test
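The DFA-learning records above share one primitive: a hypothesis automaton must agree with every labeled example observed so far. A minimal sketch of that consistency check follows, with a toy DFA that accepts strings containing an even number of 'a's; RPNI's state-merging search itself is not reproduced.

```python
# Toy DFA over {a, b}: states 0/1 track parity of 'a's; state 0 accepts.
transitions = {(0, "a"): 1, (0, "b"): 0,
               (1, "a"): 0, (1, "b"): 1}
accepting = {0}

def accepts(dfa, acc, s):
    """Run the DFA from state 0 and report acceptance."""
    state = 0
    for ch in s:
        state = dfa[(state, ch)]
    return state in acc

def consistent(dfa, acc, sample):
    """A hypothesis is kept only if it agrees with every labeled string;
    learners like RPNI merge states and back off whenever this fails."""
    return all(accepts(dfa, acc, s) == label for s, label in sample)

sample = [("", True), ("a", False), ("aa", True), ("ab", False), ("ba", False)]
print(consistent(transitions, accepting, sample))
```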
1-hop neighbor's text information: Paying attention to the right things: Issues of focus in case-based creative design. : Case-based reasoning can be used to explain many creative design processes, since much creativity stems from using old solutions in novel ways. To understand the role cases play, we conducted an exploratory study of a seven-week student creative design project. This paper discusses the observations we made and the issues that arise in understanding and modeling creative design processes. We found particularly interesting the role of imagery in reminding and in evaluating design options. This included visualization, mental simulation, gesturing, and even sound effects. An important class of issues we repeatedly encounter in our modeling efforts concerns the focus of the designer. (For example, which problem constraints should be reformulated? Which evaluative issues should be raised?) Cases help to address these focus issues. 1-hop neighbor's text information: Integrating reading and creativity: A functional approach. : Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories. 1-hop neighbor's text information: Introspective Reasoning using Meta-Explanations for Multistrategy Learning. : In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge of the reasoning performed by the system during the performance task, of the system's knowledge, and of the organization of this knowledge. This chapter presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task. Target text information: A Model of Creative Understanding, : Although creativity has largely been studied in problem solving contexts, creativity consists of both a generative component and a comprehension component. In particular, creativity is an essential part of reading and understanding of natural language stories.
We have formalized the understanding process and have developed an algorithm capable of producing creative understanding behavior. We have also created a novel knowledge organization scheme to assist the process. Our model of creativity is implemented as a portion of the ISAAC (Integrated Story Analysis And Creativity) reading system, a system which models the creative reading of science fiction stories. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,750
test
1-hop neighbor's text information: A probabilistic calculus of actions. : We present a symbolic machinery that admits both probabilistic and causal information about a given domain, and produces probabilistic statements about the effect of actions and the impact of observations. The calculus admits two types of conditioning operators: ordinary Bayes conditioning, P(y | X = x), which represents the observation X = x, and causal conditioning, P(y | do(X = x)), read: the probability of Y = y conditioned on holding X constant (at x) by deliberate action. Given a mixture of such observational and causal sentences, together with the topology of the causal graph, the calculus derives new conditional probabilities of both types, thus enabling one to quantify the effects of actions and observations. 1-hop neighbor's text information: A theory of inferred causation. : This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data, the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation. Target text information: Bayesian Networks: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
874
val
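The causal-calculus record above distinguishes observing X = x from setting X = x by action. The sketch below illustrates that distinction by brute-force enumeration in a toy network Z -> X -> Y with a confounding arc Z -> Y; all probabilities are invented.

```python
# Toy CPTs for binary Z -> X -> Y with Z -> Y as well. Invented numbers.
pZ = {0: 0.6, 1: 0.4}
pX_given_Z = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}   # pX_given_Z[z][x]
pY_given_XZ = {(0, 0): 0.1, (0, 1): 0.5,                   # P(Y=1 | x, z)
               (1, 0): 0.6, (1, 1): 0.9}

def p_y_given_obs_x(x):
    """Ordinary conditioning P(Y=1 | X=x): renormalize the joint,
    so Z is weighted by P(z | x)."""
    num = den = 0.0
    for z in (0, 1):
        w = pZ[z] * pX_given_Z[z][x]
        num += w * pY_given_XZ[(x, z)]
        den += w
    return num / den

def p_y_given_do_x(x):
    """Causal conditioning P(Y=1 | do(X=x)): cut the Z -> X edge and
    average over P(z) instead of P(z | x) (back-door adjustment)."""
    return sum(pZ[z] * pY_given_XZ[(x, z)] for z in (0, 1))

# The two quantities differ (0.78 vs 0.66), showing confounding by Z.
print(round(p_y_given_obs_x(1), 3), round(p_y_given_do_x(1), 3))
```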
1-hop neighbor's text information: On the convergence properties of the EM algorithm. : In this article we investigate the relationship between the two popular algorithms, the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler by Gaussian approximation is equal to that of the corresponding EM type algorithm. This helps in implementing either of the algorithms as improvement strategies for one algorithm can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also obtain a result that under conditions, the EM algorithm used for finding the maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference which uses proper prior distributions. We illustrate our results in a number of realistic examples all based on the generalized linear mixed models. Target text information: (1997) Bayesian Estimation and Model Choice in the Item Response Models. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,295
val
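The EM/Gibbs record above suggests that the EM iteration count approximates how long the matching Gibbs sampler needs to converge. Below is a minimal, self-contained EM loop for a two-component Gaussian mixture with known unit variances that reports its iteration count; the data and tolerance are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

def em_two_gaussians(x, tol=1e-6, max_iter=500):
    """EM for a 2-component mixture with unit variances: the E-step
    computes responsibilities, the M-step re-estimates the means and
    mixing weight. The iteration count is the quantity the cited paper
    relates to the Gibbs sampler's convergence rate."""
    mu, w = np.array([-1.0, 1.0]), 0.5
    for it in range(1, max_iter + 1):
        # E-step: responsibility of component 0 for each point
        d0 = w * np.exp(-0.5 * (x - mu[0]) ** 2)
        d1 = (1 - w) * np.exp(-0.5 * (x - mu[1]) ** 2)
        r0 = d0 / (d0 + d1)
        # M-step: weighted means and mixing weight
        new_mu = np.array([(r0 * x).sum() / r0.sum(),
                           ((1 - r0) * x).sum() / (1 - r0).sum()])
        new_w = r0.mean()
        if np.abs(new_mu - mu).max() < tol and abs(new_w - w) < tol:
            return new_mu, new_w, it
        mu, w = new_mu, new_w
    return mu, w, max_iter

mu, w, iters = em_two_gaussians(x)
print(np.round(mu, 3), round(w, 3), iters)
```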
1-hop neighbor's text information: Combining neural and symbolic learning to revise probabilistic rule bases. : This paper describes Rapture, a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a probabilistic rule base and it uses ID3's information-gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods. Target text information: Modifying Network Architectures for Certainty-Factor Rule-Base Revision: This paper describes Rapture, a system for revising probabilistic rule bases that converts symbolic rules into a connectionist network, which is then trained via connectionist techniques. It uses a modified version of backpropagation to refine the certainty factors of the rule base, and uses ID3's information-gain heuristic (Quinlan, 1986) to add new rules. Work is currently under way for finding improved techniques for modifying network architectures that include adding hidden units using the UPSTART algorithm (Frean, 1990). A case is made via comparison with fully connected connectionist techniques for keeping the rule base as close to the original as possible, adding new input units only as needed. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
257
test
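The Rapture records above refine rule certainty factors by backpropagating through the certainty-factor combination function. A minimal sketch of one such gradient refinement on two rules feeding a single conclusion follows; the probabilistic-sum combiner and all numbers are simplifying assumptions, not Rapture's exact update.

```python
def combine(c1, c2):
    """MYCIN-style combination of two positive certainty factors."""
    return c1 + c2 - c1 * c2

def gradient_step(cf, acts, target, lr=0.1):
    """One step of gradient descent on the squared error of the combined
    CF. Each rule contributes cf[i] * acts[i]; the combiner is
    differentiable, which is what lets a backprop-style refiner tune
    the rule base."""
    c = [cf[i] * acts[i] for i in range(2)]
    pred = combine(c[0], c[1])
    err = pred - target
    # d(pred)/d(cf[0]) = acts[0] * (1 - c[1]), and symmetrically for cf[1]
    grads = [err * acts[0] * (1 - c[1]), err * acts[1] * (1 - c[0])]
    return [max(0.0, min(1.0, cf[i] - lr * grads[i])) for i in range(2)]

cf = [0.9, 0.8]            # initial rule certainty factors
acts = [1.0, 1.0]          # both rule antecedents fully satisfied
for _ in range(50):
    cf = gradient_step(cf, acts, target=0.7)
print([round(v, 3) for v in cf])
```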
1-hop neighbor's text information: Simple Synchrony Networks: Learning Generalisations across Syntactic Constituents: This paper describes a training algorithm for Simple Synchrony Networks (SSNs), and reports on experiments in language learning using a recursive grammar. The SSN is a new connectionist architecture combining a technique for learning about patterns across time, Simple Recurrent Networks (SRNs), with Temporal Synchrony Variable Binding (TSVB). The use of TSVB means the SSN can learn about entities in the training set, and generalise this information to entities in the test set. In the experiments, the network is trained on sentences with up to one embedded clause, and with some words restricted to certain classes of constituent. During testing, the network generalises information learned to sentences with up to three embedded clauses, and with words appearing in any constituent. These results demonstrate that SSNs learn generalisations across syntactic constituents. Target text information: Natural language grammatical inference: A comparison of recurrent neural networks and machine learning methods. : This paper examines the inductive inference of a complex grammar with neural networks; specifically, the task considered is that of training a network to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. Neural networks are trained, without the division into learned vs. innate components assumed by Chomsky, in an attempt to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. How a recurrent neural network could possess linguistic capability, and the properties of various common recurrent neural network architectures are discussed. The problem exhibits training behavior which is often not present with smaller grammars, and training was initially difficult. However, after implementing several techniques aimed at improving the convergence of the gradient descent backpropagation-through-time training algorithm, significant learning was possible. It was found that certain architectures are better able to learn an appropriate grammar. The operation of the networks and their training is analyzed. Finally, the extraction of rules in the form of deterministic finite state automata is investigated. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,831
test
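Both abstracts in the record above build on simple recurrent (Elman-style) networks. The sketch below shows only the defining forward pass, with the previous hidden state fed back as context at each step; the dimensions and weights are arbitrary and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid, n_out = 5, 8, 2                 # toy layer sizes
W_in = rng.normal(0, 0.5, (n_hid, n_in))
W_ctx = rng.normal(0, 0.5, (n_hid, n_hid))   # context (recurrent) weights
W_out = rng.normal(0, 0.5, (n_out, n_hid))

def srn_forward(seq):
    """Process a sequence one symbol at a time; the previous hidden state
    acts as the 'context layer' that gives the SRN its memory."""
    h = np.zeros(n_hid)
    for x in seq:
        h = np.tanh(W_in @ x + W_ctx @ h)
    return W_out @ h                          # logits after the final symbol

seq = [np.eye(n_in)[i] for i in [0, 3, 1, 4]]   # one-hot toy "sentence"
print(np.round(srn_forward(seq), 3))
```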
1-hop neighbor's text information: Generalizations of the bias/variance decomposition for prediction error. : The bias and variance of a real-valued random variable, using squared error loss, are well understood. However because of recent developments in classification techniques it has become desirable to extend these concepts to general random variables and loss functions. The 0-1 (misclassification) loss function with categorical random variables has been of particular interest. We explore the concepts of variance and bias and develop a decomposition of the prediction error into functions of the systematic and variable parts of our predictor. After providing some examples we conclude with a discussion of the various definitions that have been proposed. 1-hop neighbor's text information: Why does Bagging Work? A Bayesian Account and its Implications. : The error rate of decision-tree and other classification learners can often be much reduced by bagging: learning multiple models from bootstrap samples of the database, and combining them by uniform voting. In this paper we empirically test two alternative explanations for this, both based on Bayesian learning theory: (1) bagging works because it is an approximation to the optimal procedure of Bayesian model averaging, with an appropriate implicit prior; (2) bagging works because it effectively shifts the prior to a more appropriate region of model space. All the experimental evidence contradicts the first hypothesis, and confirms the second. Bagging (Breiman 1996a) is a simple and effective way to reduce the error rate of many classification learning algorithms. For example, in the empirical study described below, it reduces the error of a decision-tree learner in 19 of 26 databases, by 4% on average. In the bagging procedure, given a training set of size s, a "bootstrap" replicate of it is constructed by taking s samples with replacement from the training set. Thus a new training set of the same size is produced, where each of the original examples may appear once, more than once, or not. On average, 63% of the original examples will appear in the bootstrap sample. The learning algorithm is then applied to this training set. This procedure is repeated m times, and the resulting m models are aggregated by uniform voting. Bagging is one of several "multiple model" approaches that have recently received much attention (see, for example, (Chan, Stolfo, & Wolpert 1996)). Other procedures of this type include boosting (Freund & Schapire 1996) and stacking (Wolpert 1992). 1-hop neighbor's text information: Training methods for adaptive boosting of neural networks for character recognition. : Technical Report #1072, Département d'Informatique et Recherche Opérationnelle, Université de Montréal. Abstract: Boosting is a general method for improving the performance of any learning algorithm that consistently generates classifiers which need to perform only slightly better than random guessing. A recently proposed and very promising boosting algorithm is AdaBoost [5]. It has been applied with great success to several benchmark machine learning problems using rather simple learning algorithms [4], in particular decision trees [1, 2, 6]. In this paper we use AdaBoost to improve the performances of neural networks applied to character recognition tasks. We compare training methods based on sampling the training set and weighting the cost function.
Our system achieves about 1.4% error on a database of online handwritten digits from more than 200 writers. Adaptive boosting of a multi-layer network achieved 2% error on the UCI Letters offline characters data set. Target text information: Experiments with a New Boosting Algorithm. : In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,462
test
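The bagging record above spells the procedure out: draw a bootstrap replicate of size s, fit a model, repeat m times, and aggregate by uniform voting. The sketch below follows that description literally, with a decision stump as an assumed stand-in base learner and a made-up toy dataset; neither is taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Exhaustively pick the (feature, threshold, sign) stump with fewest errors."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] > t, sign, -sign)
                err = np.mean(pred != y)
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best[1:]

def predict_stump(stump, X):
    j, t, sign = stump
    return np.where(X[:, j] > t, sign, -sign)

def bag(X, y, m=25):
    """Learn m models, each on a bootstrap replicate of the training set."""
    s = len(y)
    models = []
    for _ in range(m):
        idx = rng.integers(0, s, size=s)   # sample s examples with replacement
        models.append(fit_stump(X[idx], y[idx]))
    return models

def vote(models, X):
    """Aggregate the m models by uniform voting."""
    votes = sum(predict_stump(st, X) for st in models)
    return np.sign(votes)

# Tiny illustrative dataset: label is the sign of the first feature plus noise.
X = rng.normal(size=(200, 3))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=200))
models = bag(X, y)
print("training accuracy:", np.mean(vote(models, X) == y))
```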
1-hop neighbor's text information: "Coevolving High Level Representations," : 1-hop neighbor's text information: The evolution of communication schemes over continuous channaels. : Many problems impede the design of multi-agent systems, not the least of which is the passing of information between agents. While others hand implement communication routes and semantics, we explore a method by which communication can evolve. In the experiments described here, we model agents as connectionist networks. We supply each agent with a number of communications channels implemented by the addition of both input and output units for each channel. The output units initiate environmental signals whose amplitude decay over distance and are perturbed by environmental noise. An agent does not receive input from other individuals, rather the agents input reects the summation of all other agents output signals along that channel. Because we use real-valued activations, the agents communicate using real-valued vectors. Under our evolutionary program, GNARL, the agents coevolve a communication scheme over continuous channels which conveys task-spe cific information. 1-hop neighbor's text information: Fool\'s gold: Extracting finite state machines from recurrent network dynamics. : Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network, the next step is to understand the information processing carried out by the network. Some researchers (Giles et al., 1992; Watrous & Kuhn, 1992; Cleeremans et al., 1989) have resorted to extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes two conditions, sensitivity to initial conditions and frivolous computational explanations due to discrete measurements (Kolen & Pollack, 1993), which allow these extraction methods to return illusionary finite state descriptions. Target text information: The observer\'s paradox: Apparent computational complexity in physical systems. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,847
val
1-hop neighbor's text information: Applications of a logical discovery engine. : The clausal discovery engine claudien is presented. claudien discovers regularities in data and is a representative of the inductive logic programming paradigm. As such, it represents data and regularities by means of first order clausal theories. Because the search space of clausal theories is larger than that of attribute value representation, claudien also accepts as input a declarative specification of the language bias, which determines the set of syntactically well-formed regularities. Whereas other papers on claudien focus on the semantics or logical problem specification of claudien, on the discovery algorithm, or on the PAC-learning aspects, this paper aims to illustrate the power of the resulting technique. In order to achieve this aim, we show how claudien can be used to learn 1) integrity constraints in databases, 2) functional dependencies and determinations, 3) properties of sequences, 4) mixed quantitative and qualitative laws, 5) reverse engineering, and 6) classification rules. 1-hop neighbor's text information: Multi-class problems and discretization in ICL (extended abstract). : Handling multi-class problems and real numbers is important in practical applications of machine learning to KDD problems. While attribute-value learners address these problems as a rule, very few ILP systems do so. The few ILP systems that handle real numbers mostly do so by trying out all real values that are applicable, thus running into efficiency or overfitting problems. This paper discusses some recent extensions of ICL that address these problems. ICL, which stands for Inductive Constraint Logic, is an ILP system that learns first order logic formulae from positive and negative examples. The main characteristic of ICL is its view on examples. These are seen as interpretations which are true or false for the clausal target theory (in CNF). We first argue that ICL can be used for learning a theory in a disjunctive normal form (DNF). With this in mind, a possible solution for handling more than two classes is given (based on some ideas from CN2). Finally, we show how to tackle problems with continuous values by adapting discretization techniques from attribute value learners. 1-hop neighbor's text information: Inductive Constraint Logic. : A novel approach to learning first order logic formulae from positive and negative examples is presented. Whereas present inductive logic programming systems employ examples as true and false ground facts (or clauses), we view examples as interpretations which are true or false for the target theory. This viewpoint allows us to reconcile the inductive logic programming paradigm with classical attribute value learning in the sense that the latter is a special case of the former. Because of this property, we are able to adapt AQ and CN2 type algorithms in order to enable learning of full first order formulae. However, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form, we will use a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples. This representation duality also reverses the role of positive and negative examples, both in the heuristics and in the algorithm. The resulting theory is incorporated in a system named ICL (Inductive Constraint Logic). Target text information: Inductive constraint logic and the mutagenesis problem.
: A novel approach to learning first order logic formulae from positive and negative examples is incorporated in a system named ICL (Inductive Constraint Logic). In ICL, examples are viewed as interpretations which are true or false for the target theory, whereas in present inductive logic programming systems, examples are true and false ground facts (or clauses). Furthermore, ICL uses a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form. We present some experiments with this new system on the mutagenesis problem. These experiments illustrate some of the differences with other systems, and indicate that our approach should work at least as well as the more classical approaches. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
2,646
test
1-hop neighbor's text information: (1991) Learning polynomial functions by feature construction. : We present a method for learning higher-order polynomial functions from examples using linear regression and feature construction. Regression is used on a set of training instances to produce a weight vector for a linear function over the feature set. If this hypothesis is imperfect, a new feature is constructed by forming the product of the two features that most effectively predict the squared error of the current hypothesis. The algorithm is then repeated. In an extension to this method, the specific pair of features to combine is selected by measuring their joint ability to predict the hypothesis' error. 1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990. 1-hop neighbor's text information: Multivariate versus univariate decision trees. : COINS Technical Report 92-8 January 1992 Abstract In this paper we present a new multivariate decision tree algorithm LMDT, which combines linear machines with decision trees. LMDT constructs each test in a decision tree by training a linear machine and then eliminating irrelevant and noisy variables in a controlled manner. To examine LMDT's ability to find good generalizations we present results for a variety of domains. We compare LMDT empirically to a univariate decision tree algorithm and observe that when multivariate tests are the appropriate bias for a given data set, LMDT finds small accurate trees. Target text information: Multivariate Decision Trees: COINS Technical Report 92-82 December 1992 Abstract Multivariate decision trees overcome a representational limitation of univariate decision trees: univariate decision trees are restricted to splits of the instance space that are orthogonal to the feature's axis. This paper discusses the following issues for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. We present some new and review some well-known methods for forming multivariate decision trees. The methods are compared across a variety of learning tasks to assess each method's ability to find concise, accurate decision trees. The results demonstrate that some multivariate methods are more effective than others. In addition, the experiments confirm that allowing multivariate tests improves the accuracy of the resulting decision tree over univariate trees. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
254
test
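The first abstract in the preceding record describes an explicit loop: fit a linear function by regression and, while the hypothesis is imperfect, construct a new feature as the product of the feature pair that best accounts for the squared error. A minimal sketch of that loop follows; for robustness the pair is scored here by the error reduction the candidate product actually achieves when appended, a simplification of the paper's scoring, and the hidden polynomial is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_mse(X, y):
    """Least-squares fit (bias column appended); return weights and mean squared error."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w, np.mean((y - Xb @ w) ** 2)

def best_product(X, y):
    """Greedily pick the feature pair whose product most reduces the refit error."""
    best_mse, best_pair = np.inf, None
    for i in range(X.shape[1]):
        for j in range(i, X.shape[1]):
            cand = np.hstack([X, (X[:, i] * X[:, j])[:, None]])
            _, mse = fit_mse(cand, y)
            if mse < best_mse:
                best_mse, best_pair = mse, (i, j)
    return best_pair

# Hidden target polynomial (illustrative): y = 3*x0*x1 + x2.
X = rng.normal(size=(300, 3))
y = 3 * X[:, 0] * X[:, 1] + X[:, 2]

for step in range(3):
    _, mse = fit_mse(X, y)
    print(f"step {step}: mse = {mse:.6f}")
    if mse < 1e-12:
        break
    i, j = best_product(X, y)
    X = np.hstack([X, (X[:, i] * X[:, j])[:, None]])  # construct the new feature
```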
1-hop neighbor's text information: Statistical mechanics of nonlinear nonequilibrium financial markets, : The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent 1-hop neighbor's text information: Adaptive Simulated Annealing (ASA), : 1-hop neighbor's text information: Canonical momenta indicators of financial markets and neocortical EEG, : A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. This approach to complex systems emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management. Target text information: Volatility of Volatility of Financial Markets: We present empirical evidence for considering volatility of Eurodollar futures as a stochastic process, requiring a generalization of the standard Black-Scholes (BS) model which treats volatility as a constant. We use a previous development of a statistical mechanics of financial markets (SMFM) to model these issues. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
402
test
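The preceding record leans on Adaptive Simulated Annealing for maximum-likelihood fits. As a rough illustration of the family only, here is plain Metropolis-style simulated annealing on a placeholder multimodal objective standing in for a negative log-likelihood; this is generic SA, not Ingber's ASA reannealing scheme, and every constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Placeholder objective standing in for a negative log-likelihood."""
    return np.sum(x**2) + 3 * np.sum(np.sin(3 * x) ** 2)  # multimodal surface

x = rng.normal(size=4)
best, best_c = x.copy(), cost(x)
T = 1.0
for step in range(5000):
    cand = x + rng.normal(0, np.sqrt(T), x.shape)   # temperature-scaled proposal
    dc = cost(cand) - cost(x)
    if dc < 0 or rng.random() < np.exp(-dc / T):    # Metropolis acceptance rule
        x = cand
        if cost(x) < best_c:
            best, best_c = x.copy(), cost(x)
    T = 1.0 / (1 + 0.01 * step)                     # slow cooling schedule
print("best cost found:", round(best_c, 4))
```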
1-hop neighbor's text information: A hierarchical ensemble of decision trees applied to classifying data from a psychological experiment: Classifying complex data from psychology experiments by hand can be a long and difficult task, because of the quantity of data to classify and the amount of training it may require. One way to alleviate this problem is to use machine learning techniques. We built a classifier based on decision trees that reproduces the classifying process used by two humans on a sample of data and that learns how to classify unseen data. The automatic classifier proved to be more accurate, more consistent and much faster than classification by hand. 1-hop neighbor's text information: Decision Trees: Equivalence and Propositional Operations: For the well-known concept of decision trees as it is used for inductive inference we study the natural concept of equivalence: two decision trees are equivalent if and only if they represent the same hypothesis. We present a simple efficient algorithm to establish whether two decision trees are equivalent or not. The complexity of this algorithm is bounded by the product of the sizes of both decision trees. The hypothesis represented by a decision tree is essentially a boolean function, just like a proposition. Although every boolean function can be represented in this way, we show that disjunctions and conjunctions of decision trees cannot efficiently be represented as decision trees, and simply shaped propositions may require exponential size for representation as decision trees. Target text information: Machine learning research: Four current directions. : Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up supervised learning algorithms, (c) reinforcement learning, and (d) learning complex stochastic models. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,620
test
1-hop neighbor's text information: Bilinear separation of two sets in n-space. : The NP-complete problem of determining whether two disjoint point sets in the n-dimensional real space R^n can be separated by two planes is cast as a bilinear program, that is, minimizing the scalar product of two linear functions on a polyhedral set. The bilinear program, which has a vertex solution, is processed by an iterative linear programming algorithm that terminates in a finite number of steps at a point satisfying a necessary optimality condition or at a global minimum. Encouraging computational experience on a number of test problems is reported. 1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models 1-hop neighbor's text information: Pattern recognition via linear programming: Theory and application to medical diagnosis. : A decision problem associated with a fundamental nonconvex model for linearly inseparable pattern sets is shown to be NP-complete. Another nonconvex model that employs a 1-norm instead of the 2-norm can be solved in polynomial time by solving 2n linear programs, where n is the (usually small) dimensionality of the pattern space. An effective LP-based finite algorithm is proposed for solving the latter model. The algorithm is employed to obtain a nonconvex piecewise-linear function for separating points representing measurements made on fine needle aspirates taken from benign and malignant human breasts. A computer program trained on 369 samples has correctly diagnosed each of 45 new samples encountered and is currently in use at the University of Wisconsin Hospitals. 1. Introduction. The fundamental problem we wish to address is that of Target text information: Misclassification Minimization: The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution.
Novel aspects of the approach include: (i) A linear complementarity formulation of the step function that "counts" misclassifications, (ii) Exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions, (iii) An exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC and an explicitly exact solution for the LPEC with uncoupled constraints, and (iv) A parametric quadratic programming formulation of the LPEC associated with the misclassification minimization problem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
790
test
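The papers in the record above pose separation of two point sets as mathematical programs. For concreteness, here is a sketch of the classic soft-margin linear program (minimize the average violation of A·w >= gamma + 1 for one set and B·w <= gamma - 1 for the other), solved with scipy; this is the well-known robust-LP relative of these formulations, not the LPEC or the bilinear algorithm themselves, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Two illustrative point sets in the plane (slightly overlapping).
A = rng.normal(loc=[2, 2], scale=1.0, size=(40, 2))
B = rng.normal(loc=[-2, -2], scale=1.0, size=(40, 2))
m1, m2, n = len(A), len(B), 2

# Variables: x = [w (n), gamma, y (m1 slacks), z (m2 slacks)].
c = np.concatenate([np.zeros(n + 1), np.full(m1, 1 / m1), np.full(m2, 1 / m2)])

# A w - gamma + y >= 1   rewritten as   -A w + gamma - y <= -1
G1 = np.hstack([-A, np.ones((m1, 1)), -np.eye(m1), np.zeros((m1, m2))])
# B w - gamma - z <= -1
G2 = np.hstack([B, -np.ones((m2, 1)), np.zeros((m2, m1)), -np.eye(m2)])
G = np.vstack([G1, G2])
h = -np.ones(m1 + m2)

bounds = [(None, None)] * (n + 1) + [(0, None)] * (m1 + m2)  # slacks nonnegative
res = linprog(c, A_ub=G, b_ub=h, bounds=bounds)

w, gamma = res.x[:n], res.x[n]
errs = np.sum(A @ w <= gamma) + np.sum(B @ w >= gamma)
print("separating plane w.x = gamma:", w, gamma, "| misclassified points:", errs)
```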
1-hop neighbor's text information: Multivariate versus univariate decision trees. : COINS Technical Report 92-8 January 1992 Abstract In this paper we present a new multivariate decision tree algorithm LMDT, which combines linear machines with decision trees. LMDT constructs each test in a decision tree by training a linear machine and then eliminating irrelevant and noisy variables in a controlled manner. To examine LMDT's ability to find good generalizations we present results for a variety of domains. We compare LMDT empirically to a univariate decision tree algorithm and observe that when multivariate tests are the appropriate bias for a given data set, LMDT finds small accurate trees. Target text information: EE380L: Neural Networks for Pattern Recognition, POp Trees: Decision trees have been widely used for classification/regression tasks. They are much faster to build than neural networks and are understandable by humans. In normal decision trees, based on the input vector, only one branch is followed. In Probabilistic OPtion trees, based on the input vector we follow all of the subtrees with some probability. These probabilities are learned by the system. Probabilistic decisions are likely to be useful when the boundaries of the classes merge into each other, or when there is noise in the input data. In addition, they provide us with a confidence measure. We allow option nodes in our trees; again, instead of uniform voting, we learn the weight of every subtree. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
168
test
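The POp-tree record above states the key mechanism: every subtree is followed with some probability and predictions are blended with learned weights. A minimal sketch of such soft routing, with an assumed sigmoid gate on a linear split; all class names and numbers here are hypothetical, and no training is shown.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class SoftNode:
    """Inner node: route to BOTH children, weighted by a learned gate."""
    def __init__(self, w, b, left, right):
        self.w, self.b, self.left, self.right = w, b, left, right

    def predict(self, x):
        p = sigmoid(self.w @ x + self.b)          # probability of taking 'right'
        return (1 - p) * self.left.predict(x) + p * self.right.predict(x)

class Leaf:
    """Leaf: stores a class-1 probability."""
    def __init__(self, prob):
        self.prob = prob

    def predict(self, x):
        return self.prob

# A tiny hand-built tree over 2-d inputs (weights illustrative, not learned here).
tree = SoftNode(np.array([1.0, 0.0]), 0.0,
                left=Leaf(0.1),
                right=SoftNode(np.array([0.0, 1.0]), 0.0,
                               left=Leaf(0.4), right=Leaf(0.9)))

x = np.array([0.2, -1.0])
print(f"P(class 1 | x) = {tree.predict(x):.3f}  (also a confidence measure)")
```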
1-hop neighbor's text information: Evolving Artificial Neural Networks using the Baldwin Effect: This paper describes how through simple means a genetic search towards optimal neural network architectures can be improved, both in convergence speed and in the quality of the final result. This result can be theoretically explained with the Baldwin effect, which is implemented here not just by the learning process of the network alone, but also by changing the network architecture as part of the learning procedure. This can be seen as a combination of two different techniques, both helping and improving on simple genetic search. 1-hop neighbor's text information: "A framework of combining symbolic and neural learning," : The primary goal of inductive learning is to generalize well; that is, to induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima. We report experiments on two real-world datasets where combinations of networks initialized with our method generalize better than combinations of networks initialized the traditional way. 1-hop neighbor's text information: Extraction of rules from discrete-time recurrent neural networks. Neural Networks, : Technical Report CS-TR-3465 and UMIACS-TR-95-54 University of Maryland, College Park, MD 20742 Abstract The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFA's) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract different finite-state automata that are consistent with a training set from the same network. We compare the generalization performances of these different models and the trained network and we introduce a heuristic that permits us to choose among the consistent DFA's the model which best approximates the learned regular grammar. Target text information: Pruning recurrent neural networks for improved generalization performance. : Determining the architecture of a neural network is an important issue for any learning task.
For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic which significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that if rules are extracted from networks trained to recognize these strings, the rules extracted after pruning are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state triple parity grammar. Further simulations indicate that this pruning method can give generalization performance superior to that obtained by training with weight decay. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
937
test
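The pruning record above describes a prune-then-retrain loop for recurrent networks. As a rough illustration of the mechanics only, here is magnitude-based pruning of a recurrent weight matrix with a mask that keeps pruned weights at zero through later updates; the magnitude criterion and the pruned fraction are assumptions, and the paper's own heuristic may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# A trained recurrent weight matrix (random here, standing in for a trained one).
W_hh = rng.normal(0, 1.0, (16, 16))

def prune_by_magnitude(W, fraction=0.3):
    """Zero out the smallest-magnitude weights; return pruned W and a keep-mask."""
    cutoff = np.quantile(np.abs(W), fraction)
    mask = np.abs(W) >= cutoff
    return W * mask, mask

W_pruned, mask = prune_by_magnitude(W_hh)
print(f"kept {mask.mean():.0%} of recurrent weights")

# During retraining, gradient updates are masked so pruned weights stay zero:
grad = rng.normal(0, 0.01, W_hh.shape)     # placeholder gradient
W_pruned -= 0.1 * grad * mask              # pruned connections never reappear
```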
1-hop neighbor's text information: Supervised learning from incomplete data via an EM approach. : Real-world learning tasks may involve high-dimensional data sets with arbitrary patterns of missing data. In this paper we present a framework based on maximum likelihood density estimation for learning from such data sets. We use mixture models for the density estimates and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977) in deriving a learning algorithm: EM is used both for the estimation of mixture components and for coping with missing data. The resulting algorithm is applicable to a wide range of supervised as well as unsupervised learning problems. Results from a classification benchmark, the iris data set, are presented. 1-hop neighbor's text information: Mixtures of probabilistic principal component analysers. : Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition. 1-hop neighbor's text information: Probabilistic principal component analysis. : Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA. Target text information: Computation and Neural Systems, : I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally very efficient in space and time. It also naturally accommodates missing information. I also introduce a new variant of PCA called sensible principal component analysis (SPCA) which defines a proper density model in the data space. Learning for SPCA is also done with an EM algorithm. I report results on synthetic and real data showing that these EM algorithms correctly and efficiently find the leading eigenvectors of the covariance of datasets in a few iterations using up to hundreds of thousands of datapoints in thousands of dimensions. I provide the content of the target node and its neighbors' information.
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,772
test
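The EM-for-PCA record above alludes to a two-step fixed point that is compact enough to write out: the E-step solves for latent coordinates given the current basis, and the M-step refits the basis. The sketch below is the standard zero-noise EM-PCA iteration on synthetic data; the dimensions, iteration count, and data construction are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1000 points in 20-d with most variance in a few directions.
d, k, n = 20, 3, 1000
Y = rng.normal(size=(d, d)) @ rng.normal(size=(d, n)) * 0.1
Y += np.outer(rng.normal(size=d), rng.normal(size=n))   # add a dominant direction
Y -= Y.mean(axis=1, keepdims=True)                      # center the data

C = rng.normal(size=(d, k))                             # initial basis guess
for _ in range(50):
    X = np.linalg.solve(C.T @ C, C.T @ Y)               # E-step: latent coordinates
    C = Y @ X.T @ np.linalg.inv(X @ X.T)                # M-step: refit the basis

# Compare the spanned subspace with the top-k singular vectors of the data.
U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]
Q = np.linalg.qr(C)[0]
overlap = np.linalg.svd(U.T @ Q)[1]                     # values near 1 = aligned
print("subspace alignment:", np.round(overlap, 4))
```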
1-hop neighbor's text information: Resonance and the perception of musical meter. : Many connectionist approaches to musical expectancy and music composition let the question of "What next?" overshadow the equally important question of "When next?". One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listener's internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase- and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy. 1-hop neighbor's text information: Representing rhythmic patterns in a network of oscillators. : This paper describes an evolving computational model of the perception and production of simple rhythmic patterns. The model consists of a network of oscillators of different resting frequencies which couple with input patterns and with each other. Oscillators whose frequencies match periodicities in the input tend to become activated. Metrical structure is represented explicitly in the network in the form of clusters of oscillators whose frequencies and phase angles are constrained to maintain the harmonic relationships that characterize meter. Rests in rhythmic patterns are represented by explicit rest oscillators in the network, which become activated when an expected beat in the pattern fails to appear. The model makes predictions about the relative difficulty of patterns and the effect of deviations from periodicity in the input. The nested periodicity that defines musical, and probably also linguistic, meter appears to be fundamental to the way in which people perceive and produce patterns in time. Meter by itself, however, is not sufficient to describe patterns which are interesting or memorable because of how they deviate from the metrical hierarchy. The simplest deviations are rests or gaps where one or more levels in the hierarchy would normally have a beat. When beats are removed at regular intervals which match the period of some level of the metrical hierarchy, we have what we will call a simple rhythmic pattern. Figure 1 shows an example of a simple rhythmic pattern. Below it is a grid representation of the meter which is behind the pattern. 1-hop neighbor's text information: Synchronization and desynchronization in a network of locally coupled Wilson-Cowan oscillators, : A network of Wilson-Cowan oscillators is constructed, and its emergent properties of synchronization and desynchronization are investigated by both computer simulation and formal analysis. The network is a two-dimensional matrix, where each oscillator is coupled only to its neighbors. We show analytically that a chain of locally coupled oscillators (the piecewise-linear approximation to the Wilson-Cowan oscillator) synchronizes, and present a technique to rapidly entrain finite numbers of oscillators. The coupling strengths change on a fast time scale based on a Hebbian rule.
A global separator is introduced which receives input from and sends feedback to each oscillator in the matrix. The global separator is used to desynchronize different oscillator groups. Unlike many other models, the properties of this network emerge from local connections that preserve spatial relationships among components and are critical for encoding Gestalt principles of feature grouping. The ability to synchronize and desynchronize oscillator groups within this network offers a promising approach for pattern segmentation and figure/ground segregation based on oscillatory correlation. Target text information: Meter as Mechanism: A Neural Network that Learns Metrical Patterns: One kind of prosodic structure that apparently underlies both music and some examples of speech production is meter. Yet detailed measurements of the timing of both music and speech show that the nested periodicities that define metrical structure can be quite noisy in time. What kind of system could produce or perceive such variable metrical timing patterns? And what would it take to be able to store and reproduce particular metrical patterns from long-term memory? We have developed a network of coupled oscillators that both produces and perceives patterns of pulses that conform to particular meters. In addition, beginning with an initial state with no biases, it can learn to prefer the particular meter that it has been previously exposed to. Meter is an abstract structure in time based on the periodic recurrence of pulses, that is, on equal time intervals between distinct phase zeros. From this point of view, the simplest meter is a regular metronome pulse. But often there appear meters with two or three (or rarely even more) nested periodicities with integral frequency ratios. A hierarchy of such metrical structures is implied in standard Western musical notation, where different levels of the metrical hierarchy are indicated by kinds of notes (quarter notes, half notes, etc.) and by the bars separating measures with an equal number of beats. For example, in a basic waltz-time meter, there are individual beats, all with the same spacing, grouped into sets of three, with every third one receiving a stronger accent at its onset. In this meter there is a hierarchy consisting of both a faster periodic cycle (at the beat level) and a slower one (at the measure level) that is 1/3 as fast, with its onset (or zero phase angle) coinciding with the zero phase angle of every third beat. This essentially temporal view of meter contrasts with the traditional symbol-string theories (such as Hayes, 1981 for speech and Lerdahl and Jackendoff, 1983 for music). Metrical systems, however they are defined, seem to underlie most of what we call music. Indeed, an expanded version of European musical notation is found to be practical for transcribing most music from around the world. That is, most forms of music employ nested periodic temporal patterns (Titon, Fujie, & Locke, 1996). Musical notation has I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
495
train
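All three abstracts in the preceding record rest on the same mechanism: an oscillator with its own period entrains to periodic input by nudging its phase and period toward observed onsets. A toy sketch of that adaptation loop follows; the two-gain correction rule and its constants are assumptions for illustration, not the units from these papers.

```python
import numpy as np

# Input: onsets of a steady pulse train with period 0.60 s (toy stimulus).
onsets = np.arange(20) * 0.60

period, expected = 0.50, 0.0      # oscillator starts too fast, phase-locked at t=0
eta_phase, eta_period = 0.7, 0.3  # adaptation gains (assumed values)

for t in onsets:
    error = t - expected                      # how early/late the expected beat was
    expected += period + eta_phase * error    # phase correction toward the onset
    period += eta_period * error              # period correction (frequency locking)

print(f"adapted period: {period:.3f} s (stimulus period: 0.600 s)")
```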
1-hop neighbor's text information: Learning hierarchical rule sets. : We present an algorithm for learning sets of rules that are organized into up to k levels. Each level can contain an arbitrary number of rules "if c then l" where l is the class associated with the level and c is a concept from a given class of basic concepts. The rules of higher levels have precedence over the rules of lower levels and can be used to represent exceptions. As basic concepts we can use Boolean attributes in the infinite attribute space model, or certain concepts defined in terms of substrings. Given a sample of m examples, the algorithm runs in polynomial time and produces a consistent concept representation of size O((log m)^k n^k), where n is the size of the smallest consistent representation with k levels of rules. This implies that the algorithm learns in the PAC model. The algorithm repeatedly applies the greedy heuristic for weighted set cover. The weights are obtained from approximate solutions to previous set cover problems. Target text information: Learning rules with local exceptions. : We present a learning algorithm for rule-based concept representations called ripple-down rule sets. Ripple-down rule sets allow us to deal with the exceptions for each rule separately by introducing exception rules, exception rules for each exception rule, and so on, up to a constant depth. These local exception rules are in contrast to decision lists, in which the exception rules must be placed into a global ordering of the rules. The localization of exceptions makes it possible to represent concepts that have no decision list representation. On the other hand, decision lists with a constant number of alternations between rules for different classes can be represented by constant-depth ripple-down rule sets with only a polynomial increase in size. Our algorithm is an Occam algorithm for constant-depth ripple-down rule sets and, hence, a PAC learning algorithm. It is based on repeatedly applying the greedy approximation method for the weighted set cover problem to find good exception rule sets. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
965
test
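Both abstracts in the record above reduce rule selection to repeated greedy weighted set cover: each candidate rule covers some still-uncovered examples at some cost, and the greedy step takes the best cost per newly covered element. A minimal sketch of that primitive, on a made-up instance:

```python
def greedy_weighted_set_cover(universe, sets, weights):
    """Repeatedly pick the set minimizing weight per newly covered element."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = min(
            (name for name in sets if uncovered & sets[name]),
            key=lambda name: weights[name] / len(uncovered & sets[name]),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Toy instance: think of each set as the examples one candidate rule covers.
universe = range(1, 8)
sets = {"r1": {1, 2, 3}, "r2": {3, 4, 5, 6}, "r3": {6, 7}, "r4": {1, 4, 7}}
weights = {"r1": 1.0, "r2": 1.5, "r3": 0.5, "r4": 1.0}
print(greedy_weighted_set_cover(universe, sets, weights))
```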
1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in 'blind' signal processing. Please send comments to [email protected]. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. 1-hop neighbor's text information: "A self-organizing multiple-view representation of 3-D objects," : We explore representation of 3D objects in which several distinct 2D views are stored for each object. We demonstrate the ability of a two-layer network of thresholded summation units to support such representations. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects. 1-hop neighbor's text information: Invariant face and object recognition in the visual system. : Neurons in the ventral stream of the primate visual system exhibit responses to the images of objects which are invariant with respect to natural transformations such as translation, size, and view. Anatomical and neurophysiological evidence suggests that this is achieved through a series of hierarchical processing areas. In an attempt to elucidate the manner in which such representations are established, we have constructed a model of cortical visual processing which seeks to parallel many features of this system, specifically the multi-stage hierarchy with its topologically constrained convergent connectivity. Each stage is constructed as a competitive network utilising a modified Hebb-like learning rule, called the trace rule, which incorporates previous as well as current neuronal activity. The trace rule enables neurons to learn about whatever is invariant over short time periods (e.g. 0.5 s) in the representation of objects as the objects transform in the real world.
The trace rule enables neurons to learn the statistical invariances about objects during their transformations, by associating together representations which occur close together in time. We show that by using the trace rule training algorithm the model can indeed learn to produce transformation invariant responses to natural stimuli such as faces. Target text information: Learning Viewpoint Invariant Representations of Faces in an Attractor Network: In natural visual experience, different views of an object tend to appear in close temporal proximity as an animal manipulates the object or navigates around it. We investigated the ability of an attractor network to acquire view invariant visual representations by associating first neighbors in a pattern sequence. The pattern sequence contains successive views of faces of ten individuals as they change pose. Under the network dynamics developed by Griniasty, Tsodyks & Amit (1993), multiple views of a given subject fall into the same basin of attraction. We use an independent component (ICA) representation of the faces for the input patterns (Bell & Sejnowski, 1995). The ICA representation has advantages over the principal component representation (PCA) for viewpoint-invariant recognition both with and without the attractor network, suggesting that ICA is a better representation than PCA for object recognition. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
135
test
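The first abstract in the preceding record is the infomax separation algorithm, whose ICA representation the target paper then reuses. Here is a compact sketch of the natural-gradient infomax update, dW proportional to (I + (1 - 2 g(u)) u^T) W with a logistic g, which suits super-Gaussian sources; the Laplacian toy sources, mixing matrix, and learning rate are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two super-Gaussian sources, linearly mixed (the toy 'cocktail party').
n = 5000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # unknown mixing matrix
X = A @ S

W = np.eye(2)                                    # unmixing matrix to learn
lr = 0.01
for _ in range(300):
    U = W @ X                                    # current source estimates
    Y = 1.0 / (1.0 + np.exp(-U))                 # logistic nonlinearity g(u)
    # Natural-gradient infomax update, averaged over the batch.
    W += lr * (np.eye(2) + (1 - 2 * Y) @ U.T / n) @ W

# Good separation makes W @ A close to a permuted, scaled identity.
print("W @ A =\n", np.round(W @ A, 2))
```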
1-hop neighbor's text information: Veloso (1994). Planning and Learning by Analogical Reasoning. : Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found creating a mixed-initiative planning system fall into three categories: planning paradigms differ in human and machine planning; visualization of the plan and planning process is a complex, but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning; it supports visualization of both the plan and the planning rationale; and it addresses the variance in the experience of the user by allowing the user to control the presentation of information. 1-hop neighbor's text information: Structural similarity as guidance in case-based design. : This paper presents a novel approach to determine structural similarity as guidance for adaptation in case-based reasoning (CBR). We advance structural similarity assessment which provides not only a single numeric value but the most specific structure two cases have in common, inclusive of the modification rules needed to obtain this structure from the two cases. Our approach treats retrieval, matching and adaptation as a group of dependent processes. This guarantees the retrieval and matching of not only similar but adaptable cases. Together these improve the overall problem-solving performance of CBR and the explainability of case selection and adaptation considerably. Although our approach is more theoretical in nature and not restricted to a specific domain, we will give an example taken from the domain of industrial building design. Additionally, we will sketch two prototypical implementations of this approach. Target text information: Solution-Relevant Abstraction: Two major problems in case-based reasoning are the efficient and justified retrieval of source cases and the adaptation of retrieved solutions to the conditions of the target. For analogical theorem proving by induction, we describe how a solution-relevant abstraction can restrict the retrieval of source cases and the mapping from the source problem to the target problem and how it can determine reformulations that further adapt the source solution. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
786
test
1-hop neighbor's text information: Belief Networks Revisited: 1-hop neighbor's text information: (1996c) Feedback Models: Interpretation and Discovery. : 1-hop neighbor's text information: "Aspects of Graphical Models Connected With Causality," : This paper demonstrates the use of graphs as a mathematical tool for expressing independencies, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the Markovian account of causation and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. Target text information: A qualitative framework for probabilistic inference. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,971
train
1-hop neighbor's text information: Sequential Thresholds: Context Sensitive Default Extensions: Default logic encounters some conceptual difficulties in representing common sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context which is continuously evolving during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic which makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instantiations pertaining to default logic and sequential thresholding. This provides a link between the two mechanisms as well as a way to integrate the two that can be beneficial to both. Target text information: Possible world partition sequences: A unifying framework for uncertain reasoning. : When we work with information from multiple sources, the formalism each employs to handle uncertainty may not be uniform. In order to be able to combine these knowledge bases of different formats, we need to first establish a common basis for characterizing and evaluating the different formalisms, and provide a semantics for the combined mechanism. A common framework can provide an infrastructure for building an integrated system, and is essential if we are to understand its behavior. We present a unifying framework based on an ordered partition of possible worlds called partition sequences, which corresponds to our intuitive notion of biasing towards certain possible scenarios when we are uncertain of the actual situation. We show that some of the existing formalisms, namely, default logic, autoepistemic logic, probabilistic conditioning and thresholding (generalized conditioning), and possibility theory can be incorporated into this general framework. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,322
test
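One concrete way to read the "ordered partition of possible worlds" idea: rank worlds in layers and evaluate a query in the most plausible layer consistent with the evidence. The sketch below is an illustrative reading under that assumption, not the paper's formal semantics; the worlds, propositions, and acceptance rule are invented for the example.

def accepts(partition, evidence, query):
    # partition: layers of worlds, most plausible first.
    # A conclusion is accepted if it holds in every world of the first
    # layer that is consistent with the evidence.
    for layer in partition:
        live = [w for w in layer if evidence(w)]
        if live:
            return all(query(w) for w in live)
    return None   # evidence inconsistent with every layer

partition = [
    [{"bird": 1, "penguin": 0, "flies": 1}],   # normal worlds
    [{"bird": 1, "penguin": 1, "flies": 0}],   # exceptional worlds
]
print(accepts(partition, lambda w: w["bird"], lambda w: w["flies"]))     # True
print(accepts(partition, lambda w: w["penguin"], lambda w: w["flies"]))  # False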
1-hop neighbor's text information: Cliff (1993). "Issues in evolutionary robotics," From Animals to Animats 2 (Ed. : A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. 1-hop neighbor's text information: "Selection for wandering behavior in a small robot," : We have evolved artificial neural networks to control the wandering behavior of small robots. The task was to touch as many squares in a grid as possible during a fixed period of time. A number of the simulated robots were embodied in a small Lego (Trademark) robot, controlled by a Motorola (Trademark) 6811 processor; and their performance was compared to the simulations. We observed that: (a) evolution was an effective means to program control; (b) progress was characterized by sharply stepped periods of improvement, separated by periods of stasis that corresponded to levels of behavioral/computational complexity; and (c) the simulated and realized robots behaved quite similarly, the realized robots in some cases outperforming the simulated ones. Introducing random noise to the simulations improved the fit somewhat (from 0.73 to 0.79). Hybrid simulated/embodied selection regimes for evolutionary robots are discussed. 1-hop neighbor's text information: Investigating the role of diploidy in simulated populations of evolving individuals: In most work applying genetic algorithms to populations of neural networks there is no real distinction between genotype and phenotype. In nature both the information contained in the genotype and the mapping of the genetic information into the phenotype are usually much more complex. The genotypes of many organisms exhibit diploidy, i.e., they include two copies of each gene: if the two copies are not identical in their sequences and therefore have a functional difference in their products (usually proteins), the expressed phenotypic feature is termed the dominant one, the other one recessive (not expressed). In this paper we review the literature on the use of diploidy and dominance operators in genetic algorithms; we present the new results we obtained with our own simulations in changing environments; finally, we discuss some results of our simulations that parallel biological findings. Target text information: How to evolve autonomous robots: : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,199
val
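The evolutionary loop behind the controller-evolution work in the record above can be sketched compactly: a population of real-valued genotypes (e.g. network weights) undergoes selection and mutation against a fitness function. This is a mutation-only sketch with a stand-in fitness, not the grid-coverage or homing tasks from the cited papers.

import random

def evolve(fitness, genome_len=10, pop_size=20, generations=50, sigma=0.1):
    # Random initial population of weight vectors.
    pop = [[random.gauss(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]          # truncation selection
        # Refill the population with mutated copies of elite genotypes.
        pop = elite + [[g + random.gauss(0, sigma) for g in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

# Toy fitness: prefer weights close to an arbitrary target vector.
target = [0.5] * 10
best = evolve(lambda g: -sum((a - b) ** 2 for a, b in zip(g, target)))
print(best[:3])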
1-hop neighbor's text information: LEARNING MORE FROM LESS DATA: EXPERIMENTS WITH LIFELONG ROBOT LEARNING: 1-hop neighbor's text information: Is Learning the n-th Thing Any Easier Than Learning the First? in: : This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks. Target text information: "Clustering learning tasks and the selective cross-task transfer of knowledge", : This research is sponsored in part by the National Science Foundation under award IRI-9313367, and by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of NSF, Wright Laboratory or the United States Government. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,936
test
1-hop neighbor's text information: Improving Bagging Performance by Increasing Decision Tree Diversity: Ensembles of decision trees often exhibit greater predictive accuracy than single trees alone. Bagging and boosting are two standard ways of generating and combining multiple trees. Boosting has been empirically determined to be the more effective of the two, and it has recently been proposed that this may be because it produces more diverse trees than bagging. This paper reports empirical findings that strongly support this hypothesis. We enforce greater decision tree diversity in bagging by a simple modification of the underlying decision tree learner that utilizes randomly-generated decision stumps of predefined depth as the starting point for tree induction. The modified procedure yields very competitive results while still retaining one of the attractive properties of bagging: all iterations are independent. Additionally, we also investigate a possible integration of bagging and boosting. All these ensemble-generating procedures are compared empirically on various domains. 1-hop neighbor's text information: "A framework of combining symbolic and neural learning," : The primary goal of inductive learning is to generalize well; that is, to induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima. We report experiments on two real-world datasets where combinations of networks initialized with our method generalize better than combinations of networks initialized the traditional way. 1-hop neighbor's text information: Experiments with a New Boosting Algorithm. : In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
Target text information: An empirical evaluation of bagging and boosting. : An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods using both neural networks and decision trees as our classification algorithms. Our results clearly show two important facts. The first is that even though Bagging almost always produces a better classifier than any of its individual component classifiers and is relatively impervious to overfitting, it does not generalize any better than a baseline neural-network ensemble method. The second is that Boosting is a powerful technique that can usually produce better ensembles than Bagging; however, it is more susceptible to noise and can quickly overfit a data set. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,973
val
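Bagging, as evaluated in the record above, is easy to state in code: train each ensemble member on a bootstrap resample and combine predictions by majority vote. The sketch below uses a one-feature decision stump as the base learner and synthetic data; both are illustrative stand-ins for the neural networks and decision trees used in the paper.

import random

def fit_stump(data):
    # Exhaustively pick the (feature, threshold, sign) with best training accuracy.
    best = None
    for f in range(len(data[0][0])):
        for thr in sorted({x[f] for x, _ in data}):
            for sign in (1, -1):
                acc = sum((1 if sign * (x[f] - thr) > 0 else 0) == y
                          for x, y in data)
                if best is None or acc > best[0]:
                    best = (acc, f, thr, sign)
    _, f, thr, sign = best
    return lambda x: 1 if sign * (x[f] - thr) > 0 else 0

def bagged(data, n_members=25):
    members = [fit_stump([random.choice(data) for _ in data])   # bootstrap sample
               for _ in range(n_members)]
    return lambda x: int(sum(m(x) for m in members) > n_members / 2)

data = [([random.random(), random.random()], 0) for _ in range(40)]
data = [(x, int(x[0] > 0.5)) for x, _ in data]        # label driven by feature 0
clf = bagged(data)
print(sum(clf(x) == y for x, y in data) / len(data))  # resubstitution accuracy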
1-hop neighbor's text information: Learning to coordinate without sharing information. : Researchers in the field of Distributed Artificial Intelligence (DAI) have been interested in developing efficient mechanisms to coordinate the activities of multiple autonomous agents. The need for coordination arises because agents have to share resources and expertise required to achieve their goals. Previous work in the area includes using sophisticated information exchange protocols, investigating heuristics for negotiation, and developing formal models of possibilities of conflict and cooperation among agent interests. In order to handle the changing requirements of continuous and dynamic environments, we propose learning as a means to provide additional possibilities for effective coordination. We use reinforcement learning techniques on a block pushing problem to show that agents can learn complementary policies to follow a desired path without any knowledge about each other. We theoretically analyze and experimentally verify the effects of learning rate on system convergence, and also demonstrate the benefits of using learned coordination knowledge on similar problems. Similar reinforcement learning based coordination can be achieved in both cooperative and non-cooperative domains, and in domains with noisy communication channels and other stochastic characteristics that present a formidable challenge to using other coordination schemes. 1-hop neighbor's text information: Reinforcement Learning: A Survey. : This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. 1-hop neighbor's text information: Markov games as a framework for multi-agent reinforcement learning. : In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
Target text information: Reinforcement Learning with Imitation in Heterogeneous Multi-Agent Systems: The application of decision making and learning algorithms to multi-agent systems presents many interesting research challenges and opportunities. Among these is the ability for agents to learn how to act by observing or imitating other agents. We describe an algorithm, the IQ-algorithm, that integrates imitation with Q-learning. Roughly, a Q-learner uses the observations it has made of an expert agent to bias its exploration in promising directions. This algorithm goes beyond previous work in this direction by relaxing the oft-made assumptions that the learner (observer) and the expert (observed agent) share the same objectives and abilities. Our preliminary experiments demonstrate significant transfer between agents using the IQ-model and in many cases reductions in training time. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
436
val
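The tabular Q-learning update underlying the IQ-algorithm in the record above is a one-liner; the imitation bias itself is not sketched here. The chain environment, rewards, and hyperparameters are illustrative.

import random

N_STATES, ACTIONS = 5, (0, 1)          # action 0: left, action 1: right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    # Toy chain: reward only for reaching the rightmost state.
    s2 = max(0, min(N_STATES - 1, s + (1 if a else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for _ in range(2000):
    s = 0
    while s != N_STATES - 1:
        if random.random() < eps:                       # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
        s = s2

print([round(max(Q[(s, act)] for act in ACTIONS), 2) for s in range(N_STATES)])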
1-hop neighbor's text information: Limits of control flow on parallelism. : This paper discusses three techniques useful in relaxing the constraints imposed by control flow on parallelism: control dependence analysis, executing multiple flows of control simultaneously, and speculative execution. We evaluate these techniques by using trace simulations to find the limits of parallelism for machines that employ different combinations of these techniques. We have three major results. First, local regions of code have limited parallelism, and control dependence analysis is useful in extracting global parallelism from different parts of a program. Second, a superscalar processor is fundamentally limited because it cannot execute independent regions of code concurrently. Higher performance can be obtained with machines, such as multiprocessors and dataflow machines, that can simultaneously follow multiple flows of control. Finally, without speculative execution to allow instructions to execute before their control dependences are resolved, only modest amounts of parallelism can be obtained for programs with complex control flow. Target text information: Space-Time Scheduling of Instruction-Level Parallelism on a Raw Machine. : Advances in VLSI technology will enable chips with over a billion transistors within the next decade. Unfortunately, the centralized-resource architectures of modern microprocessors are ill-suited to exploit such advances. Achieving a high level of parallelism at a reasonable clock speed requires distributing the processor resources, a trend already visible in the dual-register-file architecture of the Alpha 21264. A Raw microprocessor takes an extreme position in this space by distributing all its resources such as instruction streams, register files, memory ports, and ALUs over a pipelined two-dimensional interconnect, and exposing them fully to the compiler. Compilation for instruction-level parallelism (ILP) on such distributed-resource machines requires both spatial instruction scheduling and traditional temporal instruction scheduling. This paper describes the techniques used by the Raw compiler to handle these issues. Preliminary results from a SUIF-based compiler for sequential programs written in C and Fortran indicate that the Raw approach to exploiting ILP can achieve speedups scalable with the number of processors for applications with such parallelism. The Raw architecture attempts to provide performance that is at least comparable to that provided by scaling an existing architecture, but that can achieve orders of magnitude improvement in performance for applications with a large amount of parallelism. This paper offers some positive results in this direction. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,146
test
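The "temporal" half of the scheduling problem in the Raw record above is classically handled by greedy list scheduling over an instruction DAG. The sketch below is a generic list scheduler, not the Raw compiler's actual algorithm (which also assigns instructions spatially to tiles); the DAG, latencies, and unit count are invented.

def list_schedule(deps, latency, n_units):
    # deps: instruction -> set of predecessor instructions.
    # Returns instruction -> start cycle on some unit.
    start, done_at = {}, {}
    remaining = set(deps)
    busy_until = [0] * n_units
    while remaining:
        # Instructions whose predecessors have all been scheduled.
        ready = [i for i in remaining if deps[i] <= done_at.keys()]
        i = min(ready)                          # simple deterministic priority
        earliest = max([done_at[p] for p in deps[i]], default=0)
        u = min(range(n_units), key=lambda u: max(busy_until[u], earliest))
        t = max(busy_until[u], earliest)
        start[i], done_at[i] = t, t + latency[i]
        busy_until[u] = t + latency[i]
        remaining.remove(i)
    return start

deps = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}
latency = {"a": 1, "b": 2, "c": 1, "d": 1}
print(list_schedule(deps, latency, n_units=2))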
1-hop neighbor's text information: "Learning and evolution in neural networks," : 1-hop neighbor's text information: Evolution of Homing Navigation in a Real Mobile Robot. : In this paper we describe the evolution of a discrete-time recurrent neural network to control a real mobile robot. In all our experiments the evolutionary procedure is carried out entirely on the physical robot without human intervention. We show that the autonomous development of a set of behaviors for locating a battery charger and periodically returning to it can be achieved by lifting constraints in the design of the robot/environment interactions that were employed in a preliminary experiment. The emergent homing behavior is based on the autonomous development of an internal neural topographic map (which is not pre-designed) that allows the robot to choose the appropriate trajectory as a function of location and remaining energy. 1-hop neighbor's text information: Tracking the red queen: Measurements of adaptive progress in co-evolutionary simulations. : Co-evolution can give rise to the "Red Queen effect", where interacting populations alter each other's fitness landscapes. The Red Queen effect significantly complicates any measurement of co-evolutionary progress, introducing fitness ambiguities where improvements in performance of co-evolved individuals can appear as a decline or stasis in the usual measures of evolutionary progress. Unfortunately, no appropriate measures of fitness given the Red Queen effect have been developed in artificial life, theoretical biology, population dynamics, or evolutionary genetics. We propose a set of appropriate performance measures based on both genetic and behavioral data, and illustrate their use in a simulation of co-evolution between genetically specified continuous-time noisy recurrent neural networks which generate pursuit and evasion behaviors in autonomous agents. Target text information: Adaptive behaviour in competing co-evolving species. : Co-evolution of competitive species provides an interesting testbed to study the role of adaptive behavior because it provides unpredictable and dynamic environments. In this paper we experimentally investigate some arguments for the co-evolution of different adaptive protean behaviors in competing species of predators and prey. Both species are implemented as simulated mobile robots (Kheperas) with infrared proximity sensors, but the predator has an additional vision module whereas the prey has a maximum speed set to twice that of the predator. Different types of variability during life for neurocontrollers with the same architecture and genetic length are compared. It is shown that simple forms of proteanism affect co-evolutionary dynamics and that prey tend to exploit noisy controllers to generate random trajectories, whereas predators benefit from directional-change controllers to improve pursuit behavior. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,694
test
1-hop neighbor's text information: The ilp description learning problem: Towards a general model-level definition of data mining in ilp. : Proc. FGML-95, Annual Workshop of the GI Special Interest Group Machine Learning (GI FG 1.1.3), ed. K. Morik and J. Herrmann, Research Report 580, Univ. Dortmund, 1995. Abstract The task of discovering interesting regularities in (large) sets of data (data mining, knowledge discovery) has recently met with increased interest in Machine Learning in general and in Inductive Logic Programming (ILP) in particular. However, while there is a widely accepted definition for the task of concept learning from examples in ILP, definitions for the data mining task have been proposed only recently. In this paper, we examine these so-called "non-monotonic semantics" definitions and show that non-monotonicity is only an incidental property of the data mining learning task, and that this task makes perfect sense without such an assumption. We therefore introduce and define a generalized definition of the data mining task called the ILP description learning problem and discuss its properties and relation to the traditional concept learning (prediction) problem. Since our characterization is entirely on the level of models, the definition applies independently of the chosen hypothesis language. 1-hop neighbor's text information: Relational knowledge discovery in databases. : In this paper, we indicate some possible applications of ILP or similar techniques in the knowledge discovery field, and then discuss several methods for adapting and linking ILP-systems to relational database systems. The proposed methods range from "pure ILP" to "based on techniques originating in ILP". We show that it is both easy and advantageous to adapt ILP-systems in this way. 1-hop neighbor's text information: Multi-class problems and discretization in ICL (extended abstract). : Handling multi-class problems and real numbers is important in practical applications of machine learning to KDD problems. While attribute-value learners address these problems as a rule, very few ILP systems do so. The few ILP systems that handle real numbers mostly do so by trying out all real values that are applicable, thus running into efficiency or overfitting problems. This paper discusses some recent extensions of ICL that address these problems. ICL, which stands for Inductive Constraint Logic, is an ILP system that learns first order logic formulae from positive and negative examples. The main characteristic of ICL is its view on examples. These are seen as interpretations which are true or false for the clausal target theory (in CNF). We first argue that ICL can be used for learning a theory in a disjunctive normal form (DNF). With this in mind, a possible solution for handling more than two classes is given (based on some ideas from CN2). Finally, we show how to tackle problems with continuous values by adapting discretization techniques from attribute value learners. Target text information: Inductive Constraint Logic. : A novel approach to learning first order logic formulae from positive and negative examples is presented. Whereas present inductive logic programming systems employ examples as true and false ground facts (or clauses), we view examples as interpretations which are true or false for the target theory.
This viewpoint allows us to reconcile the inductive logic programming paradigm with classical attribute value learning in the sense that the latter is a special case of the former. Because of this property, we are able to adapt AQ and CN2 type algorithms in order to enable learning of full first order formulae. However, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form, we will use a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples. This representation duality also reverses the role of positive and negative examples, both in the heuristics and in the algorithm. The resulting theory is incorporated in a system named ICL (Inductive Constraint Logic). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
2,409
test
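The ICL view of examples as interpretations reduces coverage to a simple check: a clausal theory (CNF) covers an example iff the interpretation satisfies every clause. A minimal sketch; the literal encoding and the bird/penguin theory are illustrative, not from the paper.

def satisfies(interp, clause):
    # A clause is a disjunction of literals (atom, polarity).
    return any(interp.get(atom, False) == pol for atom, pol in clause)

def covers(theory, interp):
    # CNF: every clause acts as a constraint on the interpretation.
    return all(satisfies(interp, clause) for clause in theory)

# Theory: (not bird or flies) and (not penguin)
theory = [[("bird", False), ("flies", True)], [("penguin", False)]]
pos = {"bird": True, "flies": True, "penguin": False}
neg = {"bird": True, "flies": False, "penguin": False}
print(covers(theory, pos), covers(theory, neg))   # True False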
1-hop neighbor's text information: Nonlinear Prediction of Chaotic Time Series. : A novel method for regression has been recently proposed by V. Vapnik et al. [8, 9]. The technique, called Support Vector Machine (SVM), is very well founded from the mathematical point of view and seems to provide a new insight in function approximation. We implemented the SVM and tested it on the same data base of chaotic time series that was used in [1] to compare the performances of different approximation techniques, including polynomial and rational approximation, local polynomial techniques, Radial Basis Functions, and Neural Networks. The SVM performs better than the approaches presented in [1]. We also study, for a particular time series, the variability in performance with respect to the few free parameters of SVM. 1-hop neighbor's text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. © Massachusetts Institute of Technology, 1994. This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation. Support for the A. I. Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. 1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning.
The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. Target text information: Adaptive Computation Techniques for Time Series Analysis: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
210
test
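A minimal radial-basis-function approximator in the spirit of the RBF/GRBF networks cited above: fix Gaussian centers, then solve for the linear output weights by least squares. The centers, width, and synthetic sine data are arbitrary choices for the sketch, not the cited experiments.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + 0.05 * rng.standard_normal(100)   # noisy target function

centers = np.linspace(0, 2 * np.pi, 10)
width = 0.5

def design(x):
    # N x K matrix of Gaussian basis activations.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

w, *_ = np.linalg.lstsq(design(x), y, rcond=None)  # linear-in-weights fit
y_hat = design(x) @ w
print(float(np.mean((y - y_hat) ** 2)))            # training MSE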
1-hop neighbor's text information: Data Structures and Genetic Programming, : It is established good software engineering practice to ensure that programs use memory via abstract data structures such as stacks, queues and lists. These provide an interface between the program and memory, freeing the program of memory management details which are left to the data structures to implement. The main result presented herein is that GP can automatically generate stacks and queues. Typically abstract data structures support multiple operations, such as put and get. We show that GP can simultaneously evolve all the operations of a data structure by implementing each such operation with its own independent program tree. That is, the chromosome consists of a fixed number of independent program trees. Moreover, crossover only mixes genetic material of program trees that implement the same operation. Program trees interact with each other only via shared memory and shared "Automatically Defined Functions" (ADFs). 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: A promising genetic algorithm approach to job-shop scheduling, rescheduling, and open-shop scheduling problems. : Target text information: Scheduling maintenance of electrical power transmission networks using genetic programming. : Previous work showed that the combination of a Genetic Algorithm using an order or permutation chromosome with hand coded "Greedy" Optimizers can readily produce an optimal schedule for a four node test problem [Langdon, 1995]. Following this, the same GA has been used to find low cost schedules for the South Wales region of the UK high voltage power network. This paper describes the evolution of the best known schedule for the base South Wales problem using Genetic Programming starting from the hand coded heuristics used with the GA. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,114
test
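Recombination on the order/permutation chromosomes used in the GA scheduling work above must preserve permutation validity. Below is one common variant of order crossover (OX), filling the child left to right; it is a generic operator sketch, not necessarily the one used in the cited work.

import random

def order_crossover(p1, p2):
    # Copy a random slice from parent 1, then fill the gaps with the
    # remaining genes in the order they appear in parent 2.
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

tasks = list(range(8))
p1, p2 = random.sample(tasks, 8), random.sample(tasks, 8)
child = order_crossover(p1, p2)
assert sorted(child) == tasks        # still a valid permutation
print(p1, p2, child)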
1-hop neighbor's text information: Construction of phylogenetic trees. : [6] Farach, M. and Thorup, M. 1993. Fast Comparison of Evolutionary Trees, Technical Report 93-46, DIMACS, Rutgers University, Piscataway, NJ. Target text information: A six-point condition for ordinal matrices, : Ordinal assertions in an evolutionary context are of the form "species s is more similar to species x than to species y" and can be deduced from a distance matrix M of interspecies dissimilarities (M[s,x] < M[s,y]). Given species x and y, the ordinal binary character c_xy of M is defined by c_xy(s) = 1 if and only if M[s,x] < M[s,y], for all species s. In this paper we present several results concerning the inference of evolutionary trees or phylogenies from ordinal assertions. In particular, we present: a six-point condition that characterizes those distance matrices whose ordinal binary characters are pairwise compatible, a characterization analogous to the four-point condition for additive matrices; an optimal O(n^2) algorithm, where n is the number of species, for recovering a phylogeny that realizes the ordinal binary characters of a distance matrix that satisfies the six-point condition; and an NP-completeness result on determining if there is a phylogeny that realizes k or more of the ordinal binary characters of a given distance matrix. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
2,521
test
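The ordinal binary character defined in the abstract above translates directly into code: c_xy(s) = 1 iff M[s,x] < M[s,y]. The dissimilarity matrix here is illustrative.

def ordinal_character(M, x, y):
    # c_xy(s) = 1 iff species s is strictly closer to x than to y.
    return {s: int(M[s][x] < M[s][y]) for s in M}

M = {
    "s1": {"x": 1.0, "y": 3.0},
    "s2": {"x": 2.5, "y": 2.0},
    "s3": {"x": 0.5, "y": 4.0},
}
print(ordinal_character(M, "x", "y"))   # {'s1': 1, 's2': 0, 's3': 1}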
1-hop neighbor's text information: Dynamic Hammock Predication for Non-predicated Instruction Set Architectures: Conventional speculative architectures use branch prediction to evaluate the most likely execution path during program execution. However, certain branches are difficult to predict. One solution to this problem is to evaluate both paths following such a conditional branch. Predicated execution can be used to implement this form of multi-path execution. Predicated architectures fetch and issue instructions that have associated predicates. These predicates indicate if the instruction should commit its result. Predicating a branch reduces the number of branches executed, eliminating the chance of branch misprediction at the cost of executing additional instructions. In this paper, we propose a restricted form of multi-path execution called Dynamic Predication for architectures with little or no support for predicated instructions in their instruction set. Dynamic predication dynamically predicates instruction sequences in the form of a branch hammock, concurrently executing both paths of the branch. A branch hammock is a short forward branch that spans a few instructions in the form of an if-then or if-then-else construct. We mark these and other constructs in the executable. When the decode stage detects such a sequence, it passes a predicated instruction sequence to a dynamically scheduled execution core. Our results show that dynamic predication can accrue speedups of up to 13%. 1-hop neighbor's text information: Limits of Instruction-Level Parallelism, : This paper examines the limits to instruction level parallelism that can be found in programs, in particular the SPEC95 benchmark suite. Apart from using a more recent version of the SPEC benchmark suite, it differs from earlier studies in removing non-essential true dependencies that occur as a result of the compiler employing a stack for subroutine linkage. This is a subtle limitation to parallelism that is not readily evident as it appears as a true dependency on the stack pointer. Other methods can be used that do not employ a stack to remove this dependency. In this paper we show that its removal exposes far more parallelism than has been seen previously. We refer to this type of parallelism as "parallelism at a distance" because it requires impossibly large instruction windows for detection. We conclude with two observations: 1) that a single instruction window characteristic of superscalar machines is inadequate for detecting parallelism at a distance; and 2) in order to take advantage of this parallelism the compiler must be involved, or separate threads must be explicitly programmed. Target text information: A Comparison of Full and Partial Predicated Execution Support for ILP Processors. : One can effectively utilize predicated execution to improve branch handling in instruction-level parallel processors. Although the potential benefits of predicated execution are high, the tradeoffs involved in the design of an instruction set to support predicated execution can be difficult. On one end of the design spectrum, architectural support for full predicated execution requires increasing the number of source operands for all instructions. Full predicate support provides for the most flexibility and the largest potential performance improvements. On the other end, partial predicated execution support, such as conditional moves, requires very little change to existing architectures. 
This paper presents a preliminary study to qualitatively and quantitatively address the benefit of full and partial predicated execution support. With our current compiler technology, we show that the compiler can use both partial and full predication to achieve speedup in large control-intensive programs. Some details of the code generation techniques are shown to provide insight into the benefit of going from partial to full predication. Preliminary experimental results are very encouraging: partial predication provides an average of 33% performance improvement for an 8-issue processor with no predicate support while full predication provides an additional 30% improvement. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,192
test
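A high-level illustration of the if-conversion idea behind the predication work above: compute both sides of a short if-then-else and commit a result under a predicate, instead of branching. This is an array-level analogy in Python/numpy, not a model of the hardware mechanism or of the cited compilers.

import numpy as np

x = np.arange(8)

# Branchy form: per-element control flow.
out_branch = np.array([v * 2 if v % 2 == 0 else v + 1 for v in x])

# Predicated form: evaluate both paths, select by predicate.
pred = x % 2 == 0
out_pred = np.where(pred, x * 2, x + 1)

assert (out_branch == out_pred).all()
print(out_pred)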
1-hop neighbor's text information: A simple randomized quantization algorithm for neural network pattern classifiers. : This paper explores some algorithms for automatic quantization of real-valued datasets using thermometer codes for pattern classification applications. Experimental results indicate that a relatively simple randomized thermometer code generation technique can result in quantized datasets that, when used to train simple perceptrons, can yield generalization on test data that is substantially better than that obtained with their unquantized counterparts. 1-hop neighbor's text information: Perceptual Development and Learning: From Behavioral, Neurophysiological and Morphological Evidence to Computational Models. : An intelligent system has to be capable of adapting to a constantly changing environment. It therefore ought to be capable of learning from its perceptual interactions with its surroundings. This requires a certain amount of plasticity in its structure. Any attempt to model the perceptual capabilities of a living system or, for that matter, to construct a synthetic system of comparable abilities, must therefore account for such plasticity through a variety of developmental and learning mechanisms. This paper examines some results from neuroanatomical, morphological, as well as behavioral studies of the development of visual perception; integrates them into a computational framework; and suggests several interesting experiments with computational models that can yield insights into the development of visual perception. In order to understand the development of information processing structures in the brain, one needs knowledge of changes it undergoes from birth to maturity in the context of a normal environment. However, knowledge of its development in aberrant settings is also extremely useful, because it reveals the extent to which the development is a function of environmental experience (as opposed to genetically determined pre-wiring). Accordingly, we consider development of the visual system under both normal and restricted rearing conditions. The role of experience in the early development of the sensory systems in general, and the visual system in particular, has been widely studied through a variety of experiments involving carefully controlled manipulation of the environment presented to an animal. Extensive reviews of such results can be found in (Mitchell, 1984; Movshon, 1981; Hirsch, 1986; Boothe, 1986; Singer, 1986). Some examples of manipulation of visual experience are total pattern deprivation (e.g., dark rearing), selective deprivation of a certain class of patterns (e.g., vertical lines), monocular deprivation in animals with binocular vision, etc. Extensive studies involving behavioral deficits resulting from total visual pattern deprivation indicate that the deficits arise primarily as a result of impairment of visual information processing in the brain. The results of these experiments suggest specific developmental or learning mechanisms that may be operating at various stages of development, and at different levels in the system. We will discuss some of these mechanisms. This is a working draft. All comments, especially constructive criticism and suggestions for improvement, will be appreciated. I am indebted to Prof. James Dannemiller for introducing me to some of the literature in infant development; to Prof.
Leonard Uhr for his helpful comments on an initial draft of the paper; and to numerous researchers whose experimental work has provided the basis for the model outlined in this paper. This research was partially supported by grants from the National Science Foundation and the University of Wisconsin Graduate School. 1-hop neighbor's text information: Pruning Strategies for the MTiling Constructive Learning Algorithm: We present a framework for incorporating pruning strategies in the MTiling constructive neural network learning algorithm. Pruning involves elimination of redundant elements (connection weights or neurons) from a network and is of considerable practical interest. We describe three elementary sensitivity based strategies for pruning neurons. Experimental results demonstrate a moderate to significant reduction in the network size without compromising the network's generalization performance. Target text information: Coordination and Control Structures and Processes: Possibilities for Connectionist Networks. : The absence of powerful control structures and processes that synchronize, coordinate, switch between, choose among, regulate, direct, modulate interactions between, and combine distinct yet interdependent modules of large connectionist networks (CN) is probably one of the most important reasons why such networks have not yet succeeded at handling difficult tasks (e.g. complex object recognition and description, complex problem-solving, planning). In this paper we examine how CN built from large numbers of relatively simple neuron-like units can be given the ability to handle problems that in typical multi-computer networks and artificial intelligence programs along with all other types of programs are always handled using extremely elaborate and precisely worked out central control (coordination, synchronization, switching, etc.). We point out the several mechanisms for central control of this un-brain-like sort that CN already have built into them albeit in hidden, often overlooked, ways. We examine the kinds of control mechanisms found in computers, programs, fetal development, cellular function and the immune system, evolution, social organizations, and especially brains, that might be of use in CN. Particularly intriguing suggestions are found in the pacemakers, oscillators, and other local sources of the brain's complex partial synchronies; the diffuse, global effects of slow electrical waves and neurohormones; the developmental program that guides fetal development; communication and coordination within and among living cells; the working of the immune system; the evolutionary processes that operate on large populations of organisms; and the great variety of partially competing partially cooperating controls found in small groups, organizations, and larger societies. All these systems are rich in control but typically control that emerges from complex interactions of many local and diffuse sources. We explore how several different kinds of plausible control mechanisms might be incorporated into CN, and assess their potential benefits with respect to their cost. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,018
test
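Thermometer-code quantization, as in the first neighbor of the record above, maps a real value to a unary code whose leading 1s count the quantization levels the value exceeds. This sketch is the plain deterministic version (the cited technique adds randomization); the range and bit count are illustrative.

def thermometer(value, lo=0.0, hi=1.0, bits=8):
    # Number of levels the value clears within [lo, hi).
    level = int((value - lo) / (hi - lo) * bits)
    level = max(0, min(bits, level))
    return [1] * level + [0] * (bits - level)

print(thermometer(0.13))   # [1, 0, 0, 0, 0, 0, 0, 0]
print(thermometer(0.80))   # [1, 1, 1, 1, 1, 1, 0, 0]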
1-hop neighbor's text information: Empirical Analysis of the General Utility Problem in Machine Learning, : The overfit problem in inductive learning and the utility problem in speedup learning both describe a common behavior of machine learning methods: the eventual degradation of performance due to increasing amounts of learned knowledge. Plotting the performance of the changing knowledge during execution of a learning method (the performance response) reveals similar curves for several methods. The performance response generally indicates an increase to a single peak followed by a more gradual decrease in performance. The similarity in performance responses suggests a model relating performance to the amount of learned knowledge. This paper provides empirical evidence for the existence of a general model by plotting the performance responses of several learning programs. Formal models of the performance response are also discussed. These models can be used to control the amount of learning and avoid degradation of performance. 1-hop neighbor's text information: The Use of Explicit Goals for Knowledge to Guide Inference and Learning. : Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it. This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: How can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general? and, Where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience. 1-hop neighbor's text information: Design and Implementation of a Replay Framework based on a Partial order Planner. : In this paper we describe the design and implementation of the derivation replay framework, dersnlp+ebl (Derivational snlp+ebl), which is based within a partial order planner. dersnlp+ebl replays previous plan derivations by first repeating its earlier decisions in the context of the new problem situation, then extending the replayed path to obtain a complete solution for the new problem. When the replayed path cannot be extended into a new solution, explanation-based learning (ebl) techniques are employed to identify the features of the new problem which prevent this extension. 
These features are then added as censors on the retrieval of the stored case. To keep retrieval costs low, dersnlp+ebl normally stores plan derivations for individual goals, and replays one or more of these derivations in solving multi-goal problems. Cases covering multiple goals are stored only when subplans for individual goals cannot be successfully merged. The aim in constructing the case library is to predict these goal interactions and to store a multi-goal case for each set of negatively interacting goals. We provide empirical results demonstrating the effectiveness of dersnlp+ebl in improving planning performance on randomly-generated problems drawn from a complex domain. Target text information: A comparative utility analysis of case-based reasoning and control-rule learning systems. : The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,584
test
1-hop neighbor's text information: Sequential Thresholds: Context Sensitive Default Extensions: Default logic encounters some conceptual difficulties in representing common sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context which is continuously evolving during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic which makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instantiations pertaining to default logic and sequential thresholding. This provides a link between the two mechanisms as well as a way to integrate the two that can be beneficial to both. Target text information: Uncertain inferences and uncertain conclusions. : Uncertainty may be taken to characterize inferences, their conclusions, their premises or all three. Under some treatments of uncertainty, the inference itself is never characterized by uncertainty. We explore both the significance of uncertainty in the premises and in the conclusion of an argument that involves uncertainty. We argue that for uncertainty to characterize the conclusion of an inference is natural, but that there is an interplay between uncertainty in the premises and uncertainty in the procedure of argument itself. We show that it is possible in principle to incorporate all uncertainty in the premises, rendering uncertainty arguments deductively valid. But we then argue (1) that this does not reflect human argument, (2) that it is computationally costly, and (3) that the gain in simplicity obtained by allowing uncertainty in inference can sometimes outweigh the loss of flexibility it entails. keywords: uncertainty, inference, logic, argument, decision, premises. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,321
test
1-hop neighbor's text information: Scaling Up. : Partially observable Markov decision processes (pomdp's) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of pomdp's is motivated by a need to address realistic problems, existing techniques for finding optimal behavior do not appear to scale well and have been unable to find satisfactory policies for problems with more than a dozen states. After a brief review of pomdp's, this paper discusses several simple solution methods and shows that all are capable of finding near-optimal policies for a selection of extremely small pomdp's taken from the learning literature. In contrast, we show that none are able to solve a slightly larger and noisier problem based on robot navigation. We find that a combination of two novel approaches performs well on these problems and suggest methods for scaling to even larger and more complicated domains. 1-hop neighbor's text information: Efficient dynamic-programming updates in partially observable Markov decision processes. : We examine the problem of performing exact dynamic-programming updates in partially observable Markov decision processes (pomdps) from a computational complexity viewpoint. Dynamic-programming updates are a crucial operation in a wide range of pomdp solution methods and we find that it is intractable to perform these updates on piecewise-linear convex value functions for general pomdps. We offer a new algorithm, called the witness algorithm, which can compute updated value functions efficiently on a restricted class of pomdps in which the number of linear facets is not too great. We compare the witness algorithm to existing algorithms analytically and empirically and find that it is the fastest algorithm over a wide range of pomdp sizes. 1-hop neighbor's text information: Approximating optimal policies for partially observable stochastic domains. : The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case where state information is only partially observable, Partially Observable Markov Decision Processes (POMDPs), has received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases. Target text information: Incremental methods for computing bounds in partially observable Markov decision processes. : Partially observable Markov decision processes (POMDPs) allow one to model complex dynamic decision or control problems that include both action outcome uncertainty and imperfect observability. The control problem is formulated as a dynamic optimization problem with a value function combining costs or rewards from multiple steps. In this paper we propose, analyse and test various incremental methods for computing bounds on the value function for control problems with infinite discounted horizon criteria.
The methods described and tested include novel incremental versions of a grid-based linear interpolation method and a simple lower bound method with Sondik's updates. Both of these can work with arbitrary points of the belief space and can be enhanced by various heuristic point selection strategies. Also introduced is a new method for computing an initial upper bound: the fast informed bound method. This method is able to improve significantly on the standard and commonly used upper bound computed by the MDP-based method. The quality of the resulting bounds is tested on a maze navigation problem with 20 states, 6 actions and 8 observations. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,672
test
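To make the "MDP-based upper bound" mentioned in the target abstract above concrete, here is a minimal sketch under invented toy dynamics: solving the underlying fully observable MDP and projecting its values onto a belief upper-bounds the POMDP value, since exact state knowledge can only help. The transition and reward arrays are hypothetical, not from the paper.

```python
import numpy as np

# Toy model parameters (invented numbers, illustration only).
# T[a, s, s2] = P(s2 | s, a); R[a, s] = expected immediate reward.
T = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.6, 0.4]],   # transitions under action 1
])
R = np.array([
    [1.0, 0.0],                 # rewards under action 0
    [0.0, 2.0],                 # rewards under action 1
])
gamma = 0.95

# Value iteration on the underlying fully observable MDP.
V = np.zeros(2)
for _ in range(10_000):
    Q = R + gamma * (T @ V)     # Q[a, s]; T @ V contracts over s2
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

# For any belief b, sum_s b(s) * V_MDP(s) upper-bounds the true
# POMDP value at b: knowing the state exactly can only help.
b = np.array([0.3, 0.7])
print("MDP-based upper bound at b:", b @ V)
```

The fast informed bound proposed in the paper tightens exactly this quantity; the sketch shows only the standard baseline it improves on.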
1-hop neighbor's text information: Learning when reformulation is appropriate for iterative design. : It is well known that search-space reformulation can improve the speed and reliability of numerical optimization in engineering design. We argue that the best choice of reformulation depends on the design goal, and present a technique for automatically constructing rules that map the design goal into a reformulation chosen from a space of possible reformulations. We tested our technique in the domain of racing-yacht-hull design, where each reformulation corresponds to incorporating constraints into the search space. We applied a standard inductive-learning algorithm, C4.5, to a set of training data describing which constraints are active in the optimal design for each goal encountered in a previous design session. We then used these rules to choose an appropriate reformulation for each of a set of test cases. Our experimental results show that using these reformulations improves both the speed and the reliability of design optimization, outperforming competing methods and approaching the best performance possible. 1-hop neighbor's text information: "Abstraction and Decomposition in Hill-climbing Design Optimization". : The performance of hillclimbing design optimization can be improved by abstraction and decomposition of the design space. Methods for automatically finding and exploiting such abstractions and decompositions are presented in this paper. A technique called "Operator Importance Analysis" finds useful abstractions. It does so by determining which of a given set of operators are the most important for a given class of design problems. Hillclimbing search runs faster when performed using this smaller set of operators. A technique called "Operator Interaction Analysis" finds useful decompositions. It does so by measuring the pairwise interaction between operators. It uses such measurements to form an ordered partition of the operator set. This partition can then be used in a "hierarchic" hillclimbing algorithm which runs faster than ordinary hillclimbing with an unstructured operator set. We have implemented both techniques and tested them in the domain of racing yacht hull design. Our experimental results show that these two methods can produce substantial speedups with little or no loss in quality of the resulting designs. 1-hop neighbor's text information: "Using Modeling Knowledge to Guide Design Space Search". : Automated search of a space of candidate designs seems an attractive way to improve the traditional engineering design process. To make this approach work, however, the automated design system must include both knowledge of the modeling limitations of the method used to evaluate candidate designs and also an effective way to use this knowledge to influence the search process. We suggest that a productive approach is to include this knowledge by implementing a set of model constraint functions which measure how much each modeling assumption is violated, and to influence the search by using the values of these model constraint functions as constraint inputs to a standard constrained nonlinear optimization numerical method. We test this idea in the domain of conceptual design of supersonic transport aircraft, and our experiments indicate that our model constraint communication strategy can decrease the cost of design space search by one or more orders of magnitude. Target text information: Ellman. Learning prototype-selection rules for case-based iterative design.
: The first step for most case-based design systems is to select an initial prototype from a database of previous designs. The retrieved prototype is then modified to tailor it to the given goals. For any particular design goal the selection of a starting point for the design process can have a dramatic effect both on the quality of the eventual design and on the overall design time. We present a technique for automatically constructing effective prototype-selection rules. Our technique applies a standard inductive-learning algorithm, C4.5, to a set of training data describing which particular prototype would have been the best choice for each goal encountered in a previous design session. We have tested our technique in the domain of racing-yacht-hull design, comparing our inductively learned selection rules to several competing prototype-selection methods. Our results show that the inductive prototype-selection method leads to better final designs when the design process is guided by a noisy evaluation function, and that the inductively learned rules will often be more efficient than competing methods. Many automated design systems begin by retrieving an initial prototype from a library of previous designs, using the given design goal as an index to guide the retrieval process [14]. The retrieved prototype is then modified by a set of design modification operators to tailor the selected design to the given goals. In many cases the quality of competing designs can be assessed using domain-specific evaluation functions, and in such cases the design-modification process is often guided by that evaluation function. In the context of such case-based design systems, the choice of an initial prototype can affect both the quality of the final design and the computational cost of obtaining that design, for three reasons. First, prototype selection may impact quality when the prototypes lie in disjoint search spaces. In particular, if the system's design modification operators cannot convert any prototype into any other prototype, the choice of initial prototype will restrict the set of possible designs that can be obtained by any search process. A poor choice of initial prototype may therefore lead to a suboptimal final design. Second, prototype selection may impact quality when the design process is guided by a nonlinear evaluation function with unknown global properties. Since there is no known method that is guaranteed to find the global optimum of an arbitrary nonlinear function [7], most design systems rely on iterative local search methods whose results are sensitive to the initial starting point. Finally, the choice of prototype may have an impact on the time needed to carry out the design modification process: two different starting points may yield the same final design but take very different amounts of time to get there. In design problems where evaluating even just a single design can take tremendous amounts of time, selecting an appropriate initial prototype can be the determining factor in the success or failure of the design process. This paper describes the application of inductive learning [11] to form rules for selecting appropriate prototype designs.
The paper is structured as follows. In Section 2, we describe our inductive method for learning prototype-selection rules. In Section 3, we describe the domain of racing-yacht-hull design, in which we tested our prototype-selection methods. In Sections 4 and 5, we describe the experiments. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2,078
test
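A rough sketch of the inductive prototype-selection idea from the record above. C4.5 itself is not available in standard Python libraries, so scikit-learn's CART-style decision tree stands in for it, and the goal features and best-prototype labels are invented toy data rather than yacht-design measurements:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a design goal (two numeric
# goal parameters); the label records which library prototype gave
# the best final design for that goal in past design sessions.
goals = rng.uniform(size=(200, 2))
best_prototype = (goals[:, 0] + goals[:, 1] > 1.0).astype(int)

# CART-style decision tree standing in for C4.5.
selector = DecisionTreeClassifier(max_depth=3, random_state=0)
selector.fit(goals, best_prototype)

# At design time, map a new goal to a starting prototype before
# any expensive iterative design modification begins.
new_goal = np.array([[0.8, 0.4]])
print("selected prototype:", selector.predict(new_goal)[0])
```

The point of the design choice is that the selection rule is cheap to evaluate, while a bad starting prototype can cost many expensive design evaluations later.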
1-hop neighbor's text information: Generating Declarative Language Bias for Top-Down ILP Algorithms: Many of today's algorithms for Inductive Logic Programming (ILP) put a heavy burden and responsibility on the user, because their declarative bias has to be defined in a rather low-level fashion. To address this issue, we developed a method for generating declarative language bias for top-down ILP systems from high-level declarations. The key feature of our approach is the distinction between a user level and an expert level of language bias declarations. The expert provides abstract meta-declarations, and the user declares the relationship between the meta-level and the given database to obtain a low-level declarative language bias. The suggested languages allow for compact and abstract specifications of the declarative language bias for top-down ILP systems using schemata. We verified several properties of the translation algorithm that generates schemata, and applied it successfully to a few chemical domains. As a consequence, we propose to use a two-level approach to generate declarative language bias. 1-hop neighbor's text information: An investigation of noise-tolerant relational concept learning algorithms. : We discuss the types of noise that may occur in relational learning systems and describe two approaches to addressing noise in a relational concept learning algorithm. We then evaluate each approach experimentally. 1-hop neighbor's text information: Top-down pruning in relational learning. : Pruning is an effective method for dealing with noise in Machine Learning. Recently pruning algorithms, in particular Reduced Error Pruning, have also attracted interest in the field of Inductive Logic Programming. However, it has been shown that these methods can be very inefficient, because most of the time is wasted on generating clauses that explain noisy examples and subsequently pruning these clauses. We introduce a new method which searches for good theories in a top-down fashion to get a better starting point for the pruning algorithm. Experiments show that this approach can significantly lower the complexity of the task as well as increase predictive accuracy. Target text information: A comparison of pruning methods for relational concept learning. : Pre-Pruning and Post-Pruning are two standard methods of dealing with noise in concept learning. Pre-Pruning methods are very efficient, while Post-Pruning methods typically are more accurate, but much slower, because they have to generate an overly specific concept description first. We have experimented with a variety of pruning methods, including two new methods that try to combine and integrate pre- and post-pruning in order to achieve both accuracy and efficiency. This is verified with test series in a chess position classification task. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,915
val
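A minimal sketch of reduced-error post-pruning, one of the two standard approaches the target abstract above contrasts, applied to a single conjunctive rule: grow an overly specific rule on a growing set, then greedily drop conditions on a separate pruning set. All data here is synthetic and the rule language is deliberately tiny; it is an illustration of the idea, not any of the paper's specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary data: 5 boolean features; the target concept is
# f0 AND f1, with roughly 10% label noise flipped in.
X = rng.integers(0, 2, size=(300, 5)).astype(bool)
y = (X[:, 0] & X[:, 1]) ^ (rng.random(300) < 0.1)
X_grow, y_grow = X[:200], y[:200]
X_prune, y_prune = X[200:], y[200:]

def accuracy(conds, X, y):
    """Accuracy of the rule 'all conds true -> positive'."""
    covered = np.all(X[:, conds], axis=1)   # empty conds covers everything
    return float(np.mean(covered == y))

# Grow an overly specific conjunctive rule on the growing set.
conds = []
while True:
    candidates = [(accuracy(conds + [f], X_grow, y_grow), f)
                  for f in range(X.shape[1]) if f not in conds]
    if not candidates:
        break
    best_acc, best_f = max(candidates)
    if best_acc <= accuracy(conds, X_grow, y_grow):
        break
    conds.append(best_f)

# Reduced-error post-pruning: greedily drop conditions as long as
# accuracy on the separate pruning set does not decrease.
improved = True
while improved and conds:
    improved = False
    for f in list(conds):
        pruned = [c for c in conds if c != f]
        if accuracy(pruned, X_prune, y_prune) >= accuracy(conds, X_prune, y_prune):
            conds, improved = pruned, True
print("final rule conditions:", conds)
```

A pre-pruning variant would instead stop growing early, for example by requiring each added condition to pass a minimum-coverage or significance test, trading some accuracy for never building the overly specific rule at all.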