column     type            values / range
content    stringlengths   633 – 9.91k
label      stringclasses   7 values
category   stringclasses   7 values
dataset    stringclasses   1 value
node_id    int64           0 – 2.71k
split      stringclasses   3 values
1-hop neighbor's text information: Automatic Definition of Modular Neural Networks. Adaptive Behavior, : 1-hop neighbor's text information: "Discontinuity in evolution: how different levels of organization imply pre-adaptation", : Target text information: GENE REGULATION AND BIOLOGICAL DEVELOPMENT IN NEURAL NETWORKS: AN EXPLORATORY MODEL: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
644
test
1-hop neighbor's text information: Planning with closed-loop macro actions. : Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning. Conventional model-based reinforcement learning uses primitive actions that last one time step and that can be modeled independently of the learning agent. These can be generalized to macro actions, multi-step actions specified by an arbitrary policy and a way of completing. Macro actions generalize the classical notion of a macro operator in that they are closed loop, uncertain, and of variable duration. Macro actions are needed to represent common-sense higher-level actions such as going to lunch, grasping an object, or traveling to a distant city. This paper generalizes prior work on temporally abstract models (Sutton 1995) and extends it from the prediction setting to include actions, control, and planning. We define a semantics of models of macro actions that guarantees the validity of planning using such models. This paper presents new results in the theory of planning with macro actions and illustrates its potential advantages in a gridworld task. 1-hop neighbor's text information: Multi-time models for temporally abstract planning. : Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning. Current model-based reinforcement learning is based on one-step models that cannot represent common-sense higher-level actions, such as going to lunch, grasping an object, or flying to Denver. This paper generalizes prior work on temporally abstract models [Sutton, 1995] and extends it from the prediction setting to include actions, control, and planning. We introduce a more general form of temporally abstract model, the multi-time model, and establish its suitability for planning and learning by virtue of its relationship to the Bellman equations. This paper summarizes the theoretical framework of multi-time models and illustrates their potential advantages in a gridworld planning task. The need for hierarchical and abstract planning is a fundamental problem in AI (see, e.g., Sacerdoti, 1977; Laird et al., 1986; Korf, 1985; Kaelbling, 1993; Dayan & Hinton, 1993). Model-based reinforcement learning offers a possible solution to the problem of integrating planning with real-time learning and decision-making (Peng & Williams, 1993; Moore & Atkeson, 1993; Sutton & Barto, 1998). However, current model-based reinforcement learning is based on one-step models that cannot represent common-sense, higher-level actions. Modeling such actions requires the ability to handle different, interrelated levels of temporal abstraction. A new approach to modeling at multiple time scales was introduced by Sutton (1995) based on prior work by Singh, Dayan, and Sutton and Pinette. This approach enables models of the environment at different temporal scales to be intermixed, producing temporally abstract models. However, that work was concerned only with predicting the environment. This paper summarizes an extension of the approach including actions and control of the environment [Precup & Sutton, 1997].
1-hop neighbor's text information: Hierarchical recurrent networks for long-term dependencies. : We have already shown that extracting long-term dependencies from sequential data is difficult, both for deterministic dynamical systems such as recurrent networks, and probabilistic models such as hidden Markov models (HMMs) or input/output hidden Markov models (IOHMMs). In practice, to avoid this problem, researchers have used domain specific a-priori knowledge to give meaning to the hidden or state variables representing past context. In this paper, we propose to use a more general type of a-priori knowledge, namely that the temporal dependencies are structured hierarchically. This implies that long-term dependencies are represented by variables with a long time scale. This principle is applied to a recurrent network which includes delays and multiple time scales. Experiments confirm the advantages of such structures. A similar approach is proposed for HMMs and IOHMMs. Target text information: TD models: modeling the world at a mixture of time scales. : Temporal-difference (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem|that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
1,536
val
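The record above is built from TD-learning abstracts. As a point of reference for the prediction setting they describe, here is a minimal tabular TD(0) sketch; the five-state random-walk chain, step size, and episode count are illustrative choices, not taken from any of the cited papers.

```python
import numpy as np

# Minimal tabular TD(0) prediction for a fixed-behavior Markov chain,
# in the spirit of the "prediction problem" the record above describes.
rng = np.random.default_rng(0)
n_states = 5                      # non-terminal states 0..4
V = np.zeros(n_states)            # value estimates
alpha, gamma = 0.1, 1.0           # step size, discount

for episode in range(5000):
    s = n_states // 2             # start in the middle
    while True:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        if s_next < 0:            # left terminal, reward 0
            V[s] += alpha * (0.0 - V[s])
            break
        if s_next >= n_states:    # right terminal, reward 1
            V[s] += alpha * (1.0 - V[s])
            break
        # TD(0): move V[s] toward the one-step bootstrapped target
        V[s] += alpha * (0.0 + gamma * V[s_next] - V[s])
        s = s_next

print(V)  # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```

The bootstrapped target r + γV(s′) is exactly the "difference between temporally successive predictions" that the multi-time-model papers generalize to longer time scales.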
1-hop neighbor's text information: The Observer-Observation Dilemma in Neuro-Forecasting: Reliable Models From Unreliable Data Through CLEARNING: This paper introduces the idea of clearning, of simultaneously cleaning data and learning the underlying structure. The cleaning step can be viewed as top-down processing (the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). After discussing the statistical foundation of the proposed method from a maximum likelihood perspective, we apply clearning to a notoriously hard problem where benchmark performances are very well known: the prediction of foreign exchange rates. On the difficult 1993-1994 test period, clearning in conjunction with pruning yields an annualized return between 35 and 40% (out-of-sample), significantly better than an otherwise identical network trained without cleaning. The network was started with 69 inputs and 15 hidden units and ended up with only 39 non-zero weights between inputs and hidden units. The resulting ultra-sparse final architectures obtained with clearning and pruning are immune against overfitting, even on very noisy problems since the cleaned data allow for a simpler model. Apart from the very competitive performance, clearning gives insight into the data: we show how to estimate the overall signal-to-noise ratio of each input variable, and we show that error estimates for each pattern can be used to detect and remove outliers, and to replace missing or corrupted data by cleaned values. Clearning can be used in any nonlinear regression or classification problem. 1-hop neighbor's text information: Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation, : Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, lack of a distance metric between various models reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models, and concentrating the computational effort at differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems. 1-hop neighbor's text information: A practical Bayesian framework for backpropagation networks. : A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. 
The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a). This framework is due to Gull and Skilling (Gull, 1989a). Target text information: Selecting input variables using mutual information and nonparametric density estimation. : In learning problems where a connectionist network is trained with a finite-sized training set, better generalization performance is often obtained when unneeded weights in the network are eliminated. One source of unneeded weights comes from the inclusion of input variables that provide little information about the output variables. We propose a method for identifying and eliminating these input variables. The method first determines the relationship between input and output variables using nonparametric density estimation and then measures the relevance of input variables using the information-theoretic concept of mutual information. We present results from our method on a simple toy problem and a nonlinear time series. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,355
test
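The target abstract in the record above describes a two-step recipe: estimate the input-output relationship nonparametrically, then score each input by its mutual information with the output. A minimal sketch follows, with a 2-D histogram standing in for the paper's density estimator; the bin count and the synthetic data are arbitrary.

```python
import numpy as np

# Rank input variables by estimated mutual information with the output.
def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                       # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)    # marginal over x
    py = pxy.sum(axis=0, keepdims=True)    # marginal over y
    mask = pxy > 0                         # avoid log(0)
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(1)
n = 5000
x1 = rng.normal(size=n)        # relevant input
x2 = rng.normal(size=n)        # irrelevant input
y = np.sin(x1) + 0.1 * rng.normal(size=n)

print(mutual_information(x1, y))   # clearly positive
print(mutual_information(x2, y))   # near zero -> candidate for removal
```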
1-hop neighbor's text information: A theory and methodology of machine learning. : Target text information: R.S. Learning Evolving Concepts Using Partial Memory Approach. : This paper addresses the problem of learning evolving concepts, that is, concepts whose meaning gradually evolves in time. Solving this problem is important to many applications, for example, building intelligent agents for helping users in Internet search, active vision, automatically updating knowledge-bases, or acquiring profiles of users of telecommunication networks. Requirements for a learning architecture supporting such applications include the ability to incrementally modify concept definitions to accommodate new information, fast learning and recognition rates, low memory needs, and the understandability of computer-created concept descriptions. To address these requirements, we propose a learning architecture based on Variable-Valued Logic, the Star Methodology, and the AQ algorithm. The method uses a partial-memory approach, which means that in each step of learning, the system remembers the current concept descriptions and specially selected representative examples from the past experience. The developed method has been experimentally applied to the problem of computer system intrusion detection. The results show significant advantages of the method in learning speed and memory requirements with only slight decreases in predictive accuracy and concept simplicity when compared to traditional batch-style learning in which all training examples are provided at once. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,428
test
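The target abstract above describes partial-memory learning: retain the current concept description plus a few representative past examples instead of the full training history. The sketch below captures only that storage pattern; the selection rule (keep recent mistakes, capped per class) and the 1-NN stand-in for the AQ rule learner are hypothetical simplifications.

```python
import numpy as np

# Generic partial-memory incremental learner: after each batch, keep only
# a small set of "representative" examples rather than all past data.
class PartialMemoryLearner:
    def __init__(self, per_class=5):
        self.per_class = per_class
        self.X = np.empty((0, 0))
        self.y = np.empty(0, dtype=int)

    def _predict_one(self, x):
        d = np.linalg.norm(self.X - x, axis=1)
        return self.y[np.argmin(d)]

    def predict(self, X):
        return np.array([self._predict_one(x) for x in X])

    def partial_fit(self, X, y):
        if self.X.size == 0:
            keep = np.ones(len(y), dtype=bool)
        else:
            keep = self.predict(X) != y          # keep current mistakes
            X = np.vstack([self.X, X])
            y = np.concatenate([self.y, y])
            keep = np.concatenate([np.ones(len(self.y), dtype=bool), keep])
        # cap memory: retain at most per_class newest kept examples per class
        idx = []
        for c in np.unique(y):
            members = np.flatnonzero(keep & (y == c))
            idx.extend(members[-self.per_class:])
        idx = np.sort(idx)
        self.X, self.y = X[idx], y[idx]
```

Memory stays bounded at per_class examples per class, which is the property the paper trades a small amount of predictive accuracy for.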
1-hop neighbor's text information: "Adding Learning to the Cellular development of Neural Networks: Evolution and the Baldwin Effect," : This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms. Direct Encoding encodes the weights for an a priori fixed neural network architecture. Cellular Encoding encodes both weights and the architecture of the neural network. In previous studies, Direct Encoding and Cellular Encoding have been used to create neural networks for balancing 1 and 2 poles attached to a cart on a fixed track. The poles are balanced by a controller that pushes the cart to the left or the right. In some cases velocity information about the pole and cart is provided as an input; in other cases the network must learn to balance a single pole without velocity information. A careful study of the behavior of these systems suggests that it is possible to balance a single pole with velocity information as an input and without learning to compute the velocity. A new fitness function is introduced that forces the neural network to compute the velocity. By using this new fitness function and tuning the syntactic constraints used with cellular encoding, we achieve a tenfold speedup over our previous study and solve a more difficult problem: balancing two poles when no information about the velocity is provided as input. 1-hop neighbor's text information: Proben1: A set of neural network benchmark problems and benchmarking rules. : Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison. Target text information: An Evolutionary Method to Find Good Building-Blocks for Architectures of Artificial Neural Networks: This paper deals with the combination of Evolutionary Algorithms and Artificial Neural Networks (ANN). A new method is presented, to find good building-blocks for architectures of Artificial Neural Networks. The method is based on Cellular Encoding, a representation scheme by F. Gruau, and on Genetic Programming by J. Koza. First it will be shown that a modified Cellular Encoding technique is able to find good architectures even for non-boolean networks. With the help of a graph-database and a new graph-rewriting method, it is secondly possible to build architectures from modular structures. The information about building-blocks for architectures is obtained by statistically analyzing the data in the graph-database. Simulation results for two real world problems are given. 
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
533
test
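The record above contrasts Direct Encoding (genome = weights of a fixed architecture) with Cellular Encoding (genome grows the architecture itself). The sketch below shows only the simpler Direct Encoding side on XOR; the 2-2-1 architecture, population size, and mutation scale are illustrative.

```python
import numpy as np

# Direct Encoding in miniature: a genome is the weight vector of a fixed
# 2-2-1 network, evolved by a truncation-selection loop on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([0., 1., 1., 0.])

def forward(w, x):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - T) ** 2)   # higher is better

pop = rng.normal(size=(50, 9))                   # 9 weights per genome
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]      # truncation selection
    children = parents[rng.integers(0, 10, 40)] + 0.3 * rng.normal(size=(40, 9))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(np.round(forward(best, X)))                # aims for [0, 1, 1, 0]
```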
1-hop neighbor's text information: Artificial Life as Theoretical Biology: How to do real science with computer simulation: Artificial Life (A-Life) research offers, among other things, a new style of computer simulation for understanding biological systems and processes. But most current A-Life work does not show enough methodological sophistication to count as good theoretical biology. As a first step towards developing a stronger methodology for A-Life, this paper (1) identifies some methodological pitfalls arising from the `computer science influence' in A-Life, (2) suggests some methodological heuristics for A-Life as theoretical biology, (3) notes the strengths of A-Life methods versus previous research methods in biology, (4) examines some open questions in theoretical biology that may benefit from A-Life simulation, and (5) argues that the debate over `Strong A-Life' is not relevant to A-Life's utility for theoretical biology. Target text information: Evolutionary wanderlust: Sexual selection with directional mate preferences. : In the pantheon of evolutionary forces, the optimizing Apollonian powers of natural selection are generally assumed to dominate the dark Dionysian dynamics of sexual selection. But this need not be the case, particularly with a class of selective mating mechanisms called `directional mate preferences' (Kirkpatrick, 1987). In previous simulation research, we showed that nondirectional assortative mating preferences could cause populations to spontaneously split apart into separate species (Todd & Miller, 1991). In this paper, we show that directional mate preferences can cause populations to wander capriciously through phenotype space, under a strange form of runaway sexual selection, with or without the influence of natural selection pressures. When directional mate preferences are free to evolve, they do not always evolve to point in the direction of natural-selective peaks. Sexual selection can thus take on a life of its own, such that mate preferences within a species become a distinct and important part of the environment to which the species' phenotypes adapt. These results suggest a broader conception of `adaptive behavior', in which attracting potential mates becomes as important as finding food and avoiding predators. We present a framework for simulating a wide range of directional and non-directional mate preferences, and discuss some practical and scientific applications of simulating sexual selection. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,161
test
1-hop neighbor's text information: Predicting sunspots and exchange rates with connectionist networks. : We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation. The ultimate goal is prediction accuracy. We analyze two time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. Weight-elimination also manages to extract some part of the dynamics of the notoriously noisy currency exchange rates and makes the network solution interpretable. 1-hop neighbor's text information: Local error bars for nonlinear regression and time series prediction. : We present a new method for obtaining local error bars for nonlinear regression, i.e., estimates of the confidence in predicted values that depend on the input. We approach this problem by applying a maximum-likelihood framework to an assumed distribution of errors. We demonstrate our method first on computer-generated data with locally varying, normally distributed target noise. We then apply it to laser data from the Santa Fe Time Series Competition where the underlying system noise is known quantization error and the error bars give local estimates of model misspecification. In both cases, the method also provides a weighted-regression effect that improves generalization performance. 1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. Target text information: Kazlas and A.S. Weigend. (1995) Direct Multi-Step Time Series Prediction Using TD(λ). : This paper explores the application of Temporal Difference (TD) learning (Sutton, 1988) to forecasting the behavior of dynamical systems with real-valued outputs (as opposed to game-like situations). The performance of TD learning in comparison to standard supervised learning depends on the amount of noise present in the data. In this paper, we use a deterministic chaotic time series from a low-noise laser. For the task of direct five-step ahead predictions, our experiments show that standard supervised learning is better than TD learning.
The TD algorithm can be viewed as linking adjacent predictions. A similar effect can be obtained by sharing the internal representation in the network. We thus compare two architectures for both paradigms: the first architecture (separate hidden units) consists of individual networks for each of the five direct multi-step prediction tasks, the second (shared hidden units) has a single (larger) hidden layer that finds a representation from which all five predictions for the next five steps are generated. For this data set we do not find any significant difference between the two architectures. http://www.cs.colorado.edu/~andreas/Home.html. This paper is available as ftp://ftp.cs.colorado.edu/pub/Time-Series/MyPapers/kazlas.weigend nips7.ps.Z I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,356
test
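The first neighbor abstract in the record above relies on weight-elimination, which adds the complexity penalty λ Σᵢ (wᵢ²/w₀²)/(1 + wᵢ²/w₀²) to the training cost. A small sketch of the penalty and its gradient; the λ and w₀ values here are arbitrary.

```python
import numpy as np

# Weight-elimination penalty: small weights feel a roughly proportional
# pull toward zero, while large weights pay a nearly constant price.
def weight_elimination_penalty(w, lam=1e-3, w0=1.0):
    r = (w / w0) ** 2
    return lam * np.sum(r / (1 + r))

def weight_elimination_grad(w, lam=1e-3, w0=1.0):
    # d/dw of the penalty above; add this to the usual error gradient
    return lam * (2 * w / w0**2) / (1 + (w / w0) ** 2) ** 2

w = np.array([-2.0, -0.1, 0.0, 0.05, 3.0])
print(weight_elimination_penalty(w))
print(weight_elimination_grad(w))   # strongest pull on mid-sized weights
```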
1-hop neighbor's text information: Graphical Models in Applied Multivariate Statistics. : Target text information: DYNAMIC CONDITIONAL INDEPENDENCE MODELS AND MARKOV CHAIN MONTE CARLO METHODS: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
603
test
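The target title in the record above pairs dynamic conditional-independence models with Markov chain Monte Carlo. Since the record carries no abstract, the sketch below shows only the generic ingredient, a random-walk Metropolis-Hastings sampler; the standard-normal target and step size are illustrative and unrelated to the paper's models.

```python
import numpy as np

# Generic random-walk Metropolis-Hastings: propose a local move, accept
# with probability min(1, p(proposal)/p(current)).
def metropolis_hastings(log_p, x0, n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, log_p(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.normal()
        lp_prop = log_p(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

s = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
print(s.mean(), s.var())   # roughly 0 and 1 for a standard normal
```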
1-hop neighbor's text information: Is Transfer Inductive?: Work is currently underway to devise learning methods which are better able to transfer knowledge from one task to another. The process of knowledge transfer is usually viewed as logically separate from the inductive procedures of ordinary learning. However, this paper argues that this `separatist' view leads to a number of conceptual difficulties. It offers a task analysis which situates the transfer process inside a generalised inductive protocol. It argues that transfer should be viewed as a subprocess within induction and not as an independent procedure for transporting knowledge between learning trials. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Avoiding overfitting with BP-SOM. : Overfitting is a well-known problem in the fields of symbolic and connectionist machine learning. It describes the deterioration of generalisation performance of a trained model. In this paper, we investigate the ability of a novel artificial neural network, bp-som, to avoid overfitting. bp-som is a hybrid neural network which combines a multi-layered feed-forward network (mfn) with Kohonen's self-organising maps (soms). During training, supervised back-propagation learning and unsupervised som learning cooperate in finding adequate hidden-layer representations. We show that bp-som outperforms standard backpropagation, and also back-propagation with weight decay, when dealing with the problem of overfitting. In addition, we show that bp-som succeeds in preserving generalisation performance under hidden-unit pruning, where both other methods fail. Target text information: Measuring the difficulty of specific learning problems. : Existing complexity measures from contemporary learning theory cannot be conveniently applied to specific learning problems (e.g., training sets). Moreover, they are typically non-generic, i.e., they necessitate making assumptions about the way in which the learner will operate. The lack of a satisfactory, generic complexity measure for learning problems poses difficulties for researchers in various areas; the present paper puts forward an idea which may help to alleviate these. It shows that supervised learning problems fall into two generic complexity classes, only one of which is associated with computational tractability. By determining which class a particular problem belongs to, we can thus effectively evaluate its degree of generic difficulty. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,489
test
1-hop neighbor's text information: Genetic programming and redundancy. : The Genetic Programming optimization method (GP) elaborated by John Koza [Koza, 1992] is a variant of Genetic Algorithms. The search space of the problem domain consists of computer programs represented as parse trees, and the crossover operator is realized by an exchange of subtrees. Empirical analyses show that large parts of those trees are never used or evaluated, which means that these parts of the trees are irrelevant for the solution or redundant. This paper is concerned with the identification of the redundancy occurring in GP. It starts with a mathematical description of the behavior of GP, and the conclusions drawn from that description among others explain the "size problem", which denotes the phenomenon that the average size of trees in the population grows with time. 1-hop neighbor's text information: Causality in genetic programming. : Machine learning aims towards the acquisition of knowledge based on either experience from the interaction with the external environment or by analyzing the internal problem-solving traces. Both approaches can be implemented in the Genetic Programming (GP) paradigm. [Hillis, 1990] proves in an ingenious way how the first approach can work. There have not been any significant tests to prove that GP can take advantage of its own search traces. This paper presents an approach to automatic discovery of functions in GP based on the ideas of discovery of useful building blocks by analyzing the evolution trace, generalizing of blocks to define new functions and finally adapting of the problem representation on-the-fly. Adaptation of the representation determines a hierarchical organization of the extended function set which enables a restructuring of the search space so that solutions can be found more easily. Complexity measures of solution trees are defined for an adaptive representation framework and empirical results are presented. This material is based on work supported by the National Science Foundation under Grant numbered IRI-8903582, by NIH/PHS research grant numbered 1 R24 RR06853-02, and by a Human Science Frontiers Program research grant. The government has certain rights in this material. 1-hop neighbor's text information: Fitness causes bloat: Mutation. : In many cases programs' lengths increase (known as "bloat", "fluff" and increasing "structural complexity") during artificial evolution. We show bloat is not specific to genetic programming and suggest it is inherent in search techniques with discrete variable length representations using simple static evaluation functions. We investigate the bloating characteristics of three non-population and one population based search techniques using a novel mutation operator. An artificial ant following the Santa Fe trail problem is solved by simulated annealing, hill climbing, strict hill climbing and population based search using two variants of the new subtree based mutation operator. As predicted, bloat is observed when using unbiased mutation and is absent in simulated annealing and both hill climbers when using the length-neutral mutation; however, bloat occurs with both mutations when using a population. We conclude that there are two causes of bloat. Target text information: Fitness causes bloat in variable size representations.
: We argue, based upon the numbers of representations of given length, that increase in representation length is inherent in using a fixed evaluation function with a discrete but variable length representation. Two examples of this are analysed, including the use of Price's Theorem. Both examples confirm that the tendency for solutions to grow in size is caused by fitness based selection. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,116
train
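The target abstract above leans on Price's Theorem: under fitness-proportional selection alone, the expected change in mean representation length z is cov(f, z)/mean(f) for fitness f. The toy check below uses made-up lengths and fitnesses (no actual GP run) just to confirm the identity numerically.

```python
import numpy as np

# Price's Theorem check: E[z_selected] - mean(z) = cov(f, z) / mean(f).
rng = np.random.default_rng(0)
z = rng.integers(1, 50, size=10000).astype(float)   # program lengths
f = 1.0 + 0.02 * z + rng.random(10000)              # mildly length-correlated fitness

# fitness-proportional selection
p = f / f.sum()
selected = rng.choice(len(z), size=200000, p=p)
observed = z[selected].mean() - z.mean()

predicted = np.cov(f, z)[0, 1] / f.mean()
print(observed, predicted)   # the two should closely agree
```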
1-hop neighbor's text information: Digital Neural Networks: Demands for applications requiring massive parallelism in symbolic environments have given rebirth to research in models labeled as neural networks. These models are made up of many simple nodes which are highly interconnected such that computation takes place as data flows amongst the nodes of the network. At present, most models have proposed nodes based on simple analog functions, where inputs are multiplied by weights and summed, the total then optionally being transformed by an arbitrary function at the node. Learning in these systems is accomplished by adjusting the weights on the input lines. This paper discusses the use of digital (boolean) nodes as a primitive building block in connectionist systems. Digital nodes naturally engender new paradigms and mechanisms for learning and processing in connectionist networks. The digital nodes are used as the basic building block of a class of models called ASOCS (Adaptive Self-Organizing Concurrent Systems). These models combine massive parallelism with the ability to adapt in a self-organizing fashion. Basic features of standard neural network learning algorithms and those proposed using digital nodes are compared and contrasted. The latter mechanisms can lead to vastly improved efficiency for many applications. 1-hop neighbor's text information: ASOCS: A Multilayered Connectionist Network with Guaranteed Learning of Arbitrary Mappings: This paper reviews features of a new class of multilayer connectionist architectures known as ASOCS (Adaptive Self-Organizing Concurrent Systems). ASOCS is similar to most decision-making neural network models in that it attempts to learn an adaptive set of arbitrary vector mappings. However, it differs dramatically in its mechanisms. ASOCS is based on networks of adaptive digital elements which self-modify using local information. Function specification is entered incrementally by use of rules, rather than complete input-output vectors, such that a processing network is able to extract critical features from a large environment and give output in a parallel fashion. Learning also uses parallelism and self-organization such that a new rule is completely learned in time linear with the depth of the network. The model guarantees learning of any arbitrary mapping of boolean input-output vectors. The model is also stable in that learning does not erase any previously learned mappings except those explicitly contradicted. 1-hop neighbor's text information: Neural Network Applicability: Classifying the Problem Space, : The tremendous current effort to propose neurally inspired methods of computation forces closer scrutiny of real world application potential of these models. This paper categorizes applications into classes and particularly discusses features of applications which make them efficiently amenable to neural network methods. Computational machines do deterministic mappings of inputs to outputs and many computational mechanisms have been proposed for problem solutions. Neural network features include parallel execution, adaptive learning, generalization, and fault tolerance. Often, much effort is given to a model and applications which can already be implemented in a much more efficient way with an alternate technology. Neural networks are potentially powerful devices for many classes of applications, but not all.
However, it is proposed that the class of applications for which neural networks are efficient is both large and commonly occurring in nature. Comparison of supervised, unsupervised, and generalizing systems is also included. Target text information: "Models of Parallel Adaptive Logic," : This paper overviews a proposed architecture for adaptive parallel logic referred to as ASOCS (Adaptive Self-Organizing Concurrent System). The ASOCS approach is based on an adaptive network composed of many simple computing elements which operate in a parallel asynchronous fashion. Problem specification is given to the system by presenting if-then rules in the form of boolean conjunctions. Rules are added incrementally and the system adapts to the changing rule-base. Adaptation and data processing form two separate phases of operation. During processing the system acts as a parallel hardware circuit. The adaptation process is distributed amongst the computing elements and efficiently exploits parallelism. Adaptation is done in a self-organizing fashion and takes place in time linear with the depth of the network. This paper summarizes the overall ASOCS concept and overviews three specific architectures. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,812
test
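The ASOCS records above describe function specification by if-then rules whose antecedents are boolean conjunctions, processed in parallel. The sketch below shows only that processing side, with a hypothetical rule format (a mask of cared-about variables plus required values); the papers' self-organizing adaptation phase is not modeled.

```python
import numpy as np

# Evaluate a base of conjunctive if-then rules over boolean input vectors
# in parallel; the output is the OR over rule firings (one output bit).
def make_rule(n_vars, conditions):
    # conditions: {var_index: required_bool}
    mask = np.zeros(n_vars, dtype=bool)
    vals = np.zeros(n_vars, dtype=bool)
    for i, v in conditions.items():
        mask[i], vals[i] = True, v
    return mask, vals

def fire(rules, X):
    out = np.zeros(len(X), dtype=bool)
    for mask, vals in rules:
        # a rule fires where every cared-about input matches its value
        out |= ((X == vals) | ~mask).all(axis=1)
    return out

rules = [make_rule(4, {0: True, 2: False}),   # if x0 and not x2 then out
         make_rule(4, {1: True, 3: True})]    # if x1 and x3 then out
X = np.array([[1, 0, 0, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=bool)
print(fire(rules, X))   # [ True  True False]
```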
1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models 1-hop neighbor's text information: Parsimonious least norm approximation. : A theoretically justifiable fast finite successive linear approximation algorithm is proposed for obtaining a parsimonious solution to a corrupted linear system Ax = b + p, where the corruption p is due to noise or error in measurement. The proposed linear-programming-based algorithm finds a solution x by parametrically minimizing the number of nonzero elements in x and the error ‖Ax − b − p‖₁. Numerical tests on a signal-processing-based example indicate that the proposed method is comparable to a method that parametrically minimizes the 1-norm of the solution x and the error ‖Ax − b − p‖₁, and that both methods are superior, by orders of magnitude, to solutions obtained by least squares as well as by combinatorially choosing an optimal solution with a specific number of nonzero elements. 1-hop neighbor's text information: Irrelevant features and the subset selection problem. : We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets. Target text information: Street. Feature selection via mathematical programming. : The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid or by a concave exponential on the nonnegative real line, or it can be treated exactly by considering the equivalent linear program with equilibrium constraints (LPEC).
Computational tests of these three approaches on publicly available real-world databases have been carried out and compared with an adaptation of the optimal brain damage (OBD) method for reducing neural network complexity. One feature selection algorithm via concave minimization (FSV) reduced cross-validation error on a cancer prognosis database by 35.4% while reducing problem features from 32 to 4. Feature selection is an important problem in machine learning [18, 15, 16, 17, 33]. In its basic form the problem consists of eliminating as many of the features in a given problem as possible, while still carrying out a preassigned task with acceptable accuracy. Having a minimal number of features often leads to better generalization and simpler models that can be more easily interpreted. In the present work, our task is to discriminate between two given sets in an n-dimensional feature space by using as few of the given features as possible. We shall formulate this problem as a mathematical program with a parametric objective function that will attempt to achieve this task by generating a separating plane in a feature space of as small a dimension as possible while minimizing the average distance of misclassified points to the plane. One of the computational experiments that we carried out on our feature selection procedure showed its effectiveness, not only in minimizing the number of features selected, but also in quickly recognizing and removing spurious random features that were introduced. Thus, on the Wisconsin Prognosis Breast Cancer WPBC database [36] with a feature space of 32 dimensions and 6 random features added, one of our algorithms FSV (11) immediately removed the 6 random features as well as 28 of the original features resulting in a separating plane in a 4-dimensional reduced feature space. By using tenfold cross-validation [35], separation error in the 4-dimensional space was reduced 35.4% from the corresponding error in the original problem space. (See Section 3 for details.) We note that mathematical programming approaches to the feature selection problem have been recently proposed in [4, 22]. Even though the approach of [4] is based on an LPEC formulation, both the LPEC and its method of solution are different from the ones used here. The polyhedral concave minimization approach of [22] is principally involved with theoretical considerations of one specific algorithm and no cross-validatory results are given. Other effective computational applications of mathematical programming to neural networks are given in [30, 26]. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,426
val
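The target abstract above suppresses features by approximating the step function with a concave exponential, trading misclassification distance against an approximate count of nonzero weights. The sketch below mimics that objective with λ Σᵢ (1 − e^(−α|wᵢ|)); smoothing |w| as sqrt(w² + ε) and using a general-purpose optimizer are simplifications of the paper's successive linear approximation algorithm, and all constants are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n))  # 2 real features

lam, alpha, eps = 0.05, 5.0, 1e-8

def objective(theta):
    w, gamma = theta[:d], theta[d]
    margins = y * (X @ w - gamma)
    hinge = np.maximum(0.0, 1.0 - margins).mean()   # misclassification distance
    aw = np.sqrt(w ** 2 + eps)                      # smooth stand-in for |w|
    return hinge + lam * np.sum(1.0 - np.exp(-alpha * aw))

res = minimize(objective, np.zeros(d + 1), method="Powell")
w = res.x[:d]
print(np.flatnonzero(np.abs(w) > 0.05))   # ideally picks features 0 and 1
```

Because the concave-exponential term saturates, each retained feature costs roughly a fixed λ, so features that buy no margin are driven to zero.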
1-hop neighbor's text information: The Management of Context-Sensitive Features: A Review of Strategies: In this paper, we review five heuristic strategies for handling context-sensitive features in supervised machine learning from examples. We discuss two methods for recovering lost (implicit) contextual information. We mention some evidence that hybrid strategies can have a synergetic effect. We then show how the work of several machine learning researchers fits into this framework. While we do not claim that these strategies exhaust the possibilities, it appears that the framework includes all of the techniques that can be found in the published literature on context-sensitive learning. 1-hop neighbor's text information: Case-Based Sonogram Classification: This report replicates and extends results reported by Naval Air Warfare Center (NAWC) personnel on the automatic classification of sonar images. They used novel case-based reasoning systems in their empirical studies, but did not obtain comparative analyses using standard classification algorithms. Therefore, the quality of the NAWC results were unknown. We replicated the NAWC studies and also tested several other classifiers (i.e., both case-based and otherwise) from the machine learning literature. These comparisons and their ramifications are detailed in this paper. Next, we investigated Fala and Walker's two suggestions for future work (i.e., on combining their similarity functions and on an alternative case representation). Finally, we describe several ways to incorporate additional domain-specific knowledge when applying case-based classifiers to similar tasks. 1-hop neighbor's text information: Improving minority class prediction using case-specific feature weights. : This paper addresses the problem of handling skewed class distributions within the case-based learning (CBL) framework. We first present as a baseline an information-gain-weighted CBL algorithm and apply it to three data sets from natural language processing (NLP) with skewed class distributions. Although overall performance of the baseline CBL algorithm is good, we show that the algorithm exhibits poor performance on minority class instances. We then present two CBL algorithms designed to improve the performance of minority class predictions. Each variation creates test-case-specific feature weights by first observing the path taken by the test case in a decision tree created for the learning task, and then using path-specific information gain values to create an appropriate weight vector for use during case retrieval. When applied to the NLP data sets, the algorithms are shown to significantly increase the accuracy of minority class predictions while maintaining or improving over all classification accuracy. Target text information: Concept learning and flexible weighting. : We previously introduced an exemplar model, named GCM-ISW, that exploits a highly flexible weighting scheme. Our simulations showed that it records faster learning rates and higher asymptotic accuracies on several artificial categorization tasks than models with more limited abilities to warp input spaces. This paper extends our previous work; it describes experimental results that suggest human subjects also invoke such highly flexible schemes. In particular, our model provides significantly better fits than models with less flexibility, and we hypothesize that humans selectively weight attributes depending on an item's location in the input space. 
We need more flexible models of concept learning. Many theories of human concept learning posit that concepts are represented by prototypes (Reed, 1972) or exemplars (Medin & Schaffer, 1978). Prototype models represent concepts by the "best example" or "central tendency" of the concept. A new item belongs in a category C if it is relatively similar to C's prototype. Prototype models are relatively inflexible; they discard a great deal of information that people use during concept learning (e.g., the number of exemplars in a concept (Homa & Cultice, 1984), the variability of features (Fried & Holyoak, 1984), correlations between features (Medin et al., 1982), and the particular exemplars used (Whittlesea, 1987)). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,074
test
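The target record above extends exemplar models of the GCM family, where similarity to a stored exemplar decays exponentially with attention-weighted distance and class evidence is the summed similarity to that class's exemplars. A minimal GCM sketch with fixed global weights follows; the record's contribution is precisely to let such weights vary with the item's location in input space, which this sketch does not do.

```python
import numpy as np

# Generalized Context Model: exponential similarity over a weighted
# city-block distance, class evidence = summed similarity per class.
def gcm_predict(x, exemplars, labels, weights, c=2.0):
    d = np.abs(exemplars - x) @ weights          # weighted city-block distance
    sim = np.exp(-c * d)
    classes = np.unique(labels)
    evidence = np.array([sim[labels == k].sum() for k in classes])
    return classes[np.argmax(evidence)], evidence / evidence.sum()

exemplars = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
labels = np.array([0, 0, 1, 1])
weights = np.array([0.5, 0.5])                   # attention weights, sum to 1
print(gcm_predict(np.array([0.15, 0.15]), exemplars, labels, weights))
```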
1-hop neighbor's text information: Estimating the square root of a density via compactly supported wavelets. : A large body of nonparametric statistical literature is devoted to density estimation. Overviews are given in Silverman (1986) and Izenman (1991). This paper addresses the problem of univariate density estimation in a novel way. Our approach falls in the class of so called projection estimators, introduced by Cencov (1962). The orthonormal basis used is a basis of compactly supported wavelets from Daubechies' family. Kerkyacharian and Picard (1992, 1993), Donoho et al. (1996), and Delyon and Juditsky (1993), among others, applied wavelets in density estimation. The local nature of wavelet functions makes the wavelet estimator superior to projection estimators that use classical orthonormal bases (Fourier, Hermite, etc.). Instead of estimating the unknown density directly, we estimate the square root of the density, which enables us to control the positiveness and the L₁ norm of the density estimate. However, in that approach one needs a pre-estimator of the density to calculate sample wavelet coefficients. We describe VISUSTOP, a data-driven procedure for determining the maximum number of levels in the wavelet density estimator. Coefficients in the selected levels are thresholded to make the estimator parsimonious. 1-hop neighbor's text information: M (1992a). Minimax risk over ℓ_p-balls for ℓ_q loss. : Consider estimating the mean vector θ from data N_n(θ, σ²I) with ℓ_q norm loss, q ≥ 1, when θ is known to lie in an n-dimensional ℓ_p ball, p ∈ (0, ∞). For large n, the ratio of minimax linear risk to minimax risk can be arbitrarily large if p < q. Obvious exceptions aside, the limiting ratio equals 1 only if p = q = 2. Our arguments are mostly indirect, involving a reduction to a univariate Bayes minimax problem. When p < q, simple non-linear co-ordinatewise threshold rules are asymptotically minimax at small signal-to-noise ratios, and within a bounded factor of asymptotic minimaxity in general. Our results are basic to a theory of estimation in Besov spaces 1-hop neighbor's text information: I.M.: Adapting to unknown smoothness via wavelet shrinkage. : We attempt to recover a function of unknown smoothness from noisy, sampled data. We introduce a procedure, SureShrink, which suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein Unbiased Estimate of Risk (Sure) for threshold estimates. The computational effort of the overall procedure is order N log(N) as a function of the sample size N. SureShrink is smoothness-adaptive: if the unknown function contains jumps, the reconstruction (essentially) does also; if the unknown function has a smooth piece, the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness-adaptive: it is near-minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoothing methods (kernels, splines, and orthogonal series estimates), even with optimal choices of the smoothing parameter, would be unable to perform in a near-minimax way over many spaces in the Besov scale. Acknowledgements. The first author was supported at U.C.
Berkeley by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from the AT&T Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12, and by a grant from the AT&T Foundation. Target text information: Density estimation by wavelet thresholding. : Density estimation is a commonly used test case for non-parametric estimation methods. We explore the asymptotic properties of estimators based on thresholding of empirical wavelet coefficients. Minimax rates of convergence are studied over a large range of Besov function classes B^s_{p,q} and for a range of global L_{p′} error measures, 1 ≤ p′ < ∞. A single wavelet threshold estimator is asymptotically minimax within logarithmic terms simultaneously over a range of spaces and error measures. In particular, when p′ > p, some form of non-linearity is essential, since the minimax linear estimators are suboptimal by polynomial powers of n. A second approach, using an approximation of a Gaussian white noise model in a Mallows metric, is used. Acknowledgements: We thank Alexandr Sakhanenko for helpful discussions and references to his work on Berry-Esseen theorems used in Section 5. This work was supported in part by NSF DMS 92-09130. The second author would like to thank Universite de I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,217
train
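The records above all revolve around thresholding empirical wavelet coefficients. The sketch below shows the idea on the simpler signal-denoising task (density estimation thresholds its coefficients analogously), assuming the PyWavelets package is available; the db4 wavelet, MAD noise estimate, and universal threshold σ√(2 log n) are conventional choices rather than the papers' exact recipes.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
n = 1024
t = np.linspace(0, 1, n)
signal = np.piecewise(t, [t < 0.3, t >= 0.3], [0.0, 1.0]) + np.sin(6 * np.pi * t)
noisy = signal + 0.3 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4")
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest level
thresh = sigma * np.sqrt(2 * np.log(n))               # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print(np.mean((noisy - signal) ** 2), np.mean((denoised[:n] - signal) ** 2))
```

Soft-thresholding kills the small, noise-dominated coefficients while keeping the few large ones that carry jumps, which is why the reconstruction keeps discontinuities sharp.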
1-hop neighbor's text information: Generalized queries in probabilistic context-free grammars. : Probabilistic context-free grammars (PCFGs) provide a simple way to represent a particular class of distributions over sentences in a context-free language. Efficient parsing algorithms for answering particular queries about a PCFG (i.e., calculating the probability of a given sentence, or finding the most likely parse) have been developed, and applied to a variety of pattern-recognition problems. We extend the class of queries that can be answered in several ways: (1) allowing missing tokens in a sentence or sentence fragment, (2) supporting queries about intermediate structure, such as the presence of particular nonterminals, and (3) flexible conditioning on a variety of types of evidence. Our method works by constructing a Bayesian network to represent the distribution of parse trees induced by a given PCFG. The network structure mirrors that of the chart in a standard parser, and is generated using a similar dynamic-programming approach. We present an algorithm for constructing Bayesian networks from PCFGs, and show how queries or patterns of queries on the network correspond to interesting queries on PCFGs. The network formalism also supports extensions to encode various context sensitivities within the probabilistic dependency structure. 1-hop neighbor's text information: Global conditioning for probabilistic inference in belief networks. : In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully. 1-hop neighbor's text information: "Bucket elimination: A unifying framework for probabilistic inference," : Probabilistic inference algorithms for finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility and for updating belief are reformulated as an elimination-type algorithm called bucket elimination. This emphasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining conditioning and elimination within this framework. Bounds on complexity are given for all the algorithms as a function of the problem's structure. Target text information: "Topological parameters for time-space tradeoff," : In this paper we propose a family of algorithms combining tree-clustering with conditioning that trade space for time. Such algorithms are useful for reasoning in probabilistic and deterministic networks as well as for accomplishing optimization tasks. By analyzing the problem structure it will be possible to select from a spectrum the algorithm that best meets a given time-space specification.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
60
test
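The record above turns on elimination-style inference: multiply the factors that mention a variable, then sum that variable out, and repeat. A minimal, self-contained sketch of that step on an invented two-link chain network A -> B -> C (illustration only, not code from any of the cited papers):

```python
# Toy conditional probability tables; all numbers are invented.
P_A = {0: 0.6, 1: 0.4}                                  # P(A)
P_B_given_A = {(0, 0): 0.7, (0, 1): 0.3,
               (1, 0): 0.2, (1, 1): 0.8}                # P(B | A)
P_C_given_B = {(0, 0): 0.9, (0, 1): 0.1,
               (1, 0): 0.4, (1, 1): 0.6}                # P(C | B)

def sum_out(prior, conditional):
    """Compute P(child) = sum_parent P(parent) * P(child | parent)."""
    out = {}
    for (parent, child), p in conditional.items():
        out[child] = out.get(child, 0.0) + prior[parent] * p
    return out

P_B = sum_out(P_A, P_B_given_A)   # eliminate A
P_C = sum_out(P_B, P_C_given_B)   # eliminate B
print(P_C)                        # {0: 0.65, 1: 0.35}
```

Conditioning, the other half of the time-space tradeoff discussed above, would instead instantiate a variable to each of its values and run this elimination once per instantiation, trading repeated work for smaller intermediate factors.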
1-hop neighbor's text information: Learning an Optimally Accurate Representational System: The multiple extension problem arises because a default theory can use different subsets of its defaults to propose different, mutually incompatible, answers to some queries. This paper presents an algorithm that uses a set of observations to learn a credulous version of this default theory that is (essentially) "optimally accurate". In more detail, we can associate a given default theory with a set of related credulous theories R = {R_i}, where each R_i uses its own total ordering of the defaults to determine which single answer to return for each query. Our goal is to select the credulous theory that has the highest "expected accuracy", where each R_i's expected accuracy is the probability that the answer it produces to a query will correspond correctly to the world. Unfortunately, a theory's expected accuracy depends on the distribution of queries, which is usually not known. Moreover, the task of identifying the optimal R_opt ∈ R, even given that distribution information, is intractable. This paper presents a method, OptAcc, that sidesteps these problems by using a set of samples to estimate the unknown distribution, and by hill-climbing to a local optimum. In particular, given any parameters ε, δ > 0, OptAcc produces an R_oa ∈ R whose expected accuracy is, with probability at least 1 − δ, within ε of a local optimum. Appeared in ECAI Workshop on Theoretical Foundations of Knowledge Representation and Reasoning, 1-hop neighbor's text information: A Statistical Approach to Solving the EBL Utility Problem, : Many "learning from experience" systems use information extracted from problem solving experiences to modify a performance element PE, forming a new element PE' that can solve these and similar problems more efficiently. However, as transformations that improve performance on one set of problems can degrade performance on other sets, the new PE' is not always better than the original PE; this depends on the distribution of problems. We therefore seek the performance element whose expected performance, over this distribution, is optimal. Unfortunately, the actual distribution, which is needed to determine which element is optimal, is usually not known. Moreover, the task of finding the optimal element, even knowing the distribution, is intractable for most interesting spaces of elements. This paper presents a method, palo, that side-steps these problems by using a set of samples to estimate the unknown distribution, and by using a set of transformations to hill-climb to a local optimum. This process is based on a mathematically rigorous form of utility analysis: in particular, it uses statistical techniques to determine whether the result of a proposed transformation will be better than the original system. We also present an efficient way of implementing this learning system in the context of a general class of performance elements, and include empirical evidence that this approach can work effectively. * Much of this work was performed at the University of Toronto, where it was supported by the Institute for Robotics and Intelligent Systems and by an operating grant from the National Science and Engineering Research Council of Canada. We also gratefully acknowledge receiving many helpful comments from William Cohen, Dave Mitchell, Dale Schuurmans and the anonymous referees. 1-hop neighbor's text information: On the sample complexity of finding good search strategies. 
: A satisficing search problem consists of a set of probabilistic experiments to be performed in some order, without repetitions, until a satisfying configuration of successes and failures has been reached. The cost of performing the experiments depends on the order chosen. Earlier work has concentrated on finding optimal search strategies in special cases of this model, such as search trees and and-or graphs, when the cost function and the success probabilities for the experiments are given. In contrast, we study the complexity of "learning" an approximately optimal search strategy when some of the success probabilities are not known at the outset. Working in the fully general model, we show that if n is the number of unknown probabilities, and C is the maximum cost of performing all the experiments, then Target text information: Probably approximately optimal satisficing strategies. : A satisficing search problem consists of a set of probabilistic experiments to be performed in some order, seeking a satisfying configuration of successes and failures. The expected cost of the search depends both on the success probabilities of the individual experiments, and on the search strategy, which specifies the order in which the experiments are to be performed. A strategy that minimizes the expected cost is optimal. Earlier work has provided "optimizing functions" that compute optimal strategies for certain classes of search problems from the success probabilities of the individual experiments. We extend those results by providing a general model of such strategies, and an algorithm pao that identifies an approximately optimal strategy when the probability values are not known. The algorithm first estimates the relevant probabilities from a number of trials of each undetermined experiment, and then uses these estimates, and the proper optimizing function, to identify a strategy whose cost is, with high probability, close to optimal. We also show that if the search problem can be formulated as an and-or tree, then the pao algorithm can also "learn while doing", i.e. gather the necessary statistics while performing the search. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
2,487
test
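A hedged sketch of the pao idea from the record above, for the simplest satisficing case in which experiments run until the first success. Under that model, ordering by decreasing estimated p/c is the classical optimizing function; the probabilities, costs, and trial counts below are invented:

```python
import random

random.seed(0)
true_p = {"e1": 0.2, "e2": 0.7, "e3": 0.5}    # unknown to the learner
cost   = {"e1": 1.0, "e2": 5.0, "e3": 2.0}    # known costs

def estimate(name, trials=200):
    """Bernoulli estimate of an experiment's success probability."""
    return sum(random.random() < true_p[name] for _ in range(trials)) / trials

p_hat = {e: estimate(e) for e in true_p}
strategy = sorted(true_p, key=lambda e: p_hat[e] / cost[e], reverse=True)

def expected_cost(order):
    """True expected cost when the search stops at the first success."""
    total, reach = 0.0, 1.0
    for e in order:
        total += reach * cost[e]      # pay this cost if the search got here
        reach *= 1.0 - true_p[e]      # continue only if the experiment failed
    return total

print(strategy, expected_cost(strategy))
```

With 200 trials per experiment the estimates are usually good enough that the printed strategy matches the true optimum here (e3, e1, e2), which is the paper's "probably approximately optimal" guarantee in miniature.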
1-hop neighbor's text information: Blind separation of delayed sources based on information maximisation, : Blind separation of independent sources from their convolutive mixtures is a problem in many real world multi-sensor applications. In this paper we present a solution to this problem based on the information maximization principle, which was recently proposed by Bell and Sejnowski for the case of blind separation of instantaneous mixtures. We present a feedback network architecture capable of coping with convolutive mixtures, and we derive the adaptation equations for the adaptive filters in the network by maximizing the information transferred through the network. Examples using speech signals are presented to illustrate the algorithm. 1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. * Please send comments to [email protected]. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. 1-hop neighbor's text information: "Adaptive source separation without prewhitening," : Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Closed-form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach. 
Target text information: Working Paper IS-97-22 (Information Systems) A First Application of Independent Component Analysis to Extracting Structure: This paper discusses the application of a modern signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). This can be viewed as a factorization of the portfolio since joint probabilities become simple products in the coordinate system of the ICs. We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent but large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. Independent component analysis is a potentially powerful method of analyzing and understanding driving mechanisms in financial markets. There are further promising applications to risk management since ICA focuses on higher order statistics. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
671
test
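The infomax principle in the record above has a compact batch form: with u = Wx and a logistic nonlinearity g, the natural-gradient update is W += eta * (I + (1 - 2 g(u)) u^T) W. A sketch on an invented two-source mixture (not the stock-return data of the target paper); convergence at these settings is plausible but not guaranteed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
s = np.vstack([rng.laplace(size=n),
               rng.laplace(size=n)])         # two super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # unknown mixing matrix
x = A @ s                                    # observed mixtures

W = np.eye(2)
eta = 0.05
for _ in range(500):                         # batch natural-gradient steps
    u = W @ x
    y = 1.0 / (1.0 + np.exp(-u))             # logistic nonlinearity g(u)
    W += eta * ((np.eye(2) + (1.0 - 2.0 * y) @ u.T / n) @ W)

print(W @ A)   # approximately a scaled permutation if separation succeeded
```

The final check W @ A is the usual diagnostic: independent components are recovered only up to permutation and scale, which is also why the target paper reasons about thresholded, weighted ICs rather than raw amplitudes.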
1-hop neighbor's text information: Machine learning research: Four current directions. : Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up supervised learning algorithms, (c) reinforcement learning, and (d) learning complex stochastic models. 1-hop neighbor's text information: A system for induction of oblique decision trees. : This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees. Target text information: A hierarchical ensemble of decision trees applied to classifying data from a psychological experiment: Classifying by hand complex data coming from psychology experiments can be a long and difficult task, because of the quantity of data to classify and the amount of training it may require. One way to alleviate this problem is to use machine learning techniques. We built a classifier based on decision trees that reproduces the classifying process used by two humans on a sample of data and that learns how to classify unseen data. The automatic classifier proved to be more accurate, more constant and much faster than classification by hand. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
92
test
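A rough scikit-learn stand-in for the target paper's idea of replacing hand classification with a tree ensemble; the paper's hierarchical design and its psychology data are not reproduced here, so bagged depth-limited trees on synthetic data serve only as a sketch:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Invented multi-class data standing in for hand-labeled experiment records.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)

# An ensemble of depth-limited trees trained on bootstrap resamples.
ensemble = BaggingClassifier(DecisionTreeClassifier(max_depth=6),
                             n_estimators=25, random_state=0)
print(cross_val_score(ensemble, X, y, cv=5).mean())  # held-out accuracy
```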
1-hop neighbor's text information: Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation, : Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, lack of a distance metric between various models reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models, and concentrating the computational effort at differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems. 1-hop neighbor's text information: Predicting probability distributions: A connectionist approach. : Most traditional prediction techniques deliver the mean of the probability distribution (a single point). For multimodal processes, instead of predicting the mean of the probability distribution, it is important to predict the full distribution. This article presents a new connectionist method to predict the conditional probability distribution in response to an input. The main idea is to transform the problem from a regression to a classification problem. The conditional probability distribution network can perform both direct predictions and iterated predictions, a task which is specific for time series problems. We compare our method to fuzzy logic and discuss important differences, and also demonstrate the architecture on two time series. The first is the benchmark laser series used in the Santa Fe competition, a deterministic chaotic system. The second is a time series from a Markov process which exhibits structure on two time scales. The network produces multimodal predictions for this series. We compare the predictions of the network with a nearest-neighbor predictor and find that the conditional probability network is more than twice as likely a model. 1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models Target text information: NONPARAMETRIC SELECTION OF INPUT VARIABLES FOR CONNECTIONIST LEARNING: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
666
test
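The Hoeffding-race mechanism in the record above can be sketched in a few lines: keep sampling leave-one-out errors, and drop any model whose confidence interval lies wholly above the best model's interval. The error values here are simulated Bernoulli losses, and the union-bound form of the interval is one common choice rather than necessarily the paper's exact bound:

```python
import math
import random

random.seed(1)
true_err = {"m1": 0.30, "m2": 0.32, "m3": 0.55}   # invented model error rates
B, delta = 1.0, 0.05                              # loss range, confidence
alive = set(true_err)
sums = {m: 0.0 for m in true_err}
n = 0

while len(alive) > 1 and n < 5000:
    n += 1
    for m in alive:
        sums[m] += (random.random() < true_err[m])   # one new 0/1 loss sample
    eps = B * math.sqrt(math.log(2 * len(true_err) * n / delta) / (2 * n))
    means = {m: sums[m] / n for m in alive}
    best_upper = min(means.values()) + eps
    alive = {m for m in alive if means[m] - eps <= best_upper}

print(alive, n)   # the clearly worse model drops early; close ones race on
```

That last line is the point of racing: m3 is eliminated after a few hundred samples, while the nearly tied m1 and m2 keep consuming evaluations, exactly the "concentrate effort on differentiating the better ones" behavior described above.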
1-hop neighbor's text information: Vector associative maps: Unsupervised real-time error-based learning and control of movement trajectories, : 1-hop neighbor's text information: Neural competitive maps for reactive and adaptive navigation: We have recently introduced a neural network for reactive obstacle avoidance based on a model of classical and operant conditioning. In this article we describe the success of this model when implemented on two real autonomous robots. Our results show the promise of self-organizing neural networks in the domain of intelligent robotics. 1-hop neighbor's text information: Robot shaping: Developing autonomous agents through learning. : Learning plays a vital role in the development of situated agents. In this paper, we explore the use of reinforcement learning to "shape" a robot to perform a predefined target behavior. We connect both simulated and real robots to ALECSYS, a parallel implementation of a learning classifier system with an extended genetic algorithm. After classifying different kinds of Animat-like behaviors, we explore the effects on learning of different types of agent's architecture (monolithic, flat and hierarchical) and of training strategies. In particular, hierarchical architecture requires the agent to learn how to coordinate basic learned responses. We show that the best results are achieved when both the agent's architecture and the training strategy match the structure of the behavior pattern to be learned. We report the results of a number of experiments carried out both in simulated and in real environments, and show that the results of simulations carry smoothly to real robots. While most of our experiments deal with simple reactive behavior, in one of them we demonstrate the use of a simple and general memory mechanism. As a whole, our experimental activity demonstrates that classifier systems with genetic algorithms can be practically employed to develop autonomous agents. Target text information: An unsupervised neural network for real-time, low-level control of a mobile robot: noise resistance, stability, and hardware implementation. : We have recently introduced a neural network mobile robot controller (NETMORC) that autonomously learns the forward and inverse odometry of a differential drive robot through an unsupervised learning-by-doing cycle. After an initial learning phase, the controller can move the robot to an arbitrary stationary or moving target while compensating for noise and other forms of disturbance, such as wheel slippage or changes in the robot's plant. In addition, the forward odometric map allows the robot to reach targets in the absence of sensory feedback. The controller is also able to adapt in response to long-term changes in the robot's plant, such as a change in the radius of the wheels. In this article we review the NETMORC architecture and describe its simplified algorithmic implementation, we present new, quantitative results on NETMORC's performance and adaptability under noise-free and noisy conditions, we compare NETMORC's performance on a trajectory-following task with the performance of an alternative controller, and we describe preliminary results on the hardware implementation of NETMORC with the mobile robot ROBUTER. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,021
test
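The forward odometry that NETMORC is said to learn has a standard closed form for a differential-drive robot; a sketch with invented wheel-base and speed values (the paper's learned, adaptive version is what makes it robust to changed wheel radii, which this fixed formula is not):

```python
import math

def step(x, y, theta, v_left, v_right, wheel_base=0.3, dt=0.05):
    """One Euler step of differential-drive kinematics."""
    v = 0.5 * (v_left + v_right)              # forward speed
    omega = (v_right - v_left) / wheel_base   # turn rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                          # arc to the left for 5 seconds
    pose = step(*pose, v_left=0.9, v_right=1.1)
print(pose)
```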
1-hop neighbor's text information: A Genome Compiler for High Performance Genetic Programming: Genetic Programming is very computationally expensive. For most applications, the vast majority of time is spent evaluating candidate solutions, so it is desirable to make individual evaluation as efficient as possible. We describe a genome compiler which compiles s-expressions to machine code, resulting in significant speedup of individual evaluations over standard GP systems. Based on performance results with symbolic regression, we show that the execution of the genome compiler system is comparable to the fastest alternative GP systems. We also demonstrate the utility of compilation on a real-world problem, lossless image compression. A somewhat surprising result is that in our test domains, the overhead of compilation is negligible. 1-hop neighbor's text information: A survey of intron research in genetics. : A brief survey of biological research on non-coding DNA is presented here. There has been growing interest in the effects of non-coding segments in evolutionary algorithms (EAs). To better understand and conduct research on non-coding segments and EAs, it is important to understand the biological background of such work. This paper begins with a review of basic genetics and terminology, describes the different types of non-coding DNA, and then surveys recent intron research. Target text information: Evolving Turing-complete programs for a register machine with self-modifying code. : The majority of commercial computers today are register machines of von Neumann type. We have developed a method to evolve Turing-complete programs for a register machine. The described implementation enables the use of most program constructs, such as arithmetic operators, large indexed memory, automatic decomposition into subfunctions and subroutines (ADFs), conditional constructs i.e. if-then-else, jumps, loop structures, recursion, protected functions, string and list functions. Any C-function can be compiled and linked into the function set of the system. The use of register machine language allows us to work at the lowest level of binary machine code without any interpreting steps. In a von Neumann machine, programs and data reside in the same memory and the genetic operators can thus directly manipulate the binary machine code in memory. The genetic operators themselves are written in C-language but they modify individuals in binary representation. The result is an execution speed enhancement of up to 100 times compared to an interpreting C-language implementation, and up to 2000 times compared to a LISP implementation. The use of binary machine code demands a very compact coding of about one byte per node in the individual. The resulting evolved programs are disassembled into C-modules and can be incorporated into a conventional software development environment. The low memory requirements and the significant speed enhancement of this technique could be of use when applying genetic programming to new application areas, platforms and research domains. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,887
test
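A toy stand-in for the register-machine individuals in the target paper: a linear program over four registers, executed here by a tiny interpreter (the paper manipulates native binary machine code directly, which is the source of its speedup) and mutated in place. The instruction format and operator set are invented:

```python
import random

random.seed(2)
OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def run(program, x):
    """Execute a linear program; r0 holds the input and the output."""
    r = [x, 0.0, 1.0, 2.0]
    for op, dst, a, b in program:
        r[dst] = OPS[op](r[a], r[b])
    return r[0]

def mutate(program):
    """Overwrite one random instruction, the simplest genetic operator."""
    i = random.randrange(len(program))
    program[i] = (random.choice(list(OPS)), random.randrange(4),
                  random.randrange(4), random.randrange(4))

prog = [("mul", 1, 0, 0), ("add", 0, 1, 2)]   # computes x*x + 1
print(run(prog, 3.0))                          # -> 10.0
mutate(prog)
print(run(prog, 3.0))                          # behavior after one mutation
```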
1-hop neighbor's text information: Characterizing Carbon Dynamics in a Northern Forest Using SIR-C/X-SAR Imagery: Target text information: "Regional stability of an ERS/JERS-1 classifier," : The achievements of SAR-based land-cover classification have progressed rapidly in recent years using data from the JPL AirSAR [1]. With the launches of the European ERS-1 and Japanese JERS-1 satellites, worldwide SAR data is now routinely available from a spaceborne sensor. While the previous efforts have been impressive [2], the combination of these two sensors promises even more discrimination ability than either alone. Consequently, it is important to devise a classification algorithm that is robust to variability in seasonal variations, weather, and local vegetation species. This paper presents a first step toward that goal. Several scenes acquired during August, and one during December using ERS-1 were combined with two scenes from August and October from JERS-1, but from different years. Each same-season pair is then combined and classified using a hierarchical Bayesian approach. The August ERS-1 scenes were chosen based on local moisture conditions: some having relatively dry soil and another scene was acquired just after a significant rain storm. The October/December pair is used to assess the ability to classify in the fall/winter. Despite the variability in conditions, a single classifier is to be developed that can classify five or six structural classes (bare surfaces, short vegetation, and a few different kinds of trees) with high accuracy. This has been achieved with a single ERS/JERS image with accuracies higher than 90% [3]. Once all the different scenes have been classified the differences between the rules in each of the classifiers is to be related to changes in physical parameters due to rain and the change of season. For example, since the deciduous trees do not have leaves during the winter, the C-band (ERS-1) radar response is quite different in the summer than in the winter. Basic knowledge such as this can be used to adapt the classifier to seasonal changes. * Visiting Research Scientist from University of Munich I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,438
test
1-hop neighbor's text information: Global stabilization of linear systems with bounded feedback. : This paper deals with the problem of global stabilization of linear discrete-time systems by means of bounded feedback laws. The main result proved is an analog of one proved for the continuous-time case by the authors, and shows that such stabilization is possible if and only if the system is stabilizable with arbitrary controls and the transition matrix has spectral radius less than or equal to one. The proof provides in principle an algorithm for the construction of such feedback laws, which can be implemented either as cascades or as parallel connections ("single hidden layer neural networks") of simple saturation functions. Target text information: Avoiding Saturation By Trajectory Reparameterization: The problem of trajectory tracking in the presence of input constraints is considered. The desired trajectory is reparameterized on a slower time scale in order to avoid input saturation. Necessary conditions that the reparameterizing function must satisfy are derived. The deviation from the nominal trajectory is minimized by formulating the problem as an optimal control problem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
280
test
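The reparameterization idea in the target abstract can be sketched for a double integrator: replacing q(t) by q(t/lam) divides accelerations by lam^2, so a large enough lam keeps the input inside its bound. The trajectory, bound, and finite-difference check below are invented, and the paper's optimal-control formulation (which minimizes deviation rather than uniformly slowing down) is not reproduced:

```python
import math

def max_accel(q, T, n=1000):
    """Crude finite-difference bound on |q''| over [0, T]."""
    h = T / n
    return max(abs(q(t + h) - 2.0 * q(t) + q(t - h)) / h**2
               for t in (i * h for i in range(1, n)))

q = lambda t: math.sin(2.0 * t)           # nominal trajectory, |q''| <= 4
u_max, T = 1.0, math.pi
lam = max(1.0, math.sqrt(max_accel(q, T) / u_max))  # slow down only if needed
q_slow = lambda t: q(t / lam)             # same path, traversed on [0, lam*T]
print(lam, max_accel(q_slow, lam * T))    # peak acceleration now ~ u_max
```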
1-hop neighbor's text information: Ok. Scaling up average reward reinforcement learning by approximating the domain models and the value function. : Almost all the work in Average-reward Reinforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in applying function approximation to ARL. 1-hop neighbor's text information: From knowledge bases to decision models. : Modeling techniques developed recently in the AI and uncertain reasoning communities permit significantly more flexible specifications of probabilistic knowledge. Specifically, graphical decision-modeling formalisms (belief networks, influence diagrams, and their variants) provide compact representation of probabilistic relationships, and support inference algorithms that automatically exploit the dependence structure in such models [1, 3, 4]. These advances have brought on a resurgence of interest in computational decision systems based on normative theories of belief and preference. However, graphical decision-modeling languages are still quite limited for purposes of knowledge representation because, while they can describe the relationships among particular event instances, they cannot capture general knowledge about probabilistic relationships across classes of events. The inability to capture general knowledge is a serious impediment for those AI tasks in which the relevant factors of a decision problem cannot be enumerated in advance. A graphical decision model encodes a particular set of probabilistic dependencies, a predefined set of decision alternatives, and a specific mathematical form for a utility function. Given a properly specified model, there exist relatively efficient algorithms for calculating posterior probabilities and optimal decision policies. A range of similar cases may be handled by parametric variations of the original model. However, if the structure of dependencies, the set of available alternatives, or the form of utility function changes from situation to situation, then a fixed network representation is no longer adequate. An ideal computational decision system would possess general, broad knowledge of a domain, but would have the ability to reason about the particular circumstances of any given decision problem within the domain. One obvious approach, which we call knowledge-based model construction (KBMC), is to generate a decision model dynamically at run-time, based on the problem description and information received thus far. Model construction consists of selection, instantiation, and assembly of causal and associational relationships from a broad knowledge base of general relationships among domain concepts. For example, suppose we wish to develop a system to recommend appropriate actions for maintaining a computer network. The natural graphical decision model would include chance 1-hop neighbor's text information: Fall diagnosis using dynamic belief networks. 
: The task is to monitor walking patterns and give early warning of falls using foot switch and mercury trigger sensors. We describe a dynamic belief network model for fall diagnosis which, given evidence from sensor observations, outputs beliefs about the current walking status and makes predictions regarding future falls. The model represents possible sensor error and is parametrised to allow customisation to the individual being monitored. Target text information: The data association problem when monitoring robot vehicles using dynamic belief networks. : We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a Dynamic Belief Network; it is used to reason under uncertainty about both the causes and consequences of the events being monitored. The basic dynamic construction of the network is data-driven. However the model construction process combines sensor data about events with externally provided information about agents' behaviour, and knowledge already contained within the model, to control the size and complexity of the network. This means that both the network structure within a time interval, and the amount of history and detail maintained, can vary over time. We illustrate the system with the example domain of monitoring robot vehicles and people in a restricted dynamic environment using light-beam sensor data. In addition to presenting a generic network structure for monitoring domains, we describe the use of more complex network structures which address two specific monitoring problems, sensor validation and the Data Association Problem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,719
test
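The forward-filtering step that drives this kind of monitoring DBN, sketched for a two-state "walking"/"falling" chain with a noisy binary sensor; all probabilities are invented, and the target paper's data-association and dynamic network-construction machinery sits on top of updates like this one:

```python
import numpy as np

T = np.array([[0.95, 0.05],     # P(next state | state): rows = current state
              [0.30, 0.70]])
O = np.array([[0.90, 0.10],     # P(obs | state): rows = state, cols = obs
              [0.20, 0.80]])

belief = np.array([0.99, 0.01])            # prior: almost surely walking
for obs in [0, 0, 1, 1]:                   # incoming sensor stream
    belief = belief @ T                    # predict one step ahead
    belief = belief * O[:, obs]            # weight by observation likelihood
    belief /= belief.sum()                 # renormalize to a distribution
    print(belief)
```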
1-hop neighbor's text information: Cliff (1993). "Issues in evolutionary robotics," From Animals to Animats 2 (Ed. : A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. 1-hop neighbor's text information: "Coevolving High Level Representations," : 1-hop neighbor's text information: Challenges in evolving controllers for physical robots. Robot and Autonomous Systems, : This paper discusses the feasibility of applying evolutionary methods to automatically generating controllers for physical mobile robots. We overview the state of the art in the field, describe some of the main approaches, discuss the key challenges, unanswered problems, and some promising directions. Target text information: Artificial evolution of visual control systems for robots. : Many arthropods (particularly insects) exhibit sophisticated visually guided behaviours. Yet in most cases the behaviours are guided by input from a few hundreds or thousands of "pixels" (i.e. ommatidia in the compound eye). Inspired by this observation, we have for several years been exploring the possibilities of visually guided robots with low-bandwidth vision. Rather than design the robot controllers by hand, we use artificial evolution (in the form of an extended genetic algorithm) to automatically generate the architectures for artificial neural networks which generate effective sensory-motor coordination when controlling mobile robots. Analytic techniques drawn from neuroethology and dynamical systems theory allow us to understand how the evolved robot controllers function, and to predict their behaviour in environments other than those used during the evolutionary process. Initial experiments were performed in simulation, but the techniques have now been successfully transferred to work with a variety of real physical robot platforms. This chapter reviews our past work, concentrating on the analysis of evolved controllers, and gives an overview of our current research. We conclude with a discussion of the application of our evolutionary techniques to problems in biological vision. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,026
val
1-hop neighbor's text information: Applying ILP to diterpene structure elucidation from 13C NMR spectra. : We present a novel application of ILP to the problem of diterpene structure elucidation from 13C NMR spectra. Diterpenes are organic compounds of low molecular weight that are based on a skeleton of 20 carbon atoms. They are of significant chemical and commercial interest because of their use as lead compounds in the search for new pharmaceutical effectors. The structure elucidation of diterpenes based on 13C NMR spectra is usually done manually by human experts with specialized background knowledge on peak patterns and chemical structures. In the process, each of the 20 skeletal atoms is assigned an atom number that corresponds to its proper place in the skeleton and the diterpene is classified into one of the possible skeleton types. We address the problem of learning classification rules from a database of peak patterns for diterpenes with known structure. Recently, propositional learning was successfully applied to learn classification rules from spectra with assigned atom numbers. As the assignment of atom numbers is a difficult process in itself (and possibly indistinguishable from the classification process), we apply ILP, i.e., relational learning, to the problem of classifying spectra without assigned atom numbers. 1-hop neighbor's text information: Learning logical definitions from relations. : 1-hop neighbor's text information: Combining FOIL and EBG to speed up logic programs. : This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time. Target text information: An intelligent search method using Inductive Logic Programming: We propose a method to use Inductive Logic Programming to give heuristic functions for searching for goals to solve problems. The method takes solutions of a problem or a history of search and a set of background knowledge on the problem. In a large class of problems, a problem is described as a set of states and a set of operators, and is solved by finding a series of operators. A solution, a series of operators that brings an initial state to a final state, is transformed into positive and negative examples of a relation "better-choice", which describes that an operator is better than others in a state. We also give a way to use the "better-choice" relation as a heuristic function. The method can use any logic program as background knowledge to induce heuristics, and the induced heuristics have high readability. The paper inspects the method by applying it to a puzzle. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
359
test
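A sketch of using a learned "better-choice" relation as a heuristic, as the target abstract describes: operators applicable in a state are ranked by how many rivals the relation says they beat. A hand-written Python predicate stands in for the induced logic program, and the number-line task is invented:

```python
GOAL = 7                      # toy task: reach 7 on a number line
operators = [-1, +1, +3]

def better_choice(op_a, op_b, state):
    """Stand-in for the induced relation: prefer moves ending nearer the goal."""
    return abs(state + op_a - GOAL) < abs(state + op_b - GOAL)

def rank_operators(state):
    # Score each operator by how many alternatives it beats pairwise.
    score = {op: sum(better_choice(op, other, state)
                     for other in operators if other != op)
             for op in operators}
    return sorted(operators, key=lambda op: -score[op])

print(rank_operators(2))      # [3, 1, -1]: expand the best-ranked operator first
```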
1-hop neighbor's text information: Emergent Hierarchical Control Structures: Learning Reactive/Hierarchical Relationships in Reinforcement Environments, : The use of externally imposed hierarchical structures to reduce the complexity of learning control is common. However, it is acknowledged that learning the hierarchical structure itself is an important step towards more general (learning of many things as required) and less bounded (learning of a single thing as specified) learning. Presented in this paper is a reinforcement learning algorithm called Nested Q-learning that generates a hierarchical control structure in reinforcement learning domains. The emergent structure combined with learned bottom-up reactive reactions results in a reactive hierarchical control system. Effectively, the learned hierarchy decomposes what would otherwise be a monolithic evaluation function into many smaller evaluation functions that can be recombined without the loss of previously learned information. 1-hop neighbor's text information: Transfer of Learning by Composing Solutions of Elemental Sequential Tasks, : Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focussed on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm. 1-hop neighbor's text information: Learning in continuous domains with delayed rewards. : Much has been done to develop learning techniques for delayed reward problems in worlds where the actions and/or states are approximated by discrete representations. Although this is acceptable in some applications, there are many more situations where such an approximation is difficult and unnatural. For instance, in applications such as robotics, where real machines interact with the real world, learning techniques that use real valued continuous quantities are required. Presented in this paper is an extension to Q-learning that uses both real valued states and actions. This is achieved by introducing activation strengths to each actuator system of the robot. This allows all actuators to be active to some continuous amount simultaneously. Learning occurs by incrementally adapting both the expected future reward-to-goal evaluation function and the gradients of that function with respect to each actuator system. 
Target text information: Learning Hierarchical Control Structures for Multiple Tasks and Changing Environments, : While the need for hierarchies within control systems is apparent, it is also clear to many researchers that such hierarchies should be learned. Learning both the structure and the component behaviors is a difficult task. The benefit of learning the hierarchical structures of behaviors is that the decomposition of the control structure into smaller transportable chunks allows previously learned knowledge to be applied to new but related tasks. Presented in this paper are improvements to Nested Q-learning (NQL) that allow more realistic learning of control hierarchies in reinforcement environments. Also presented is a simulation of a simple robot performing a series of related tasks that is used to compare both hierarchical and non-hierarchical learning techniques. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
899
test
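The primitive underneath Nested Q-learning is the ordinary tabular Q-update; a sketch on a toy five-state corridor (the hierarchical bookkeeping of NQL itself is omitted, and all parameters are invented):

```python
import random

random.seed(3)
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]      # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(2000):                          # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # the Q-learning update rule
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])   # state values rise toward the goal
```

NQL's contribution, as the target abstract describes, is to split this one monolithic Q-table into nested sub-tables whose learned pieces can be recombined across related tasks.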
1-hop neighbor's text information: Worst-case quadratic loss bounds for on-line prediction of linear functions by gradient descent. : In this paper we study the performance of gradient descent when applied to the problem of on-line linear prediction in arbitrary inner product spaces. We show worst-case bounds on the sum of the squared prediction errors under various assumptions concerning the amount of a priori information about the sequence to predict. The algorithms we use are variants and extensions of on-line gradient descent. Whereas our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds proved on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants. Matching lower bounds are provided in some cases. Finally, we apply our results to the problem of on-line prediction for classes of smooth functions. 1-hop neighbor's text information: Long. The learning complexity of smooth functions of a single variable. : We study the on-line learning of classes of functions of a single real variable formed through bounds on various norms of functions' derivatives. We determine the best bounds obtainable on the worst-case sum of squared errors (also "absolute" errors) for several such classes. We prove upper bounds for these classes of smooth functions for other loss functions, and prove upper and lower bounds in terms of the number of trials. 1-hop neighbor's text information: Warmuth "How to use expert advice", : We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes. Target text information: On-line learning of linear functions. : We present an algorithm for the on-line learning of linear functions which is optimal to within a constant factor with respect to bounds on the sum of squared errors for a worst case sequence of trials. The bounds are logarithmic in the number of variables. Furthermore, the algorithm is shown to be optimally robust with respect to noise in the data (again to within a constant factor). Key words. Machine learning; computational learning theory; on-line learning; linear functions; worst-case loss bounds; adaptive filter theory. Subject classifications. 68T05. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,543
val
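The algorithm family analyzed in the record above is on-line gradient descent on squared loss (Widrow-Hoff style); a minimal sketch with an invented data sequence. The worst-case bounds in these papers need no statistical assumptions on the data, while this demo happens to draw i.i.d. samples only for convenience:

```python
import numpy as np

rng = np.random.default_rng(4)
w_true = np.array([0.5, -2.0, 1.0])            # invented comparison vector
w, eta, total_loss = np.zeros(3), 0.05, 0.0

for t in range(500):                           # trials arrive one at a time
    x = rng.uniform(-1, 1, size=3)
    y = w_true @ x + 0.1 * rng.standard_normal()   # noisy target value
    y_hat = w @ x                              # predict before seeing y
    total_loss += (y_hat - y) ** 2             # the quantity the bounds control
    w += eta * (y - y_hat) * x                 # gradient step on squared loss

print(w, total_loss)
```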
1-hop neighbor's text information: Learning Where To Go without Knowing Where That Is: The Acquisition of a Non-reactive Mobot: In the path-imitation task, one agent traces out a path through a second agent's sensory field. The second agent then has to reproduce that path exactly, i.e. move through the sequence of locations visited by the first agent. This is a non-trivial behaviour whose acquisition might be expected to involve special-purpose (i.e., strongly biased) learning machinery. However, the present paper shows this is not the case. The behaviour can be acquired using a fairly primitive learning regime provided that the agent's environment can be made to pass through a specific sequence of dynamic states. Target text information: Unsupervised learning with the soft-means algorithm. : This note describes a useful adaptation of the `peak seeking' regime used in unsupervised learning processes such as competitive learning and `k-means'. The adaptation enables the learning to capture low-order probability effects and thus to more fully capture the probabilistic structure of the training data. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,935
test
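A minimal version of the "soft" adaptation the target note describes: replace the hard winner of k-means with softmax responsibilities and update centers as weighted means, letting the centers reflect the probabilistic structure of the data. The stiffness beta and the one-dimensional data are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2.0, 0.5, 100),
                       rng.normal(3.0, 0.5, 100)])   # two invented clusters
centers, beta = np.array([-1.0, 1.0]), 2.0

for _ in range(20):
    d2 = (data[:, None] - centers[None, :]) ** 2     # squared distances
    r = np.exp(-beta * d2)
    r /= r.sum(axis=1, keepdims=True)                # soft responsibilities
    centers = (r * data[:, None]).sum(axis=0) / r.sum(axis=0)

print(centers)   # close to the two cluster means, -2 and 3
```

As beta grows, the responsibilities approach hard winner-take-all assignments and the procedure reduces to ordinary k-means, which is the "peak seeking" limit the note contrasts against.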
1-hop neighbor's text information: Refining conversational case libraries. : Conversational case-based reasoning (CBR) shells (e.g., Inference's CBR Express) are commercially successful tools for supporting the development of help desk and related applications. In contrast to rule-based expert systems, they capture knowledge as cases rather than more problematic rules, and they can be incrementally extended. However, rather than eliminate the knowledge engineering bottleneck, they refocus it on case engineering, the task of carefully authoring cases according to library design guidelines to ensure good performance. Designing complex libraries according to these guidelines is difficult; software is needed to assist users with case authoring. We describe an approach for revising case libraries according to design guidelines, its implementation in Clire, and empirical results showing that, under some conditions, this approach can improve conversational CBR performance. 1-hop neighbor's text information: Supporting conversational case-based reasoning in an integrated reasoning framework. : Conversational case-based reasoning (CCBR) has been successfully used to assist in case retrieval tasks. However, behavioral limitations of CCBR motivate the search for integrations with other reasoning approaches. This paper briefly describes our group's ongoing efforts towards enhancing the inferencing behaviors of a conversational case-based reasoning development tool named NaCoDAE. In particular, we focus on integrating NaCoDAE with machine learning, model-based reasoning, and generative planning modules. This paper defines CCBR, briefly summarizes the integrations, and explains how they enhance the overall system. Our research focuses on enhancing the performance of conversational case-based reasoning (CCBR) systems (Aha & Breslow, 1997). CCBR is a form of case-based reasoning where users initiate problem solving conversations by entering an initial problem description in natural language text. This text is assumed to be a partial rather than a complete problem description. The CCBR system then assists in eliciting refinements of this description and in suggesting solutions. Its primary purpose is to provide a focus of attention for the user so as to quickly provide a solution(s) for their problem. Figure 1 summarizes the CCBR problem solving cycle. Cases in a CCBR library have three components: Target text information: A model-based approach for supporting dialogue inferencing in a conversational case-based reasoner. : Conversational case-based reasoning (CCBR) is a form of interactive case-based reasoning where users input a partial problem description (in text). The CCBR system responds with a ranked solution display, which lists the solutions of stored cases whose problem descriptions best match the user's, and a ranked question display, which lists the unanswered questions in these cases. Users interact with these displays, either refining their problem description by answering selected questions, or selecting a solution to apply. CCBR systems should support dialogue inferencing; they should infer answers to questions that are implied by the problem description. Otherwise, questions will be listed that the user believes they have already answered. The standard approach to dialogue inferencing allows case library designers to insert rules that define implications between the problem description and unanswered questions. However, this approach imposes substantial knowledge engineering requirements. 
We introduce an alternative approach whereby an intelligent assistant guides the designer in defining a model of their case library, from which implication rules are derived. We detail this approach, its benefits, and explain how it can be supported through an integration with Parka-DB, a fast relational database system. We will evaluate our approach in the context of our CCBR system, named NaCoDAE. This paper appeared at the 1998 AAAI Spring Symposium on Multimodal Reasoning, and is NCARAI TR AIC-97-023. We introduce an integrated reasoning approach in which a model-based reasoning component performs an important inferencing role in a conversational case-based reasoning (CCBR) system named NaCoDAE (Breslow & Aha, 1997) (Figure 1). CCBR is a form of case-based reasoning where users enter text queries describing a problem and the system assists in eliciting refinements of it (Aha & Breslow, 1997). Cases have three components: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 2 | category: Case Based | dataset: cora | node_id: 1,397 | split: test
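The CCBR ranking loop described in the row above is easy to make concrete. Below is a minimal sketch of similarity-based case ranking, assuming a dictionary case representation and a matched-minus-mismatched score; these choices are illustrative, not NaCoDAE's actual data structures or weights.

```python
# Minimal sketch of conversational CBR case ranking (hypothetical scoring,
# not the actual NaCoDAE algorithm): cases whose answered questions agree
# with the user's partial description score higher; unanswered questions
# from the top-ranked cases would populate the question display.

def score_case(case_qa, user_qa):
    """Fraction of the case's question-answer pairs matched minus mismatched."""
    matched = sum(1 for q, a in case_qa.items() if user_qa.get(q) == a)
    mismatched = sum(1 for q, a in case_qa.items()
                     if q in user_qa and user_qa[q] != a)
    return (matched - mismatched) / len(case_qa)

def rank_cases(library, user_qa):
    """Return (solution, score) pairs sorted best-first."""
    scored = [(case["solution"], score_case(case["qa"], user_qa))
              for case in library]
    return sorted(scored, key=lambda s: -s[1])

library = [
    {"qa": {"printer on?": "yes", "cable ok?": "no"}, "solution": "replace cable"},
    {"qa": {"printer on?": "no"}, "solution": "turn printer on"},
]
print(rank_cases(library, {"printer on?": "yes"}))
```

Dialogue inferencing, in this picture, amounts to merging inferred answers into user_qa before scoring, so already-implied questions stop appearing on the question display.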
1-hop neighbor's text information: New Modes of Generalization in Perceptual Learning: The learning of many visual perceptual tasks, such as motion discrimination, has been shown to be specific to the practiced stimulus, and new stimuli require re-learning from scratch [1-6]. This specificity, found in so many different tasks, supports the hypothesis that perceptual learning takes place in early visual cortical areas. In contrast, using a novel paradigm in motion discrimination where learning has been shown to be specific, we found generalization: We trained subjects to discriminate the directions of moving dots, and verified that learning does not transfer from the trained direction to a new one. However, by tracking the subjects' performance across time in the new direction, we found that their rate of learning doubled. Moreover, after mastering the task with an easy stimulus, subjects who had practiced briefly to discriminate the easy stimulus in a new direction generalized to a difficult stimulus in that direction. This generalization demanded both the mastering and the brief practice. Thus learning in motion discrimination always generalizes to new stimuli. Learning is manifested in various forms: acceleration of learning rate, indirect transfer, or direct transfer [7, 8]. These results challenge existing theories of perceptual learning, and suggest a more complex picture in which learning takes place at multiple levels. Learning in biological systems is of great importance. But while cognitive learning (or "problem solving") is abrupt and generalizes to analogous problems, we appear to acquire our perceptual skills gradually and specifically: human subjects cannot generalize a perceptual discrimination skill to solve similar problems with different attributes. For example, in a discrimination task as described in Fig. 1, a subject who is trained to discriminate motion directions between 43.5° and 46.5° cannot use this skill to discriminate 133.5° from 136.5°. Such specificity supports the hypothesis that perceptual learning embodies neuronal modifications in the brain's stimulus-specific cortical areas (e.g., visual area MT) [1-6]. In contrast to previous results of specificity, we will show, in three experiments, that learning in motion discrimination always generalizes. (1) When the task is easy, it generalizes to all directions after training in Target text information: Stimulus specific learning: a consequence of stimulus-specific experiments? Perception, : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 2,560 | split: val
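Every row in this dump poses the same seven-way prediction task. For reference, here is a minimal text-only baseline, a sketch assuming scikit-learn is available; the toy strings stand in for the long content fields, and the citation graph is ignored entirely.

```python
# Hedged baseline for the node-classification task posed in each row:
# TF-IDF over the paper text, logistic regression over the seven category
# IDs. The toy strings below stand in for real 'content' fields.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "reinforcement learning markov decision processes planning",
    "bayesian networks probabilistic inference graphical models",
    "genetic algorithms crossover mutation populations",
]
train_labels = [5, 6, 3]  # category IDs as defined in the prompt

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)
print(model.predict(["evolving neural networks with a genetic algorithm"]))
```

A stronger graph-aware baseline could append the neighbors' text (or their predicted labels) to each node's features before vectorizing.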
1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. 1-hop neighbor's text information: Beyond independence: Conditions for the optimality of the simple bayesian classifier. : The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed so far. In this paper we show that the SBC does not in fact assume attribute independence, and can be optimal even when this assumption is violated by a wide margin. The key to this finding lies in the distinction between classification and probability estimation: correct classification can be achieved even when the probability estimates used contain large errors. We show that the previously-assumed region of optimality of the SBC is a second-order infinitesimal fraction of the actual one. This is followed by the derivation of several necessary and several sufficient conditions for the optimality of the SBC. For example, the SBC is optimal for learning arbitrary conjunctions and disjunctions, even though they violate the independence assumption. The paper also reports empirical evidence of the SBC's competitive performance in domains containing substantial degrees of attribute dependence. 1-hop neighbor's text information: Bias plus variance decomposition for zero-one loss functions. : We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich (1995) and Breiman (1996). Their decomposition suffers from some major shortcomings though (e.g., potentially negative variance), which our decomposition avoids.
We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository. Target text information: On learning hierarchical classifications: Many significant real-world classification tasks involve a large number of categories which are arranged in a hierarchical structure; for example, classifying documents into subject categories under the library of congress scheme, or classifying world-wide-web documents into topic hierarchies. We investigate the potential benefits of using a given hierarchy over base classes to learn accurate multi-category classifiers for these domains. First, we consider the possibility of exploiting a class hierarchy as prior knowledge that can help one learn a more accurate classifier. We explore the benefits of learning category-discriminants in a hard top-down fashion and compare this to a soft approach which shares training data among sibling categories. In doing so, we verify that hierarchies have the potential to improve prediction accuracy. But we argue that the reasons for this can be subtle. Sometimes, the improvement is only because using a hierarchy happens to constrain the expressiveness of a hypothesis class in an appropriate manner. However, various controlled experiments show that in other cases the performance advantage associated with using a hierarchy really does seem to be due to the prior knowledge it encodes. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 4 | category: Theory | dataset: cora | node_id: 94 | split: val
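The zero-one-loss decomposition cited in this row can be estimated empirically. The sketch below is my re-implementation of the Kohavi-Wolpert-style formulas on synthetic, noise-free data, so expected error should roughly equal mean squared bias plus mean variance; none of this is the authors' code.

```python
# Sketch of the Kohavi-Wolpert bias-variance decomposition of zero-one
# loss: train many classifiers on resampled training sets, estimate the
# prediction distribution P_hat(y|x) at each test point, then apply
# bias^2(x) = 0.5*sum_y (P(y|x)-P_hat(y|x))^2 and
# variance(x) = 0.5*(1 - sum_y P_hat(y|x)^2).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def sample(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # noise-free target
    return X, y

X_test, y_test = sample(2000)
T, preds = 50, []
for _ in range(T):                             # vary the training set
    X_tr, y_tr = sample(100)
    preds.append(DecisionTreeClassifier().fit(X_tr, y_tr).predict(X_test))
preds = np.array(preds)                        # shape (T, n_test)

p_hat = np.stack([(preds == c).mean(axis=0) for c in (0, 1)])
p_true = np.stack([(y_test == c).astype(float) for c in (0, 1)])
bias2 = 0.5 * ((p_true - p_hat) ** 2).sum(axis=0)
var = 0.5 * (1.0 - (p_hat ** 2).sum(axis=0))
err = (preds != y_test).mean()
print(f"error={err:.3f}  bias^2={bias2.mean():.3f}  variance={var.mean():.3f}")
```

With a deterministic target the noise term is zero, so the printed error should approximately equal the sum of the two printed terms; that additivity is the point of the decomposition.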
1-hop neighbor's text information: The Structure-Mapping Engine: Algorithms and Examples. : This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a "tool kit" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O(N^2). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp 1-63. For more information, please contact [email protected] Target text information: Is analogical problem solving always analogical? The case for imitation (second draft). HCRL Technical Report 97. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 2 | category: Case Based | dataset: cora | node_id: 847 | split: test
1-hop neighbor's text information: "A Coevolutionary Approach to Learning Sequential Decision Rules", : We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emer gent problem decompositions. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Learning Concept Classification Rules Using Genetic Algorithms. : In this paper, we explore the use of genetic algorithms (GAs) as a key element in the design and implementation of robust concept learning systems. We describe and evaluate a GA-based system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The use of GAs is motivated by recent studies showing the effects of various forms of bias built into different concept learning systems, resulting in systems that perform well on certain concept classes (generally, those well matched to the biases) and poorly on others. By incorporating a GA as the underlying adaptive search mechanism, we are able to construct a concept learning system that has a simple, unified architecture with several important features. First, the system is surprisingly robust even with minimal bias. Second, the system can be easily extended to incorporate traditional forms of bias found in other concept learning systems. Finally, the architecture of the system encourages explicit representation of such biases and, as a result, provides for an important additional feature: the ability to dynamically adjust system bias. The viability of this approach is illustrated by comparing the performance of GABIL with that of four other more traditional concept learners (AQ14, C4.5, ID5R, and IACL) on a variety of target concepts. We conclude with some observations about the merits of this approach and about possible extensions. Target text information: "Knowledge-Based Genetic Learning", : Genetic algorithms have been proven to be a powerful tool within the area of machine learning. However, there are some classes of problems where they seem to be scarcely applicable, e.g. when the solution to a given problem consists of several parts that influence each other. In that case the classic genetic operators cross-over and mutation do not work very well thus preventing a good performance. This paper describes an approach to overcome this problem by using high-level genetic operators and integrating task specific but domain independent knowledge to guide the use of these operators. The advantages of this approach are shown for learning a rule base to adapt the parameters of an image processing operator path within the SOLUTION system. 
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 1,206 | split: test
1-hop neighbor's text information: Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm. : This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification (EG2, CS-ID3, and IDX) and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET's search in bias space and discovers a way to improve the search. 1-hop neighbor's text information: Irrelevant features and the subset selection problem. : We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets. 1-hop neighbor's text information: Prototype and feature selection by sampling and random mutation hill climbing algorithms. : With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term prototypes refers to the reference instances used in a nearest neighbor computation: the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes.
Target text information: Cost-Sensitive Feature Reduction Applied to a Hybrid Genetic Algorithm, : This study is concerned with whether it is possible to detect what information contained in the training data and background knowledge is relevant for solving the learning problem, and whether irrelevant information can be eliminated in preprocessing before starting the learning process. A case study of data preprocessing for a hybrid genetic algorithm shows that the elimination of irrelevant features can substantially improve the efficiency of learning. In addition, cost-sensitive feature elimination can be effective for reducing costs of induced hypotheses. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 0 | category: Rule Learning | dataset: cora | node_id: 2,388 | split: test
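The random mutation hill climbing idea cited in this row (and again two rows below) reduces to flipping one membership bit at a time and keeping non-harmful flips. Here is a sketch under assumed parameters (initial density, iteration count), scoring on the training set with a 1-NN classifier; it is a re-implementation of the idea, not the authors' code.

```python
# Sketch of random mutation hill climbing for prototype selection:
# maintain a boolean mask over training instances, flip one bit per
# iteration, and keep the flip only if 1-NN accuracy does not drop.
import numpy as np

rng = np.random.default_rng(1)

def one_nn_accuracy(protos_X, protos_y, X, y):
    d = ((X[:, None, :] - protos_X[None, :, :]) ** 2).sum(-1)
    return (protos_y[d.argmin(axis=1)] == y).mean()

def rmhc_prototypes(X, y, n_iters=500):
    mask = rng.random(len(X)) < 0.05           # start with few prototypes
    mask[rng.integers(len(X))] = True          # ensure at least one
    best = one_nn_accuracy(X[mask], y[mask], X, y)
    for _ in range(n_iters):
        i = rng.integers(len(X))
        mask[i] = ~mask[i]                     # mutate one membership bit
        if mask.sum() == 0:
            mask[i] = True
            continue
        acc = one_nn_accuracy(X[mask], y[mask], X, y)
        if acc >= best:
            best = acc                         # keep non-harmful mutations
        else:
            mask[i] = ~mask[i]                 # undo harmful ones
    return mask, best

X = rng.normal(size=(200, 2)) + np.repeat([[0, 0], [3, 3]], 100, axis=0)
y = np.repeat([0, 1], 100)
mask, acc = rmhc_prototypes(X, y)
print(f"{mask.sum()} prototypes, training accuracy {acc:.2f}")
```

The same loop does feature selection if the mask ranges over columns instead of rows, which is the simultaneous selection the abstract alludes to.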
1-hop neighbor's text information: "Learning to Segment Images Using Dynamic Feature Binding," : Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to which object they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it also has the capability of finding nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase locking related features. MAGIC's training procedure is a generalization of recurrent back propagation to complex-valued units. Target text information: Lending Direction to Neural Networks: We present a general formulation for a network of stochastic directional units. This formulation is an extension of the Boltzmann machine in which the units are not binary, but take on values on a cyclic range, between 0 and 2 radians. This measure is appropriate to many domains, representing cyclic or angular values, e.g., wind direction, days of the week, phases of the moon. The state of each unit in a Directional-Unit Boltzmann Machine (DUBM) is described by a complex variable, where the phase component specifies a direction; the weights are also complex variables. We associate a quadratic energy function, and corresponding probability, with each DUBM configuration. The conditional distribution of a unit's stochastic state is a circular version of the Gaussian probability distribution, known as the von Mises distribution. In a mean-field approximation to a stochastic dubm, the phase component of a unit's state represents its mean direction, and the magnitude component specifies the degree of certainty associated with this direction. This combination of a value and a certainty provides additional representational power in a unit. We present a proof that the settling dynamics for a mean-field DUBM cause convergence to a free energy minimum. Finally, we describe a learning algorithm and simulations that demonstrate a mean-field DUBM's ability to learn interesting mappings. fl To appear in: Neural Networks. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 760 | split: test
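The directional-unit construction in this row's target has a compact core: the phase of the complex net input gives the mean direction, and its magnitude gives the concentration of a von Mises conditional. A minimal sketch with illustrative weights, making no claim to match the paper's energy function or learning rule:

```python
# Minimal sketch of a stochastic directional unit: the complex-valued net
# input's phase sets the mean direction and its magnitude the certainty
# (von Mises concentration). All parameter values here are made up.
import numpy as np

rng = np.random.default_rng(2)

def directional_unit_sample(weights, states):
    """weights, states: complex arrays; returns an angle in [0, 2*pi)."""
    net = np.dot(weights, states)              # complex net input
    mu, kappa = np.angle(net), np.abs(net)     # direction and certainty
    return rng.vonmises(mu, kappa) % (2 * np.pi)

weights = np.array([0.8 + 0.2j, 0.5 - 0.1j])
states = np.exp(1j * np.array([0.3, 1.2]))    # unit-magnitude directions
print(directional_unit_sample(weights, states))
```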
1-hop neighbor's text information: Priority ASOCS. : This paper presents an ASOCS (Adaptive Self-Organizing Concurrent System) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. An ASOCS can operate in either a data processing mode or a learning mode. During data processing mode, an ASOCS acts as a parallel hardware circuit. During learning mode, an ASOCS incorporates a rule expressed as a Boolean conjunction in a distributed fashion in time logarithmic in the number of rules. This paper proposes a learning algorithm and architecture for Priority ASOCS. This new ASOCS model uses rules with priorities. The new model has significant learning time and space complexity improvements over previous models. Non-von Neumann architectures such as neural networks attack the word-at-a-time bottleneck of traditional computing systems [1]. Neural networks learn input-output mappings using highly distributed processing and memory [10,11,12]. Their numerous simple processing elements with modifiable weighted links permit a high degree of parallelism. A typical neural network has fixed topology. It learns by modifying weighted links between nodes. A new class of connectionist architectures has been proposed called ASOCS (Adaptive Self-Organizing Concurrent Systems) [4,5]. ASOCS models support efficient computation through self-organized learning and parallel execution. Learning is done through the incremental presentation of rules and/or examples. ASOCS models learn by modifying their topology. Data types include Boolean and multi-state variables; recent models support analog variables. The model incorporates rules into an adaptive logic network in a parallel and self organizing fashion. In processing mode, ASOCS supports fully parallel execution on actual inputs according to the learned rules. The adaptive logic network acts as a parallel hardware circuit during execution, mapping n input boolean vectors into m output boolean vectors, in a combinatoric fashion. The overall philosophy of ASOCS follows the high level goals of current neural network models. However, the mechanisms of learning and execution vary significantly. The ASOCS logic network is topologically dynamic with the network growing to efficiently fit the specific application. Current ASOCS models are based on digital nodes. ASOCS also supports use of symbolic and heuristic learning mechanisms, thus combining the parallelism and distributed nature of connectionist computing with the potential power of AI symbolic learning. A proof of concept ASOCS chip has been developed [2]. Target text information: The Potential of Prototype Styles of Generalization: There are many ways for a learning system to generalize from training set data. This paper presents several generalization styles using prototypes in an attempt to provide accurate generalization on training set data for a wide variety of applications. These generalization styles are efficient in terms of time and space, and lend themselves well to massively parallel architectures. Empirical results of generalizing on several real-world applications are given, and these results indicate that the prototype styles of generalization presented have potential to provide accurate generalization for many applications. 
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 110 | split: test
1-hop neighbor's text information: Systematic Evaluation of Design Decisions in CBR Systems: Two important goals in the evaluation of an AI theory or model are to assess the merit of the design decisions in the performance of an implemented computer system and to analyze the impact in the performance when the system faces problem domains with different characteristics. This is particularly difficult in case-based reasoning systems because such systems are typically very complex, as are the tasks and domains in which they operate. We present a methodology for the evaluation of case-based reasoning systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. This methodology enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave in response to changing domain and problem characteristics. A case study of a multistrategy case-based and reinforcement learning system which performs autonomous robotic navigation is presented as an example. 1-hop neighbor's text information: Prototype and feature selection by sampling and random mutation hill climbing algorithms. : With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term prototypes refers to the reference instances used in a nearest neighbor computation: the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes. 1-hop neighbor's text information: Solving the multiple-instance problem with axis-parallel rectangles. : The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple-instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk-odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms. Target text information: Generalizing from case studies: A case study.
: Most empirical evaluations of machine learning algorithms are case studies evaluations of multiple algorithms on multiple databases. Authors of case studies implicitly or explicitly hypothesize that the pattern of their results, which often suggests that one algorithm performs significantly better than others, is not limited to the small number of databases investigated, but instead holds for some general class of learning problems. However, these hypotheses are rarely supported with additional evidence, which leaves them suspect. This paper describes an empirical method for generalizing results from case studies and an example application. This method yields rules describing when some algorithms significantly outperform others on some dependent measures. Advantages for generalizing from case studies and limitations of this particular approach are also described. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 2 | category: Case Based | dataset: cora | node_id: 2,369 | split: test
1-hop neighbor's text information: Probabilistic evaluation of sequential plans from causal models with hidden variables. : The paper concerns the probabilistic evaluation of plans in the presence of unmeasured variables, each plan consisting of several concurrent or sequential actions. We establish a graphical criterion for recognizing when the effects of a given plan can be predicted from passive observations on measured variables only. When the criterion is satisfied, a closed-form expression is provided for the probability that the plan will achieve a specified goal. 1-hop neighbor's text information: A theory of inferred causation. : This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data, the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation. 1-hop neighbor's text information: "Causal diagrams for experimental research," : The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects from nonexperimental data. If so, the diagrams can be queried to produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional observations or auxiliary experiments from which the desired inferences can be obtained. Key words: Causal inference, graph models, structural equations, treatment effect. Target text information: On the testability of causal models with latent and instrumental variables. : Certain causal models involving unmeasured variables induce no independence constraints among the observed variables but imply, nevertheless, inequality constraints on the observed distribution. This paper derives a general formula for such inequality constraints as induced by instrumental variables, that is, exogenous variables that directly affect some variables but not all. With the help of this formula, it is possible to test whether a model involving instrumental variables may account for the data, or, conversely, whether a given variable can be deemed instrumental.
label: 6 | category: Probabilistic Methods | dataset: cora | node_id: 1,994 | split: test
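One concrete instance of the inequality constraints discussed in this row is Pearl's instrumental inequality: for every value x, the sum over y of the maximum over z of P(x, y | z) must not exceed 1. A small check, with a made-up conditional table standing in for real data:

```python
# Sketch of Pearl's instrumental inequality test: for an instrument Z,
# every x must satisfy sum_y max_z P(x, y | z) <= 1. The joint table
# below is invented for illustration.
import numpy as np

def instrumental_inequality_holds(p_xy_given_z):
    """p_xy_given_z[z, x, y] = P(X=x, Y=y | Z=z); True if compatible."""
    best = p_xy_given_z.max(axis=0)            # max over z, shape (x, y)
    return bool((best.sum(axis=1) <= 1 + 1e-12).all())

# P(x, y | z) for binary X, Y and a binary instrument Z
p = np.array([[[0.4, 0.1],
               [0.3, 0.2]],
              [[0.1, 0.2],
               [0.2, 0.5]]])
print(instrumental_inequality_holds(p))
```

A violation rules out Z as an instrument for the X-Y relationship, which is exactly the kind of testability claim the target abstract makes.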
1-hop neighbor's text information: Jordan (1996b). Recursive algorithms for approximating probabilities in graphical models. : MIT Computational Cognitive Science Technical Report 9604 Abstract We develop a recursive node-elimination formalism for efficiently approximating large probabilistic networks. No constraints are set on the network topologies. Yet the formalism can be straightforwardly integrated with exact methods whenever they are/become applicable. The approximations we use are controlled: they maintain consistently upper and lower bounds on the desired quantities at all times. We show that Boltzmann machines, sigmoid belief networks, or any combination (i.e., chain graphs) can be handled within the same framework. The accuracy of the methods is verified experimentally. 1-hop neighbor's text information: Learning in Boltzmann trees. : We introduce a large family of Boltzmann machines that can be trained using standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity. We show how to implement the supervised learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. The stochastic averages that yield the gradients in weight space are computed by the technique of decimation. We present results on the problems of N-bit parity and the detection of hidden symmetries. 1-hop neighbor's text information: Theory of correlations in stochastic neural networks. : One of the main experimental tools in probing the interactions between neurons has been the measurement of the correlations in their activity. In general, however, the interpretation of the observed correlations is difficult, since the correlation between a pair of neurons is influenced not only by the direct interaction between them but also by the dynamic state of the entire network to which they belong. Thus, a comparison between the observed correlations and the predictions from specific model networks is needed. In this paper we develop the theory of neuronal correlation functions in large networks comprising several highly connected subpopulations, and obeying stochastic dynamic rules. When the networks are in asynchronous states, the cross-correlations are relatively weak, i.e., their amplitude relative to that of the auto-correlations is of order 1/N, N being the size of the interacting populations. Using the weakness of the cross-correlations, general equations which express the matrix of cross-correlations in terms of the mean neuronal activities, and the effective interaction matrix are presented. The effective interactions are the synaptic efficacies multiplied by the gain of the postsynaptic neurons. The time-delayed cross-correlation matrix can be expressed as a sum of exponentially decaying modes that correspond to the (non-orthogonal) eigenvectors of the effective interaction matrix. The theory is extended to networks with random connectivity, such as randomly dilute networks. This allows for the comparison between the contribution from the internal common input and that from the direct Target text information: Efficient learning in Boltzmann Machines using linear response theory. : The learning process in Boltzmann Machines is computationally very expensive. The computational complexity of the exact algorithm is exponential in the number of neurons. We present a new approximate learning algorithm for Boltzmann Machines, which is based on mean field theory and the linear response theorem.
The computational complexity of the algorithm is cubic in the number of neurons. In the absence of hidden units, we show how the weights can be directly computed from the fixed point equation of the learning rules. Thus, in this case we do not need to use a gradient descent procedure for the learning process. We show that the solutions of this method are close to the optimal solutions and give a significant improvement when correlations play a significant role. Finally, we apply the method to a pattern completion task and show good performance for networks up to 100 neurons. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 1,775 | split: val
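For the visible-units-only case described in this row's target, the linear-response weights can indeed be read off from the data statistics. The sketch below is my transcription of the mean-field fixed-point formulas for +/-1 spins (W = diag(1/(1 - m_i^2)) - C^{-1} with zero diagonal, theta = atanh(m) - W m), not the authors' code:

```python
# Sketch of the linear-response estimator for Boltzmann machine weights
# (no hidden units): weights come directly from the data means and
# covariances, with no gradient descent.
import numpy as np

def linear_response_fit(S):
    """S: (n_samples, n_units) array of +/-1 spins -> (weights, biases)."""
    m = S.mean(axis=0)                          # clamped means <s_i>
    C = np.cov(S, rowvar=False)                 # <s_i s_j> - <s_i><s_j>
    C_inv = np.linalg.inv(C)
    W = np.diag(1.0 / (1.0 - m ** 2)) - C_inv   # linear-response formula
    np.fill_diagonal(W, 0.0)                    # no self-coupling
    theta = np.arctanh(m) - W @ m               # mean-field bias equation
    return W, theta

rng = np.random.default_rng(3)
S = rng.choice([-1.0, 1.0], size=(5000, 4))
S[:, 1] = np.where(rng.random(5000) < 0.9, S[:, 0], -S[:, 0])  # correlate
W, theta = linear_response_fit(S)
print(np.round(W, 2))
```

The cubic cost quoted in the abstract is visible here as the single matrix inversion; everything else is linear algebra on the sufficient statistics.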
1-hop neighbor's text information: Towards a better understanding of memory-based and bayesian classifiers. : We quantify both experimentally and analytically the performance of memory-based reasoning (MBR) algorithms. To start gaining insight into the capabilities of MBR algorithms, we compare an MBR algorithm using a value difference metric to a popular Bayesian classifier. These two approaches are similar in that they both make certain independence assumptions about the data. However, whereas MBR uses specific cases to perform classification, Bayesian methods summarize the data probabilistically. We demonstrate that a particular MBR system called Pebls works comparatively well on a wide range of domains using both real and artificial data. With respect to the artificial data, we consider distributions where the concept classes are separated by functional discriminants, as well as time-series data generated by Markov models of varying complexity. Finally, we show formally that Pebls can learn (in the limit) natural concept classes that the Bayesian classifier cannot learn, and that it will attain perfect accuracy whenever 1-hop neighbor's text information: Dirichlet mixtures: A method for improving detection of weak but significant protein sequence homology. COS. : This paper presents the mathematical foundations of Dirichlet mixtures, which have been used to improve database search results for homologous sequences, when a variable number of sequences from a protein family or domain are known. We present a method for condensing the information in a protein database into a mixture of Dirichlet densities. These mixtures are designed to be combined with observed amino acid frequencies, to form estimates of expected amino acid probabilities at each position in a profile, hidden Markov model, or other statistical model. These estimates give a statistical model greater generalization capacity, such that remotely related family members can be more reliably recognized by the model. Dirichlet mixtures have been shown to outperform substitution matrices and other methods for computing these expected amino acid distributions in database search, resulting in fewer false positives and false negatives for the families tested. This paper corrects a previously published formula for estimating these expected probabilities, and contains complete derivations of the Dirichlet mixture formulas, methods for optimizing the mixtures to match particular databases, and suggestions for efficient implementation. 1-hop neighbor's text information: Protein Secondary Structure Modelling with Probabilistic Networks (Extended Abstract): In this paper we study the performance of probabilistic networks in the context of protein sequence analysis in molecular biology. Specifically, we report the results of our initial experiments applying this framework to the problem of protein secondary structure prediction. One of the main advantages of the probabilistic approach we describe here is our ability to perform detailed experiments where we can experiment with different models. We can easily perform local substitutions (mutations) and measure (probabilistically) their effect on the global structure. Window-based methods do not support such experimentation as readily. Our method is efficient both during training and during prediction, which is important in order to be able to perform many experiments with different networks. We believe that probabilistic methods are comparable to other methods in prediction quality. 
In addition, the predictions generated by our methods have precise quantitative semantics which is not shared by other classification methods. Specifically, all the causal and statistical independence assumptions are made explicit in our networks thereby allowing biologists to study and experiment with different causal models in a convenient manner. Target text information: Using dirichlet mixture priors to derive hidden Markov models for protein families. : A Bayesian method for estimating the amino acid distributions in the states of a hidden Markov model (HMM) for a protein family or the columns of a multiple alignment of that family is introduced. This method uses Dirichlet mixture densities as priors over amino acid distributions. These mixture densities are determined from examination of previously constructed HMMs or multiple alignments. It is shown that this Bayesian method can improve the quality of HMMs produced from small training sets. Specific experiments on the EF-hand motif are reported, for which these priors are shown to produce HMMs with higher likelihood on unseen data, and fewer false positives and false negatives in a database search task. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 42 | split: test
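The posterior-mean computation behind Dirichlet mixture priors, as used in this row, is short enough to sketch. The two-component mixture over a four-letter alphabet below is invented for illustration; a real prior would have components over the 20 amino acids, fitted to a protein database:

```python
# Sketch of the posterior mean estimate under a Dirichlet mixture prior:
# observed counts are combined with each mixture component, weighted by
# the component's posterior probability given the counts.
import numpy as np
from scipy.special import gammaln

def dirichlet_mixture_estimate(counts, q, alphas):
    counts, alphas = np.asarray(counts, float), np.asarray(alphas, float)
    # log marginal likelihood of the counts under each component
    # (the multinomial coefficient is common to all and cancels)
    log_ml = (gammaln(alphas.sum(1)) - gammaln(counts.sum() + alphas.sum(1))
              + (gammaln(counts + alphas) - gammaln(alphas)).sum(1))
    log_w = np.log(q) + log_ml
    w = np.exp(log_w - log_w.max()); w /= w.sum()     # posterior weights
    post_mean = (counts + alphas) / (counts.sum() + alphas.sum(1))[:, None]
    return w @ post_mean                              # expected probabilities

q = [0.6, 0.4]
alphas = [[5.0, 1.0, 1.0, 1.0],    # component favoring letter 0
          [1.0, 1.0, 1.0, 5.0]]    # component favoring letter 3
print(dirichlet_mixture_estimate([3, 0, 0, 0], q, alphas))
```

With only three observations the estimate stays close to the component that explains them, which is the regularization effect on small training sets that the abstract reports.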
1-hop neighbor's text information: A cooperative coevolutionary approach to function optimization. : A general model for the coevolution of cooperating species is presented. This model is instantiated and tested in the domain of function optimization, and compared with a traditional GA-based function optimizer. The results are encouraging in two respects. They suggest ways in which the performance of GA and other EA-based optimizers can be improved, and they suggest a new approach to evolving complex structures such as neural networks and rule sets. 1-hop neighbor's text information: "Using genetic algorithms to explore pattern recognition in the immune system," : We describe an immune system model based on a universe of binary strings. The model is directed at understanding the pattern recognition processes and learning that take place at both the individual and species levels in the immune system. The genetic algorithm (GA) is a central component of our model. In the paper we study the behavior of the GA on two pattern recognition problems that are relevant to natural immune systems. Finally, we compare our model with explicit fitness sharing techniques for genetic algorithms, and show that our model implements a form of implicit fitness sharing. 1-hop neighbor's text information: "A Coevolutionary Approach to Learning Sequential Decision Rules", : We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emergent problem decompositions. Target text information: Evolving neural networks with collaborative species. : We present a coevolutionary architecture for solving decomposable problems and apply it to the evolution of artificial neural networks. Although this work is preliminary in nature, it has a number of advantages over non-coevolutionary approaches. The coevolutionary approach utilizes a divide-and-conquer technique in which species representing simpler subtasks are evolved in separate instances of a genetic algorithm executing in parallel. Collaborations among the species are formed representing complete solutions. Species are created dynamically as needed. Results are presented in which the coevolutionary architecture produces higher quality solutions in fewer evolutionary trials when compared with an alternative non-coevolutionary approach on the problem of evolving cascade networks for parity computation.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 1,770 | split: val
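The cooperative-coevolution scheme in this row decomposes naturally into one population per problem component, with fitness assigned through collaboration. A toy sketch on a separable two-variable function follows; population size, selection, and mutation scale are arbitrary choices, not the paper's settings.

```python
# Sketch of cooperative coevolution: each species evolves one coordinate
# of f, and an individual is scored by combining it with the current best
# collaborator from the other species.
import numpy as np

rng = np.random.default_rng(4)
f = lambda x, y: -(x ** 2 + (y - 2.0) ** 2)   # maximize; optimum at (0, 2)

pops = [rng.normal(size=20), rng.normal(size=20)]   # one population per species
best = [p[0] for p in pops]

for gen in range(200):
    for s in (0, 1):
        # evaluate each individual with the other species' best member
        combine = (lambda v: f(v, best[1])) if s == 0 else (lambda v: f(best[0], v))
        fit = np.array([combine(v) for v in pops[s]])
        best[s] = pops[s][fit.argmax()]
        # truncation selection plus Gaussian mutation
        parents = pops[s][np.argsort(fit)[-10:]]
        children = rng.choice(parents, size=10) + rng.normal(scale=0.1, size=10)
        pops[s] = np.concatenate([parents, children])

print(np.round(best, 3), round(f(*best), 4))
```

Evolving neural networks this way, as the target paper does, amounts to replacing the scalar coordinates with subnetwork genotypes and the collaboration step with network assembly.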
1-hop neighbor's text information: Phenes and the Baldwin Effect: Learning and evolution in a simulated population, : The Baldwin Effect, first proposed in the late nineteenth century, suggests that the course of evolutionary change can be influenced by individually learned behavior. The existence of this effect is still a hotly debated topic. In this paper clear evidence is presented that learning-based plasticity at the phenotypic level can and does produce directed changes at the genotypic level. This research confirms earlier experimental work done by others, notably Hinton & Nowlan (1987). Further, the amount of plasticity of the learned behavior is shown to be crucial to the size of the Baldwin Effect: either too little or too much and the effect disappears or is significantly reduced. Finally, for learnable traits, the case is made that over many generations it will become easier for the population as a whole to learn these traits (i.e. the phenotypic plasticity of these traits will increase). In this gradual transition from a genetically driven population to one driven by learning, the importance of the Baldwin Effect decreases. 1-hop neighbor's text information: Evolutionary wanderlust: Sexual selection with directional mate preferences. : In the pantheon of evolutionary forces, the optimizing Apollonian powers of natural selection are generally assumed to dominate the dark Dionysian dynamics of sexual selection. But this need not be the case, particularly with a class of selective mating mechanisms called `directional mate preferences' (Kirkpatrick, 1987). In previous simulation research, we showed that nondirectional assortative mating preferences could cause populations to spontaneously split apart into separate species (Todd & Miller, 1991). In this paper, we show that directional mate preferences can cause populations to wander capriciously through phenotype space, under a strange form of runaway sexual selection, with or without the influence of natural selection pressures. When directional mate preferences are free to evolve, they do not always evolve to point in the direction of natural-selective peaks. Sexual selection can thus take on a life of its own, such that mate preferences within a species become a distinct and important part of the environment to which the species' phenotypes adapt. These results suggest a broader conception of `adaptive behavior', in which attracting potential mates becomes as important as finding food and avoiding predators. We present a framework for simulating a wide range of directional and non-directional mate preferences, and discuss some practical and scientific applications of simulating sexual selection. Target text information: Artificial Life as Theoretical Biology: How to do real science with computer simulation: Artificial Life (A-Life) research offers, among other things, a new style of computer simulation for understanding biological systems and processes. But most current A-Life work does not show enough methodological sophistication to count as good theoretical biology.
As a first step towards developing a stronger methodology for A-Life, this paper (1) identifies some methodological pitfalls arising from the `computer science influence' in A-Life, (2) suggests some methodological heuristics for A-Life as theoretical biology, (3) notes the strengths of A-Life methods versus previous research methods in biology, (4) examines some open questions in theoretical biology that may benefit from A-Life simulation, and (5) argues that the debate over `Strong A-Life' is not relevant to A-Life's utility for theoretical biology. 1 Introduction: Simulating our way into the Dark Continent I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 125 | split: test
1-hop neighbor's text information: The frame problem and Bayesian network action representations. : We examine a number of techniques for representing actions with stochastic effects using Bayesian networks and influence diagrams. We compare these techniques according to ease of specification and size of the representation required for the complete specification of the dynamics of a particular system, paying particular attention to the role of persistence relationships. We precisely characterize two components of the frame problem for Bayes nets and stochastic actions, propose several ways to deal with these problems, and compare our solutions with Reiter's solution to the frame problem for the situation calculus. The result is a set of techniques that permit both ease of specification and compact representation of probabilistic system dynamics that is of comparable size (and timbre) to Reiter's representation (i.e., with no explicit frame axioms). 1-hop neighbor's text information: Context-specific independence in Bayesian networks. : Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme, tree-structured CPTs, for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning. 1-hop neighbor's text information: Stochastic simulation algorithms for dynamic probabilistic networks. : Stochastic simulation algorithms such as likelihood weighting often give fast, accurate approximations to posterior probabilities in probabilistic networks, and are the methods of choice for very large networks. Unfortunately, the special characteristics of dynamic probabilistic networks (DPNs), which are used to represent stochastic temporal processes, mean that standard simulation algorithms perform very poorly. In essence, the simulation trials diverge further and further from reality as the process is observed over time. In this paper, we present simulation algorithms that use the evidence observed at each time step to push the set of trials back towards reality. The first algorithm, "evidence reversal" (ER), restructures each time slice of the DPN so that the evidence nodes for the slice become ancestors of the state variables. The second algorithm, called "survival of the fittest" sampling (SOF), "repopulates" the set of trials at each time step using a stochastic reproduction rate weighted by the likelihood of the evidence according to each trial.
We compare the performance of each algorithm with likelihood weighting on the original network, and also investigate the benefits of combining the ER and SOF methods. The ER/SOF combination appears to maintain bounded error independent of the number of time steps in the simulation. Target text information: Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks: We present an algorithm for arc reversal in Bayesian networks with tree-structured conditional probability tables, and consider some of its advantages, especially for the simulation of dynamic probabilistic networks. In particular, the method allows one to produce CPTs for nodes involved in the reversal that exploit regularities in the conditional distributions. We argue that this approach alleviates some of the overhead associated with arc reversal, plays an important role in evidence integration and can be used to restrict sampling of variables in DPNs. We also provide an algorithm that detects the dynamic irrelevance of state variables in forward simulation. This algorithm exploits the structured CPTs in a reversed network to determine, in a time-independent fashion, the conditions under which a variable does or does not need to be sampled. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 6 | category: Probabilistic Methods | dataset: cora | node_id: 204 | split: test
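The "survival of the fittest" sampler cited in this row is essentially likelihood-weighted resampling, what is now usually called a particle filter. A sketch on an invented two-state process:

```python
# Sketch of "survival of the fittest" sampling for a two-state dynamic
# model: after each time step, trials are resampled in proportion to the
# likelihood of the current evidence, pulling the trial set back toward
# reality. The transition/observation numbers are made up.
import numpy as np

rng = np.random.default_rng(5)
T = np.array([[0.9, 0.1],     # P(state' | state)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],     # P(obs | state)
              [0.3, 0.7]])
evidence = [0, 0, 1, 1, 1]

n = 1000
states = rng.integers(2, size=n)              # initial trials
for e in evidence:
    # simulate one step of each trial
    states = np.array([rng.choice(2, p=T[s]) for s in states])
    w = O[states, e]                           # evidence likelihoods
    # SOF: repopulate trials with probability proportional to the weights
    states = rng.choice(states, size=n, p=w / w.sum())

print("P(state=1 | evidence) ~", states.mean())
```

The evidence-reversal (ER) variant would instead restructure each time slice before sampling; the sketch above shows only the resampling half of the combination the abstract evaluates.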
1-hop neighbor's text information: MBP on T0: mixing floating- and fixed-point formats in BP learning. : We examine the efficient implementation of back prop type algorithms on T0 [4], a vector processor with a fixed point engine, designed for neural network simulation. A matrix formulation of back prop, Matrix Back Prop [1], has been shown to be very efficient on some RISCs [2]. Using Matrix Back Prop, we achieve an asymptotically optimal performance on T0 (about 0.8 GOPS) for both forward and backward phases, which is not possible with the standard on-line method. Since high efficiency is futile if convergence is poor (due to the use of fixed point arithmetic), we use a mixture of fixed and floating point operations. The key observation is that the precision of fixed point is sufficient for good convergence, if the range is appropriately chosen. Though the most expensive computations are implemented in fixed point, we achieve a rate of convergence that is comparable to the floating point version. The time taken for conversion between fixed and floating point is also shown to be reasonable. 1-hop neighbor's text information: Mixtures of probabilistic principal component analysers. : Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition. 1-hop neighbor's text information: "Recognizing handwritten digits using mixtures of linear models", : We construct a mixture of locally linear generative models of a collection of pixel-based images of digits, and use them for recognition. Different models of a given digit are used to capture different styles of writing, and new images are classified by evaluating their log-likelihoods under each model. We use an EM-based algorithm in which the M-step is computationally straightforward principal components analysis (PCA). Incorporating tangent-plane information [12] about expected local deformations only requires adding tangent vectors into the sample covariance matrices for the PCA, and it demonstrably improves performance. Target text information: TK (1994). Fast non-linear dimension reduction. : We present a fast algorithm for non-linear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distortion than those built by five layer auto-associative networks. The local linear algorithm is also more than an order of magnitude faster to train.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,473
test
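The target abstract in the record above builds a local linear model by merging PCA with clustering. As a minimal sketch of that general idea, assuming plain k-means plus per-cluster PCA with ordinary squared-error distortion (not the paper's new distortion measure) and NumPy:

```python
import numpy as np

def local_linear_pca(X, n_clusters=4, n_components=2, n_iters=10, seed=0):
    """Cluster the data, then fit a separate PCA in each cluster.

    A generic k-means + per-cluster PCA sketch, not the cited algorithm."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iters):  # standard k-means iterations
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(0)
    bases = []
    for k in range(n_clusters):
        Xk = X[labels == k] - centers[k]
        # principal axes = top right singular vectors of the centered block
        _, _, vt = np.linalg.svd(Xk, full_matrices=False)
        bases.append(vt[:n_components])
    return centers, bases, labels

def reconstruction_error(X, centers, bases, labels):
    """Mean squared error after projecting each point onto its local plane."""
    err = 0.0
    for i, x in enumerate(X):
        c, B = centers[labels[i]], bases[labels[i]]
        z = B @ (x - c)                  # encode in local coordinates
        err += ((x - (c + B.T @ z)) ** 2).sum()
    return err / len(X)

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(500, 5))
    centers, bases, labels = local_linear_pca(X)
    print("distortion:", reconstruction_error(X, centers, bases, labels))
```

Comparing the printed distortion against a single global PCA of the same dimensionality illustrates why combinations of local linear projections can encode curved data more tightly.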
1-hop neighbor's text information: Constructing nominal X-of-N attributes. : Most constructive induction researchers focus only on new Boolean attributes. This paper reports a new constructive induction algorithm, called XofN, that constructs new nominal attributes in the form of X-of-N representations. An X-of-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. The promising preliminary experimental results, on both artificial and real-world domains, show that constructing new nominal attributes in the form of X-of-N representations can significantly improve the performance of selective induction in terms of both higher prediction accuracy and lower theory complexity. 1-hop neighbor's text information: "A Comparative Study of ID3 and Backpropagation for English Text-to-Speech Mapping," : The performance of the error backpropagation (BP) and ID3 learning algorithms was compared on the task of mapping English text to phonemes and stresses. Under the distributed output code developed by Sejnowski and Rosenberg, it is shown that BP consistently outperforms ID3 on this task by several percentage points. Three hypotheses explaining this difference were explored: (a) ID3 is overfitting the training data, (b) BP is able to share hidden units across several output units and hence can learn the output units better, and (c) BP captures statistical information that ID3 does not. We conclude that only hypothesis (c) is correct. By augmenting ID3 with a simple statistical learning procedure, the performance of BP can be approached but not matched. More complex statistical procedures can improve the performance of both BP and ID3 substantially. A study of the residual errors suggests that there is still substantial room for improvement in learning methods for text-to-speech mapping. 1-hop neighbor's text information: Constructing conjunctive tests for decision trees. : This paper discusses an approach to constructing new attributes based on decision trees and production rules. It can improve the concepts learned in the form of decision trees by simplifying them and improving their predictive accuracy. In addition, this approach can distinguish relevant primitive attributes from irrelevant primitive attributes. Target text information: Continuous-valued X-of-N attributes versus nominal X-of-N attributes for constructive induction: a case study. : An X-of-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. In this paper, we explore the characteristics and performance of continuous-valued X-of-N attributes versus nominal X-of-N attributes for constructive induction. Nominal X-of-Ns are more representationally powerful than continuous-valued X-of-Ns, but the former suffer from the "fragmentation" problem, although some mechanisms such as subsetting can help to solve the problem. Two approaches to constructive induction using continuous-valued X-of-Ns are described. Continuous-valued X-of-Ns perform better than nominal ones on domains that need X-of-Ns with only one cut point. On domains that need X-of-N representations with more than one cut point, nominal X-of-Ns perform better than continuous-valued ones. Experimental results on a set of artificial and real-world domains support these statements. I provide the content of the target node and its neighbors' information. 
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
52
val
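Both X-of-N abstracts above define the representation by counting how many of a fixed set of attribute-value pairs hold for an instance. A tiny sketch makes that definition concrete; the attributes and values here are invented for illustration:

```python
def x_of_n_value(instance, pairs):
    """Value of an X-of-N attribute: the number of its
    (attribute, value) pairs that are true for the instance."""
    return sum(1 for attr, val in pairs if instance.get(attr) == val)

# Hypothetical X-of-3 over three attribute-value pairs.
x_of_3 = [("color", "red"), ("shape", "round"), ("size", "small")]
inst = {"color": "red", "shape": "square", "size": "small"}
print(x_of_n_value(inst, x_of_3))  # -> 2: two of the three pairs hold
```

A nominal X-of-N would treat each count 0..N as a distinct symbolic value, while a continuous-valued X-of-N would place one or more cut points on the count, which is exactly the trade-off the target abstract studies.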
1-hop neighbor's text information: Global stabilization of linear systems with bounded feedback. : This paper deals with the problem of global stabilization of linear discrete-time systems by means of bounded feedback laws. The main result proved is an analog of one proved for the continuous-time case by the authors, and shows that such stabilization is possible if and only if the system is stabilizable with arbitrary controls and the transition matrix has spectral radius less than or equal to one. The proof provides in principle an algorithm for the construction of such feedback laws, which can be implemented either as cascades or as parallel connections ("single hidden layer neural networks") of simple saturation functions. 1-hop neighbor's text information: A General Result on the Stabilization of Linear Systems Using Bounded Controls. : We present two constructions of controllers that globally stabilize linear systems subject to control saturation. We allow essentially arbitrary saturation functions. The only conditions imposed on the system are the obvious necessary ones, namely that no eigenvalues of the uncontrolled system have positive real part and that the standard stabilizability rank condition hold. One of the constructions is in terms of a "neural-network type" one-hidden-layer architecture, while the other one is in terms of cascades of linear maps and saturations. Target text information: "Stabilization with saturated actuators, a worked example: F-8 longitudinal flight control," : The authors and coworkers recently proved general theorems on the global stabilization of linear systems subject to control saturation. This paper develops in detail an explicit design for the linearized equations of longitudinal flight control for an F-8 aircraft, and tests the obtained controller on the original nonlinear model. This paper represents the first detailed derivation of a controller using the techniques in question, and the results are very encouraging. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,797
test
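The record above concerns stabilization with bounded controls built from saturation functions. The following toy simulation conveys only the flavor of such designs: a discrete-time double integrator (eigenvalues on the unit circle, so the spectral-radius condition holds) under a saturated linear feedback with hand-picked gains, not the constructions proved in the papers:

```python
import numpy as np

def sat(u, bound=1.0):
    """Simple saturation nonlinearity, the building block in these designs."""
    return np.clip(u, -bound, bound)

# Toy discrete-time double integrator; the gains below are hand-tuned
# for illustration only (the unsaturated closed loop has |eigenvalues|
# of about 0.77, so small states decay).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
b = np.array([0.0, 1.0])
K = np.array([-0.2, -0.6])

x = np.array([8.0, -3.0])          # large initial state
for t in range(200):
    u = sat(K @ x)                 # bounded control input
    x = A @ x + b * u
print("final state:", x)           # should have decayed toward the origin
```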
1-hop neighbor's text information: Jamshidi "On Genetic Programming of Fuzzy Rule-Based Systems for Intelligent Control." : Fuzzy logic and evolutionary computation have proven to be convenient tools for handling real-world uncertainty and designing control systems, respectively. An approach is presented that combines attributes of these paradigms for the purpose of developing intelligent control systems. The potential of the genetic programming paradigm (GP) for learning rules for use in fuzzy logic controllers (FLCs) is evaluated by focussing on the problem of discovering a controller for mobile robot path tracking. Performance results of incomplete rule-bases compare favorably to those of a complete FLC designed by the usual trial-and-error approach. A constrained syntactic representation supported by structure-preserving genetic operators is also introduced. Target text information: Behavior Hierarchy for Autonomous Mobile Robots: Fuzzy-behavior modulation and evolution: Realization of autonomous behavior in mobile robots, using fuzzy logic control, requires formulation of rules which are collectively responsible for necessary levels of intelligence. Such a collection of rules can be conveniently decomposed and efficiently implemented as a hierarchy of fuzzy-behaviors. This article describes how this can be done using a behavior-based architecture. A behavior hierarchy and mechanisms of control decision-making are described. In addition, an approach to behavior coordination is described with emphasis on evolution of fuzzy coordination rules using the genetic programming (GP) paradigm. Both conventional GP and steady-state GP are applied to evolve a fuzzy-behavior for sensor-based goal-seeking. The usefulness of the behavior hierarchy, and partial design by GP, is evident in performance results of simulated autonomous navigation. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
545
test
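To make the fuzzy-behavior machinery in the record above concrete, here is a minimal Sugeno-style rule-evaluation sketch; the membership functions, rule consequents, and the steering interpretation are all invented, and real FLCs (let alone GP-evolved rule bases) are far richer:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flc_steer(error):
    """Tiny one-input fuzzy controller: fuzzify, fire rules, defuzzify."""
    # Fuzzify: degree of membership in three linguistic terms.
    neg  = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0,  0.0, 1.0)
    pos  = tri(error,  0.0,  1.0, 2.0)
    # Rules map terms to crisp steering consequents, combined by a
    # firing-strength weighted average (Sugeno-style defuzzification).
    strengths = [neg, zero, pos]
    outputs   = [1.0, 0.0, -1.0]   # steer right / straight / left
    total = sum(strengths)
    return sum(s * o for s, o in zip(strengths, outputs)) / total if total else 0.0

for e in (-1.5, -0.3, 0.0, 0.8):
    print(e, "->", round(flc_steer(e), 3))
```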
1-hop neighbor's text information: Veloso (1994). Planning and Learning by Analogical Reasoning. : Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found creating a mixed-initiative planning system fall into three categories: planning paradigms differ in human and machine planning; visualization of the plan and planning process is a complex, but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning, it supports visualization of both plan and the planning rationale, and it addresses the variance in the experience of the user by allowing the user to control the presentation of information. 1-hop neighbor's text information: Constructive similarity assessment: Using stored cases to define new situa tions. : A fundamental issue in case-based reasoning is similarity assessment: determining similarities and differences between new and retrieved cases. Many methods have been developed for comparing input case descriptions to the cases already in memory. However, the success of such methods depends on the input case description being sufficiently complete to reflect the important features of the new situation, which is not assured. In case-based explanation of anomalous events during story understanding, the anomaly arises because the current situation is incompletely understood; consequently, similarity assessment based on matches between known current features and old cases is likely to fail because of gaps in the current case's description. Our solution to the problem of gaps in a new case's description is an approach that we call constructive similarity assessment. Constructive similarity assessment treats similarity assessment not as a simple comparison between fixed new and old cases, but as a process for deciding which types of features should be investigated in the new situation and, if the features are borne out by other knowledge, added to the description of the current case. Constructive similarity assessment does not merely compare new cases to old: using prior cases as its guide, it dynamically carves augmented descriptions of new cases out of memory. 1-hop neighbor's text information: Case-based similarity assessment: Estimating adaptability from experience. : Case-based problem-solving systems rely on similarity assessment to select stored cases whose solutions are easily adaptable to fit current problems. 
However, widely-used similarity assessment strategies, such as evaluation of semantic similarity, can be poor predictors of adaptability. As a result, systems may select cases that are difficult or impossible for them to adapt, even when easily adaptable cases are available in memory. This paper presents a new similarity assessment approach which couples similarity judgments directly to a case library containing the system's adaptation knowledge. It examines this approach in the context of a case-based planning system that learns both new plans and new adaptations. Empirical tests of alternative similarity assessment strategies show that this approach enables better case selection and increases the benefits accrued from learned adaptations. Target text information: Learning to integrate multiple knowledge sources for case-based reasoning. : The case-based reasoning process depends on multiple overlapping knowledge sources, each of which provides an opportunity for learning. Exploiting these opportunities requires not only determining the learning mechanisms to use for each individual knowledge source, but also how the different learning mechanisms interact and their combined utility. This paper presents a case study examining the relative contributions and costs involved in learning processes for three different knowledge sources|cases, case adaptation knowledge, and similarity information|in a case-based planner. It demonstrates the importance of interactions between different learning processes and identifies a promising method for integrating multiple learning methods to improve case-based reasoning. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
68
test
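One way to picture the "similarity coupled to adaptation knowledge" theme of the record above is to rank cases by estimated adaptation cost rather than by surface similarity alone. The sketch below is a speculative illustration with made-up features and costs, not the cited systems' actual procedure:

```python
def surface_similarity(probe, case):
    """Fraction of the probe's features that the stored case matches."""
    return sum(case.get(k) == v for k, v in probe.items()) / len(probe)

def adaptation_cost(probe, case, rules):
    """Sum of per-feature repair costs; infinite when a mismatch has
    no known adaptation rule."""
    cost = 0.0
    for k, v in probe.items():
        if case.get(k) != v:
            if k not in rules:
                return float("inf")
            cost += rules[k]
    return cost

def retrieve(probe, library, rules):
    """Prefer cases that are cheap to adapt, not merely similar."""
    return min(library,
               key=lambda c: adaptation_cost(probe, c, rules)
                             - surface_similarity(probe, c))

rules = {"venue": 0.2, "time": 0.1}   # hypothetical adaptation costs
library = [{"venue": "hall", "time": "am", "plan": "A"},
           {"venue": "lab",  "time": "pm", "plan": "B"}]
probe = {"venue": "hall", "time": "pm"}
print(retrieve(probe, library, rules)["plan"])  # -> 'A' (cheaper to adapt)
```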
1-hop neighbor's text information: Transferring and retraining learned information filters. : Any system that learns how to filter documents will suffer poor performance during an initial training phase. One way of addressing this problem is to exploit filters learned by other users in a collaborative fashion. We investigate "direct transfer" of learned filters in this setting, a limiting case for any collaborative learning system. We evaluate the stability of several different learning methods under direct transfer, and conclude that symbolic learning methods that use negatively correlated features of the data perform poorly in transfer, even when they perform well in more conventional evaluation settings. This effect is robust: it holds for several learning methods, when a diverse set of users is used in training the classifier, and even when the learned classifiers can be adapted to the new user's distribution. Our experiments give rise to several concrete proposals for improving generalization performance in a collaborative setting, including a beneficial variation on a feature selection method that has been widely used in text categorization. 1-hop neighbor's text information: More Efficient Windowing: Windowing has been proposed as a procedure for efficient memory use in the ID3 decision tree learning algorithm. However, previous work has shown that windowing may often lead to a decrease in performance. In this work, we try to argue that separate-and-conquer rule learning algorithms are more appropriate for windowing than divide-and-conquer algorithms, because they learn rules independently and are less susceptible to changes in class distributions. In particular, we will present a new windowing algorithm that achieves additional gains in efficiency by exploiting this property of separate-and-conquer algorithms. While the presented algorithm is only suitable for redundant, noise-free data sets, we will also briefly discuss the problem of noisy data in windowing and present some preliminary ideas on how it might be solved with an extension of the algorithm introduced in this paper. Target text information: Incremental reduced error pruning. : This paper outlines some problems that may occur with Reduced Error Pruning in Inductive Logic Programming, most notably efficiency. Thereafter a new method, Incremental Reduced Error Pruning, is proposed that attempts to address all of these problems. Experiments show that in many noisy domains this method is much more efficient than alternative algorithms, along with a slight gain in accuracy. However, the experiments show as well that the use of this algorithm cannot be recommended for domains with a very specific concept description. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,875
train
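The record above centers on reduced error pruning. As a simplified single-rule sketch of the reduced-error-pruning step, greedily dropping trailing conditions while held-out accuracy does not drop (IREP itself interleaves this with rule growing), with an invented toy pruning set:

```python
def accuracy(rule, data):
    """Fraction of examples the rule classifies correctly, treating the
    rule as 'predict positive iff all conditions hold'."""
    correct = 0
    for x, y in data:
        pred = all(x.get(a) == v for a, v in rule)
        correct += (pred == y)
    return correct / len(data)

def reduced_error_prune(rule, prune_set):
    """Greedily drop final conditions while accuracy on a held-out
    pruning set does not decrease (a simplified REP step, not IREP)."""
    best = list(rule)
    while len(best) > 1:
        candidate = best[:-1]                 # drop the last condition
        if accuracy(candidate, prune_set) >= accuracy(best, prune_set):
            best = candidate
        else:
            break
    return best

rule = [("outlook", "sunny"), ("humidity", "high"), ("wind", "weak")]
prune_set = [({"outlook": "sunny", "humidity": "high", "wind": "strong"}, True),
             ({"outlook": "sunny", "humidity": "high", "wind": "weak"}, True),
             ({"outlook": "rain",  "humidity": "high", "wind": "weak"}, False)]
print(reduced_error_prune(rule, prune_set))   # the over-specific conditions go
```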
1-hop neighbor's text information: Proben1: A set of neural network benchmark problems and benchmarking rules. : Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison. 1-hop neighbor's text information: A portable parallel programming language for artificial neural networks. : CuPit-2 is a programming language specifically designed to express neural network learning algorithms. It provides most of the flexibility of general-purpose languages like C/C++, but results in much clearer and more elegant programs due to higher expressiveness, in particular for algorithms that change the network topology dynamically (constructive algorithms, pruning algorithms). Furthermore, CuPit-2 programs can be compiled into efficient code for parallel machines; no changes are required in the source program. This article presents a description of the language constructs and reports performance results for an implementation of CuPit-2 on symmetric multiprocessors (SMPs). 1-hop neighbor's text information: CuPit - a parallel language for neural algorithms: Language reference and tutorial. : and load balancing even for irregular neural networks. The idea to achieve these goals lies in the programming model: CuPit programs are object-centered, with connections and nodes of a graph (which is the neural network) being the objects. Algorithms are based on parallel local computations in the nodes and connections and communication along the connections (plus broadcast and reduction operations). This report describes the design considerations and the resulting language definition and discusses in detail a tutorial example program. Target text information: A parallel programming model for irregular dynamic neural networks. In W.K. Giloi, : A compiler for CuPit has been built for the MasPar MP-1/MP-2 using compilation techniques that can also be applied to most other parallel machines. The paper briefly presents the main ideas of the techniques used and results obtained by the various optimizations. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,144
test
1-hop neighbor's text information: Nonlinear resonance in neuron dynamics. : Hubler's technique using aperiodic forces to drive nonlinear oscillators to resonance is analyzed. The oscillators being examined are effective neurons that model Hopfield neural networks. The method is shown to be valid under several different circumstances. It is verified through analysis of the power spectrum, force, resonance, and energy transfer of the system. Target text information: Stability and Chaos in an Inertial Two Neuron System in Statistical Mechanics and Complex Systems: Inertia is added to a continuous-time, Hopfield [1] effective-neuron system. We explore the effects on the stability of the fixed points of the system. A two neuron system with one or two inertial terms added is shown to exhibit chaos. The chaos is confirmed by Lyapunov exponents, power spectra, and phase space plots. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
137
test
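The chaos claims in the record above are confirmed via Lyapunov exponents and power spectra. The sketch below integrates a generic inertial two-neuron Hopfield-type system and estimates the largest Lyapunov exponent by two-trajectory divergence; the coupling weights, inertia value, and the exact form of the equations are stand-ins rather than the paper's, so the printed value only illustrates the diagnostic:

```python
import numpy as np

# Generic inertial two-neuron Hopfield-type system (a common form; the
# paper's exact equations and parameters may differ):
#   m * x_i'' + x_i' = -x_i + sum_j W_ij * tanh(x_j)
W = np.array([[0.0, 2.0],
              [-3.0, 0.0]])          # asymmetric coupling, invented values
m = 0.5                              # inertia added to both neurons

def deriv(s):
    x, v = s[:2], s[2:]
    return np.concatenate([v, (-x + W @ np.tanh(x) - v) / m])

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(s)
    k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2)
    k4 = deriv(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Crude largest-Lyapunov-exponent estimate: follow a nearby trajectory
# and renormalize its separation after every step; a positive estimate
# indicates sensitive dependence on initial conditions.
dt, steps, eps = 0.01, 20000, 1e-8
state = np.array([0.1, -0.2, 0.0, 0.0])
shadow = state + np.array([eps, 0.0, 0.0, 0.0])
log_growth = 0.0
for _ in range(steps):
    state, shadow = rk4_step(state, dt), rk4_step(shadow, dt)
    d = np.linalg.norm(shadow - state)
    log_growth += np.log(d / eps)
    shadow = state + (shadow - state) * (eps / d)   # renormalize
print("largest Lyapunov exponent estimate:", log_growth / (steps * dt))
```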
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. Target text information: [Figure 8 (caption): time complexity of unit parallelism measured on MANNA, theoretical prediction versus measured time over the number of nodes N, for 2,500 and 40,000 connections.] Our experience showed us that flexibility in expressing a parallel algorithm for simulating neural networks is desirable even if it is then not possible to obtain the most efficient solution for any single training algorithm. We believe that the advantages of a clear and easy-to-understand program outweigh the disadvantages of approaches that allow only for a specific machine or neural network algorithm. We currently investigate whether other neural network models are worthwhile to parallelize, and how the resulting parallel algorithms can be composed of a few common basic building blocks, with the logarithmic tree as an efficient communication structure. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
796
test
1-hop neighbor's text information: "Learning to Segment Images Using Dynamic Feature Binding," : Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to which object they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it also has the capability of finding nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase locking related features. MAGIC's training procedure is a generalization of recurrent back propagation to complex-valued units. 1-hop neighbor's text information: "Efficient Visual Search: A Connectionist Solution," : Searching for objects in scenes is a natural task for people and has been extensively studied by psychologists. In this paper we examine this task from a connectionist perspective. Computational complexity arguments suggest that parallel feed-forward networks cannot perform this task efficiently. One difficulty is that, in order to distinguish the target from distractors, a combination of features must be associated with a single object. Often called the binding problem, this requirement presents a serious hurdle for connectionist models of visual processing when multiple objects are present. Psychophysical experiments suggest that people use covert visual attention to get around this problem. In this paper we describe a psychologically plausible system which uses a focus of attention mechanism to locate target objects. A strategy that combines top-down and bottom-up information is used to minimize search time. The behavior of the resulting system matches the reaction time behavior of people in several interesting tasks. 1-hop neighbor's text information: An Efficient Computational Model of Human Visual Attention. : One of the challenges for models of cognitive phenomena is the development of efficient and exible interfaces between low level sensory information and high level processes. For visual processing, researchers have long argued that an attentional mechanism is required to perform many of the tasks required by high level vision. This thesis presents VISIT, a connectionist model of covert visual attention that has been used as a vehicle for studying this interface. The model is efficient, exible, and is biologically plausible. The complexity of the network is linear in the number of pixels. Effective parallel strategies are used to minimize the number of iterations required. The resulting system is able to efficiently solve two tasks that are particularly difficult for standard bottom-up models of vision: computing spatial relations and visual search. Simulations show that the networks behavior matches much of the known psychophysical data on human visual attention. The general architecture of the model also closely matches the known physiological data on the human attention system. 
Various extensions to VISIT are discussed, including methods for learning the component modules. Target text information: Computational modeling of spatial attention: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
758
val
1-hop neighbor's text information: Hierarchical Learning with Procedural Abstraction Mechanisms. : 1-hop neighbor's text information: Why ants are hard. : The problem of programming an artificial ant to follow the Santa Fe trail is used as an example program search space. Previously reported genetic programming, simulated annealing and hill climbing performance is shown not to be much better than random search on the Ant problem. Analysis of the program search space in terms of fixed length schema suggests it is highly deceptive and that for the simplest solutions large building blocks must be assembled before they have above average fitness. In some cases we show solutions cannot be assembled using a fixed representation from small building blocks of above average fitness. This suggests the Ant problem is difficult for Genetic Algorithms. 1-hop neighbor's text information: Fitness causes bloat: Mutation. : In many cases program lengths increase (known as "bloat", "fluff" and increasing "structural complexity") during artificial evolution. We show bloat is not specific to genetic programming and suggest it is inherent in search techniques with discrete variable length representations using simple static evaluation functions. We investigate the bloating characteristics of three non-population and one population based search techniques using a novel mutation operator. An artificial ant following the Santa Fe trail problem is solved by simulated annealing, hill climbing, strict hill climbing and population based search using two variants of the new subtree-based mutation operator. As predicted, bloat is observed when using unbiased mutation and is absent in simulated annealing and both hill climbers when using the length-neutral mutation; however, bloat occurs with both mutations when using a population. We conclude that there are two causes of bloat. Target text information: Boolean Functions Fitness Spaces: We investigate the distribution of performance of the Boolean functions of 3 Boolean inputs (particularly that of the parity functions), the always-on-6 and even-6 parity functions. We use enumeration, uniform Monte-Carlo random sampling and sampling random full trees. As expected, XOR dramatically changes the fitness distributions. In all cases once some minimum size threshold has been exceeded, the distribution of performance is approximately independent of program length. However, the distribution of the performance of full trees is different from that of asymmetric trees and varies with tree depth. We consider but reject testing the No Free Lunch (NFL) theorems on these functions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
541
test
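The target abstract above estimates fitness distributions by uniform Monte-Carlo sampling of programs. A minimal sketch of that experimental setup for 3-input Boolean functions, using a simplified {AND, OR, NOT} function set and the even-parity target (the paper's function sets and tree-sampling schemes differ):

```python
import itertools, random

INPUTS = ["a", "b", "c"]

def random_tree(depth):
    """Grow a random Boolean expression tree (a simplified function set;
    adding XOR would change the fitness distribution, as the abstract notes)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(INPUTS)
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", random_tree(depth - 1))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, str):
        return env[tree]
    if tree[0] == "not":
        return not evaluate(tree[1], env)
    left, right = evaluate(tree[1], env), evaluate(tree[2], env)
    return (left and right) if tree[0] == "and" else (left or right)

def fitness(tree, target):
    """Number of the 8 truth-table rows the program gets right."""
    score = 0
    for bits in itertools.product([False, True], repeat=3):
        env = dict(zip(INPUTS, bits))
        score += (evaluate(tree, env) == target(*bits))
    return score

even_parity = lambda a, b, c: (a + b + c) % 2 == 0
random.seed(0)
counts = [0] * 9
for _ in range(10000):                      # uniform Monte-Carlo sampling
    counts[fitness(random_tree(4), even_parity)] += 1
print("fitness histogram (0..8 hits):", counts)
```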
1-hop neighbor's text information: Generative Learning Structures for Generalized Connectionist Networks. : Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative, for they offer a set of mechanisms for constructive and adaptive determination of the network architecture (the number of processing elements and the connectivity among them) as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of some approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology, e.g., rather slow learning and the need for an a priori choice of a network architecture. Several alternative designs as well as a range of control structures and processes which can be used to regulate the form and content of internal representations learned by such networks are examined. Empirical results from the study of some generative learning algorithms are briefly summarized, and several extensions and refinements of such algorithms, and directions for future research, are outlined. 1-hop neighbor's text information: Symbolic and Subsymbolic Learning for Vision: Some Possibilities: Robust, flexible and sufficiently general vision systems such as those for recognition and description of complex 3-dimensional objects require an adequate armamentarium of representations and learning mechanisms. This paper briefly analyzes the strengths and weaknesses of different learning paradigms such as symbol processing systems, connectionist networks, and statistical and syntactic pattern recognition systems as possible candidates for providing such capabilities and points out several promising directions for integrating multiple such paradigms in a synergistic fashion towards that goal. 1-hop neighbor's text information: Brain-Structured Networks That Perceive and Learn. : This paper specifies the main features of Brain-like, Neuronal, and Connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of such structures. The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g., houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation-discovery (feedback-guided growth of new links and nodes, subject to brain-like constraints, e.g., local receptive fields, global convergence-divergence). The information processing transforms discovered through generation are fine-tuned by feedback-guided reweighting of links. 
Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g., letters of the alphabet, cups, apples, bananas) through feedback-guided generation and reweighting. These show large improvements over networks that either lack brain-like structure or/and learn by reweighting of links alone. Target text information: Some Biases For Efficient Learning of Spatial, Temporal, and Spatio-Temporal Patterns. : This paper introduces and explores some representational biases for efficient learning of spatial, temporal, or spatio-temporal patterns in connectionist networks (CN) massively parallel networks of simple computing elements. It examines learning mechanisms that constructively build up network structures that encode information from environmental stimuli at successively higher resolutions as needed for the tasks (e.g., perceptual recognition) that the network has to perform. Some simple examples are presented to illustrate the the basic structures and processes used in such networks to ensure the parsimony of learned representations by guiding the system to focus its efforts at the minimal adequate resolution. Several extensions of the basic algorithm for efficient learning using multi-resolution representations of spatial, temporal, or spatio-temporal patterns are discussed. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,019
val
1-hop neighbor's text information: Sonderforschungsbereich 314 Künstliche Intelligenz, Wissensbasierte Systeme, KI-Labor am Lehrstuhl für Informatik IV, Numerical: 1-hop neighbor's text information: From knowledge bases to decision models. : Modeling techniques developed recently in the AI and uncertain reasoning communities permit significantly more flexible specifications of probabilistic knowledge. Specifically, graphical decision-modeling formalisms (belief networks, influence diagrams, and their variants) provide compact representation of probabilistic relationships, and support inference algorithms that automatically exploit the dependence structure in such models [1, 3, 4]. These advances have brought on a resurgence of interest in computational decision systems based on normative theories of belief and preference. However, graphical decision-modeling languages are still quite limited for purposes of knowledge representation because, while they can describe the relationships among particular event instances, they cannot capture general knowledge about probabilistic relationships across classes of events. The inability to capture general knowledge is a serious impediment for those AI tasks in which the relevant factors of a decision problem cannot be enumerated in advance. A graphical decision model encodes a particular set of probabilistic dependencies, a predefined set of decision alternatives, and a specific mathematical form for a utility function. Given a properly specified model, there exist relatively efficient algorithms for calculating posterior probabilities and optimal decision policies. A range of similar cases may be handled by parametric variations of the original model. However, if the structure of dependencies, the set of available alternatives, or the form of utility function changes from situation to situation, then a fixed network representation is no longer adequate. An ideal computational decision system would possess general, broad knowledge of a domain, but would have the ability to reason about the particular circumstances of any given decision problem within the domain. One obvious approach, which we call knowledge-based model construction (KBMC), is to generate a decision model dynamically at run-time, based on the problem description and information received thus far. Model construction consists of selection, instantiation, and assembly of causal and associational relationships from a broad knowledge base of general relationships among domain concepts. For example, suppose we wish to develop a system to recommend appropriate actions for maintaining a computer network. The natural graphical decision model would include chance 1-hop neighbor's text information: Accounting for context in plan recognition, with application to traffic monitoring. : Typical approaches to plan recognition start from a representation of an agent's possible plans, and reason evidentially from observations of the agent's actions to assess the plausibility of the various candidates. A more expansive view of the task (consistent with some prior work) accounts for the context in which the plan was generated, the mental state and planning process of the agent, and consequences of the agent's actions in the world. We present a general Bayesian framework encompassing this view, and focus on how context can be exploited in plan recognition. 
We demonstrate the approach on a problem in traffic monitoring, where the objective is to induce the plan of the driver from observation of vehicle movements. Starting from a model of how the driver generates plans, we show how the highway context can appropriately influence the recognizer's interpretation of observed driver behavior. Target text information: "The Automated Mapping of Plans for Plan Recognition," : To coordinate with other agents in its environment, an agent needs models of what the other agents are trying to do. When communication is impossible or expensive, this information must be acquired indirectly via plan recognition. Typical approaches to plan recognition start with a specification of the possible plans the other agents may be following, and develop special techniques for discriminating among the possibilities. Perhaps more desirable would be a uniform procedure for mapping plans to general structures supporting inference based on uncertain and incomplete observations. In this paper, we describe a set of methods for converting plans represented in a flexible procedural language to observation models represented as probabilistic belief networks, and we outline issues in applying the resulting probabilistic models of agents when coordinating activity in physical domains. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,623
val
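At its core, Bayesian plan recognition of the kind discussed in the record above scores candidate plans by prior times observation likelihood. A toy enumeration for the driver-plan setting, with invented numbers standing in for what a constructed belief network would supply:

```python
# Toy Bayesian plan recognition: posterior over candidate driver plans
# given observed maneuvers. Priors and likelihoods are invented, not
# taken from the papers.
priors = {"exit_highway": 0.2, "overtake": 0.3, "continue": 0.5}
likelihoods = {  # P(observation | plan) for each maneuver, per plan
    "exit_highway": {"signal_right": 0.8, "slow_down": 0.7},
    "overtake":     {"signal_right": 0.1, "slow_down": 0.2},
    "continue":     {"signal_right": 0.05, "slow_down": 0.3},
}

def posterior(observations):
    """P(plan | observations), assuming observations are conditionally
    independent given the plan."""
    joint = {p: priors[p] for p in priors}
    for obs in observations:
        for p in joint:
            joint[p] *= likelihoods[p][obs]
    z = sum(joint.values())
    return {p: v / z for p, v in joint.items()}

print(posterior(["signal_right", "slow_down"]))  # exit_highway dominates
```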
1-hop neighbor's text information: Efficient theta-subsumption based on graph algorithms. : The θ-subsumption problem is crucial to the efficiency of ILP learning systems. We discuss two θ-subsumption algorithms based on strategies for preselecting suitable matching literals. The class of clauses for which subsumption becomes polynomial is a superset of the deterministic clauses. We further map the general problem of θ-subsumption to a certain problem of finding a clique of fixed size in a graph, and in return show that a specialization of the pruning strategy of the Carraghan and Pardalos clique algorithm provides a dramatic reduction of the subsumption search space. We also present empirical results for the mesh design data set. 1-hop neighbor's text information: An Efficient Subsumption Algorithm for Inductive Logic Programming. : In this paper we investigate the efficiency of θ-subsumption (⊢θ), the basic provability relation in ILP. As deciding D ⊢θ C is NP-complete even if we restrict ourselves to linked Horn clauses and fix C to contain only a small constant number of literals, we investigate several restrictions of D. We first adapt the notion of determinate clauses used in ILP and show that θ-subsumption is decidable in polynomial time if D is determinate with respect to C. Secondly, we adapt the notion of k-local Horn clauses and show that θ-subsumption is efficiently computable for some reasonably small k. We then show how these results can be combined, to give an efficient reasoning procedure for determinate k-local Horn clauses, an ILP problem recently suggested to be polynomially predictable by Cohen (1993) by a simple counting argument. We finally outline how the θ-reduction algorithm, an essential part of every lgg ILP-learning algorithm, can be improved by these ideas. Target text information: Efficient Algorithms for θ-Subsumption: θ-subsumption is a decidable but incomplete approximation of logic implication, important to inductive logic programming and theorem proving. We show that by context-based elimination of possible matches a certain superset of the determinate clauses can be tested for subsumption in polynomial time. We discuss the relation between θ-subsumption and the clique problem, showing in particular that using additional prior knowledge about the substitution space only a small fraction of the search space can be identified as possibly containing globally consistent solutions, which leads to an effective pruning rule. We present empirical results, demonstrating that a combination of both of the above approaches provides an extreme reduction of computational effort. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
165
test
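For concreteness, here is a plain backtracking θ-subsumption test (find a substitution θ with Dθ ⊆ C, representing literals as predicate/argument tuples and variables as capitalized strings). It omits the literal-preselection and clique-based pruning that the papers above contribute, so it retains the worst-case exponential behavior expected of an NP-complete problem:

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match_literal(d_lit, c_lit, theta):
    """Try to extend substitution theta so that d_lit under theta equals c_lit."""
    if d_lit[0] != c_lit[0] or len(d_lit[1]) != len(c_lit[1]):
        return None
    theta = dict(theta)
    for dt, ct in zip(d_lit[1], c_lit[1]):
        if is_var(dt):
            if theta.setdefault(dt, ct) != ct:
                return None               # conflicting binding
        elif dt != ct:
            return None                   # constant mismatch
    return theta

def subsumes(D, C, theta=None):
    """Backtracking search for theta with D.theta a subset of C."""
    theta = theta or {}
    if not D:
        return True
    head, rest = D[0], D[1:]
    for c_lit in C:
        extended = match_literal(head, c_lit, theta)
        if extended is not None and subsumes(rest, C, extended):
            return True
    return False

# D = {q(X), r(Y)} theta-subsumes C = {q(a), r(b), s(a)} via {X: a, Y: b}.
D = [("q", ("X",)), ("r", ("Y",))]
C = [("q", ("a",)), ("r", ("b",)), ("s", ("a",))]
print(subsumes(D, C))  # -> True
```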
1-hop neighbor's text information: Q-learning with hidden-unit restarting. : Platt's resource-allocation network (RAN) (Platt, 1991a, 1991b) is modified for a reinforcement-learning paradigm and to "restart" existing hidden units rather than adding new units. After restarting, units continue to learn via back-propagation. The resulting restart algorithm is tested in a Q-learning network that learns to solve an inverted pendulum problem. Solutions are found faster on average with the restart algorithm than without it. 1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. Target text information: Reinforcement Learning, Neural Networks and PI Control Applied to a Heating Coil: An accurate simulation of a heating coil is used to compare the performance of a proportional plus integral (PI) controller, a neural network trained to predict the steady-state output of the PI controller, a neural network trained to minimize the n-step ahead error between the coil output and the set point, and a reinforcement learning agent trained to minimize the sum of the squared error over time. Although the PI controller works very well for this task, the neural networks produce improved performance. The reinforcement learning agent, when combined with a PI controller, learned to augment the PI control output for a small number of states for which control can be improved. Keywords: neural networks, reinforcement learning, PI control, HVAC I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
248
test
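The temporal-difference abstract above assigns credit from the difference between temporally successive predictions. A compact TD(0) sketch on a five-state random walk, a task of the kind commonly used to illustrate the method (step size and episode count are arbitrary):

```python
import random

# TD(0) prediction on a 5-state random walk. Episodes start in the
# middle; stepping off the right end yields reward 1, the left end 0.
N, ALPHA, GAMMA = 5, 0.1, 1.0
V = [0.5] * N                      # value estimates for the interior states

random.seed(0)
for _ in range(2000):
    s = N // 2
    while True:
        s_next = s + random.choice((-1, 1))
        if s_next < 0:             # fell off the left end
            V[s] += ALPHA * (0.0 - V[s]); break
        if s_next >= N:            # fell off the right end
            V[s] += ALPHA * (1.0 - V[s]); break
        # credit assigned from the difference of successive predictions
        V[s] += ALPHA * (GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])    # should approach [1/6, 2/6, ..., 5/6]
```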
1-hop neighbor's text information: "A General Lower Bound on the Number of Examples Needed for Learning," : We prove a lower bound of ( 1 * ln 1 ffi + VCdim(C) * ) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and * and ffi are the accuracy and confidence parameters. This improves the previous best lower bound of ( 1 * ln 1 ffi + VCdim(C)), and comes close to the known general upper bound of O( 1 ffi + VCdim(C) * ln 1 * ) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. 1-hop neighbor's text information: Self bounding learning algorithms: Most of the work which attempts to give bounds on the generalization error of the hypothesis generated by a learning algorithm is based on methods from the theory of uniform convergence. These bounds are a-priori bounds that hold for any distribution of examples and are calculated before any data is observed. In this paper we propose a different approach for bounding the generalization error after the data has been observed. A self-bounding learning algorithm is an algorithm which, in addition to the hypothesis that it outputs, outputs a reliable upper bound on the generalization error of this hypothesis. We first explore the idea in the statistical query learning framework of Kearns [10]. After that we give an explicit self bounding algorithm for learning algorithms that are based on local search. 1-hop neighbor's text information: General bounds on statistical query learning and PAC learning with noise via hypothesis boosting. : We derive general bounds on the complexity of learning in the Statistical Query model and in the PAC model with classification noise. We do so by considering the problem of boosting the accuracy of weak learning algorithms which fall within the Statistical Query model. This new model was introduced by Kearns [12] to provide a general framework for efficient PAC learning in the presence of classification noise. We first show a general scheme for boosting the accuracy of weak SQ learning algorithms, proving that weak SQ learning is equivalent to strong SQ learning. The boosting is efficient and is used to show our main result of the first general upper bounds on the complexity of strong SQ learning. Specifically, we derive simultaneous upper bounds with respect to * on the number of queries, O(log 2 1 * ), the Vapnik-Chervonenkis dimension of the query space, O(log 1 * ), and the inverse of the minimum tolerance, O( 1 * log 1 * ). In addition, we show that these general upper bounds are nearly optimal by describing a class of learning problems for which we simultaneously lower bound the number of queries by (log 1 * ) We further apply our boosting results in the SQ model to learning in the PAC model with classification noise. Since nearly all PAC learning algorithms can be cast in the SQ model, we can apply our boosting techniques to convert these PAC algorithms into highly efficient SQ algorithms. By simulating these efficient SQ algorithms in the PAC model with classification noise, we show that nearly all PAC algorithms can be converted into highly efficient PAC algorithms which tolerate classification noise. We give an upper bound on the sample complexity of these noise-tolerant PAC algorithms which is nearly optimal with respect to the noise rate. 
We also give upper bounds on space complexity and hypothesis size and show that these two measures are in fact independent of the noise rate. We note that the running times of these noise-tolerant PAC algorithms are efficient. This sequence of simulations also demonstrates that it is possible to boost the accuracy of nearly all PAC algorithms even in the presence of noise. This provides a partial answer to an open problem of Schapire [15] and the first theoretical evidence for an empirical result of Drucker, Schapire and Simard [4]. Target text information: On the sample complexity of noise-tolerant learning. : In this paper, we further characterize the complexity of noise-tolerant learning in the PAC model. Specifically, we show a general lower bound of Ω(log(1/δ)/(ε(1-2η)^2)) on the number of examples required for PAC learning in the presence of classification noise, where ε and δ are the accuracy and confidence parameters and η is the noise rate. Combined with a result of Simon, we effectively show that the sample complexity of PAC learning in the presence of classification noise is Θ(VC(F)/(ε(1-2η)^2)). Furthermore, we demonstrate the optimality of the general lower bound by providing a noise-tolerant learning algorithm for the class of symmetric Boolean functions which uses a sample size within a constant factor of this bound. Finally, we note that our general lower bound compares favorably with various general upper bounds for PAC learning in the presence of classification noise. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,331
test
1-hop neighbor's text information: Non-Deterministic, Constraint-Based Parsing of Human Genes: 1-hop neighbor's text information: A generalized hidden Markov model for the recognition of human genes in DNA. : We present a statistical model of genes in DNA. A Generalized Hidden Markov Model (GHMM) provides the framework for describing the grammar of a legal parse of a DNA sequence (Stormo & Haussler 1994). Probabilities are assigned to transitions between states in the GHMM and to the generation of each nucleotide base given a particular state. Machine learning techniques are applied to optimize these probabilities using a standardized training set. Given a new candidate sequence, the best parse is deduced from the model using a dynamic programming algorithm to identify the path through the model with maximum probability. The GHMM is flexible and modular, so new sensors and additional states can be inserted easily. In addition, it provides simple solutions for integrating cardinality constraints, reading frame constraints, "indels", and homology searching. The description and results of an implementation of such a gene-finding model, called Genie, is presented. The exon sensor is a codon frequency model conditioned on windowed nucleotide frequency and the preceding codon. Two neural networks are used, as in (Brunak, Engelbrecht, & Knudsen 1991), for splice site prediction. We show that this simple model performs quite well. For a cross-validated standard test set of 304 genes [ftp://www-hgc.lbl.gov/pub/genesets] in human DNA, our gene-finding system identified up to 85% of protein-coding bases correctly with a specificity of 80%. 58% of exons were exactly identified with a specificity of 51%. Genie is shown to perform favorably compared with several other gene-finding systems. 1-hop neighbor's text information: Prediction of human mRNA donor and acceptor sites from the DNA sequence. : Artificial neural networks have been applied to the prediction of splice site location in human pre-mRNA. A joint prediction scheme where prediction of transition regions between introns and exons regulates a cutoff level for splice site assignment was able to predict splice site locations with confidence levels far better than previously reported in the literature. The problem of predicting donor and acceptor sites in human genes is hampered by the presence of numerous amounts of false positives | in the paper the distribution of these false splice sites is examined and linked to a possible scenario for the splicing mechanism in vivo. When the presented method detects 95% of the true donor and acceptor sites it makes less than 0.1% false donor site assignments and less than 0.4% false acceptor site assignments. For the large data set used in this study this means that on the average there are one and a half false donor sites per true donor site and six false acceptor sites per true acceptor site. With the joint assignment method more than a fifth of the true donor sites and around one fourth of the true acceptor sites could be detected without accompaniment of any false positive predictions. Highly confident splice sites could not be isolated with a widely used weight matrix method or by separate splice site networks. 
A complementary relation between the confidence levels of the coding/non-coding and the separate splice site networks was observed, with many weak splice sites having sharp transitions in the coding/non-coding signal and many stronger splice sites having more ill-defined transitions between coding and non-coding. Target text information: Searls. Gene structure prediction by linguistic methods. : The higher-order structure of genes and other features of biological sequences can be described by means of formal grammars. These grammars can then be used by general-purpose parsers to detect and assemble such structures by means of syntactic pattern recognition. We describe a grammar and parser for eukaryotic protein-encoding genes, which by some measures is as effective as current connectionist and combinatorial algorithms in predicting gene structures for sequence database entries. Parameters on the grammar rules are optimized for several different species, and mixing experiments performed to determine the degree of species specificity and the relative importance of compositional, signal-based, and syntactic components in gene prediction. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,017
test
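The neighbor records above both rest on dynamic programming over hidden states. As a minimal sketch of that core step: a plain two-state HMM Viterbi decoder over a DNA string. It omits everything that makes Genie a generalized HMM (duration models, reading frames, signal sensors), and all states and probabilities below are invented for illustration.

import math

# Toy two-state HMM over DNA: the states stand in for exon vs. intron base
# composition. The numbers are made up; a real gene finder uses far richer
# sensors and a generalized (explicit-duration) HMM.
STATES = ("exon", "intron")
TRANS = {"exon": {"exon": 0.9, "intron": 0.1},
         "intron": {"exon": 0.1, "intron": 0.9}}
EMIT = {"exon": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
        "intron": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}
START = {"exon": 0.5, "intron": 0.5}

def viterbi(seq):
    # Most probable state path, computed in log space.
    v = [{s: math.log(START[s]) + math.log(EMIT[s][seq[0]]) for s in STATES}]
    back = []
    for base in seq[1:]:
        scores, ptrs = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: v[-1][p] + math.log(TRANS[p][s]))
            scores[s] = v[-1][best] + math.log(TRANS[best][s]) + math.log(EMIT[s][base])
            ptrs[s] = best
        v.append(scores)
        back.append(ptrs)
    state = max(STATES, key=lambda s: v[-1][s])
    path = [state]
    for ptrs in reversed(back):
        state = ptrs[state]
        path.append(state)
    return list(reversed(path))

print(viterbi("ACGGGCGATTTA"))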
1-hop neighbor's text information: Martinez (1993) The Design and Evaluation of a Rule Induction Algorithm. : This paper appeared in Proceedings of the 6th Australian Joint Conference on Artificial Intelligence, Melbourne, Australia, 17 Nov. 1993, pp. 348-355. 1-hop neighbor's text information: The minimum feature set problem. : This paper appeared in Neural Networks 7 (1994), no. 3, pp. 491-494. Target text information: NP-Completeness of Minimum Rule Sets: Rule induction systems seek to generate rule sets of minimal complexity. This paper develops a formal proof of the NP-Completeness of the problem of generating the simplest rule set (MIN RS) which accurately predicts examples in the training set for a particular type of generalization algorithm and complexity measure. The proof is then informally extended to cover a broader spectrum of complexity measures and learning algorithms. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
446
test
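Since MIN RS is NP-complete, practical rule induction falls back on heuristics. A common stand-in (an assumption here, not the paper's construction) is the greedy set-cover heuristic, treating each candidate rule as the set of positive examples it covers:

def greedy_rule_set(positives, rules):
    # Greedy set cover: repeatedly pick the rule covering the most
    # still-uncovered positive examples. Yields a small, not necessarily
    # minimum, rule set; exact minimization is what MIN RS shows to be hard.
    uncovered = set(positives)
    chosen = []
    while uncovered:
        name, covered = max(rules.items(), key=lambda kv: len(uncovered & kv[1]))
        if not uncovered & covered:
            raise ValueError("no rule covers the remaining examples")
        chosen.append(name)
        uncovered -= covered
    return chosen

# Invented toy rules, each named r1..r4 and covering some positive examples.
rules = {"r1": {1, 2, 3}, "r2": {3, 4}, "r3": {4, 5, 6}, "r4": {1, 6}}
print(greedy_rule_set(positives=range(1, 7), rules=rules))  # ['r1', 'r3']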
1-hop neighbor's text information: A similarity-based retrieval tool for software repositories. : In this paper we present a prototype of a flexible similarity-based retrieval system. Its flexibility is supported by allowing for an imprecisely specified query. Moreover, our algorithm allows for assessing whether the retrieved items are relevant to the initial context specified in the query. The presented system can be used as a supporting tool for a software repository. We also discuss system evaluation with regard to usefulness, scalability, applicability and comparability. Evaluation of the TA3 system on three domains gives us encouraging results, and integration of TA3 into a real software repository as a retrieval tool is ongoing. 1-hop neighbor's text information: On the informativeness of the DNA promoter sequences domain theory. : The DNA promoter sequences domain theory and database have become popular for testing systems that integrate empirical and analytical learning. This note reports a simple change and reinterpretation of the domain theory in terms of M-of-N concepts, involving no learning, that results in an accuracy of 93.4% on the 106 items of the database. Moreover, an exhaustive search of the space of M-of-N domain theory interpretations indicates that the expected accuracy of a randomly chosen interpretation is 76.5%, and that a maximum accuracy of 97.2% is achieved in 12 cases. This demonstrates the informativeness of the domain theory, without the complications of understanding the interactions between various learning algorithms and the theory. In addition, our results help characterize the difficulty of learning using the DNA promoters theory. 1-hop neighbor's text information: Inductive learning and case-based reasoning. : This paper describes an application of inductive learning techniques to case-based reasoning. We introduce two main forms of induction, define case-based reasoning and present a combination of both. The evaluation of the proposed system, called TA3, is carried out on a classification task, namely character recognition. We show how inductive knowledge improves knowledge representation and in turn flexibility of the system, its performance (in terms of classification accuracy) and its scalability. Target text information: Supporting flexibility: a case-based reasoning approach. : The AAAI Fall Symposium; Flexible Computation in Intelligent Systems: Results, Issues, and Opportunities. Nov. 9-11, 1996, Cambridge, MA. Abstract: This paper presents a case-based reasoning system, TA3. We address the flexibility of the case-based reasoning process, namely flexible retrieval of relevant experiences, by using a novel similarity assessment theory. To exemplify the advantages of such an approach, we have experimentally evaluated the system and compared its performance to that of a non-flexible version of TA3 and to other machine learning algorithms on several domains. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,358
test
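A hedged sketch of the flexible-retrieval idea running through the TA3 records above: match a query against stored cases, then relax the least important constraint until something is retrieved. This is an illustrative analogue, not TA3's actual similarity assessment; the attribute names, case base and priority ordering are all invented.

def matches(case, query, required):
    return all(case.get(a) == v for a, v in query.items() if a in required)

def retrieve_with_relaxation(cases, query, priority):
    # Try the full query first, then drop the weakest remaining constraint
    # (last in `priority`) until some case matches. Returns the hits and
    # the constraints that were still enforced.
    required = list(priority)
    while required:
        hits = [c for c in cases if matches(c, query, required)]
        if hits:
            return hits, required
        required.pop()          # relax the least important constraint
    return cases, []            # everything relaxed: all cases are candidates

cases = [{"lang": "C", "domain": "parsing", "size": "small"},
         {"lang": "C", "domain": "graphics", "size": "large"}]
query = {"lang": "C", "domain": "parsing", "size": "large"}
print(retrieve_with_relaxation(cases, query, priority=["lang", "domain", "size"]))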
1-hop neighbor's text information: A dataset decomposition approach to data mining and machine discovery: We present a novel data mining approach based on decomposition. In order to analyze a given dataset, the method decomposes it into a hierarchy of smaller and less complex datasets that can be analyzed independently. The method is experimentally evaluated on a real-world housing loans allocation dataset, showing that the decomposition can (1) discover meaningful intermediate concepts, (2) decompose a relatively complex dataset into datasets that are easy to analyze and comprehend, and (3) derive a classifier of high classification accuracy. We also show that human interaction has a positive effect on both the comprehensibility and classification accuracy. 1-hop neighbor's text information: Machine learning by function decomposition. : We present a new machine learning method that, given a set of training examples, induces a definition of the target concept in terms of a hierarchy of intermediate concepts and their definitions. This effectively decomposes the problem into smaller, less complex problems. The method is inspired by the Boolean function decomposition approach to the design of digital circuits. To cope with the high time complexity of finding an optimal decomposition, we propose a suboptimal heuristic algorithm. The method, implemented in program HINT (HIerarchy Induction Tool), is experimentally evaluated using a set of artificial and real-world learning problems. It is shown that the method performs well both in terms of classification accuracy and discovery of meaningful concept hierarchies. Target text information: Constructing intermediate concepts by decomposition of real functions. : In learning from examples it is often useful to expand an attribute-vector representation by intermediate concepts. The usual advantage of such structuring of the learning problem is that it makes the learning easier and improves the comprehensibility of induced descriptions. In this paper, we develop a technique for discovering useful intermediate concepts when both the class and the attributes are real-valued. The technique is based on a decomposition method originally developed for the design of switching circuits and recently extended to handle incompletely specified multi-valued functions. It was also applied to machine learning tasks. In this paper, we introduce the modifications needed to decompose real functions and to present them in symbolic form. The method is evaluated on a number of test functions. The results show that the method correctly decomposes fairly complex functions. The decomposition hierarchy does not depend on a given repertoire of basic functions (background knowledge). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,329
test
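The decomposition step described in the HINT records can be illustrated on a Boolean function: pick a split of the variables and count the distinct columns of the partition matrix; a small count means f(X) rewrites as g(h(bound vars), free vars). A minimal sketch of that column-multiplicity test (whether this matches HINT's exact procedure is an assumption):

from itertools import product

def column_multiplicity(f, bound_vars, free_vars, variables):
    # Count distinct columns of the partition matrix for f under the split
    # (bound_vars | free_vars). If the count is small, f decomposes as
    # g(h(bound), free) with h taking that many values.
    idx = {v: i for i, v in enumerate(variables)}
    columns = set()
    for b in product((0, 1), repeat=len(bound_vars)):
        col = []
        for fr in product((0, 1), repeat=len(free_vars)):
            x = [0] * len(variables)
            for v, bit in zip(bound_vars, b):
                x[idx[v]] = bit
            for v, bit in zip(free_vars, fr):
                x[idx[v]] = bit
            col.append(f(tuple(x)))
        columns.add(tuple(col))
    return len(columns)

# f(x1, x2, x3) = (x1 AND x2) XOR x3 decomposes as g(h(x1, x2), x3):
f = lambda x: (x[0] & x[1]) ^ x[2]
print(column_multiplicity(f, ["x1", "x2"], ["x3"], ["x1", "x2", "x3"]))  # -> 2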
1-hop neighbor's text information: Power system security margin prediction using radial basis function networks. : This paper presents and evaluates two algorithms for incrementally constructing Radial Basis Function Networks, a class of neural networks which looks more suitable for adaptive control applications than the more popular backpropagation networks. The first algorithm is derived from a previous method developed by Fritzke, while the second one has been inspired by the CART algorithm developed by Breiman for generating regression trees. Both algorithms proved to work well on a number of tests and exhibit comparable performance. An evaluation on the standard case study of the Mackey-Glass temporal series is reported. 1-hop neighbor's text information: Reinforcement learning for planning and control. : 1-hop neighbor's text information: Three-Dimensional Object Recognition Using an Unsupervised BCM Network: The Usefulness of Distinguishing Features: We propose an object recognition scheme based on a method for feature extraction from gray level images that corresponds to recent statistical theory, called projection pursuit, and is derived from a biologically motivated feature extracting neuron. To evaluate the performance of this method we use a set of very detailed psychophysical 3D object recognition experiments (Bulthoff and Edelman, 1992). Target text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. © Massachusetts Institute of Technology, 1994. This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory.
This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation. Support for the A. I. Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,628
val
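A minimal sketch of the regularization-to-RBF connection in the target abstract: Gaussian basis functions at fixed centers, with coefficients found by regularized least squares. The centers, width and regularization strength below are invented; a full GRBF would also adapt the centers themselves.

import numpy as np

def rbf_design(x, centers, width):
    # Gaussian radial basis matrix: Phi[i, j] = exp(-(x_i - c_j)^2 / (2 w^2)).
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

def fit_rbf(x, y, centers, width, lam=1e-3):
    # Regularized least squares for the coefficients; lam plays the role of
    # the smoothness (stabilizer) term in the regularization framework.
    phi = rbf_design(x, centers, width)
    return np.linalg.solve(phi.T @ phi + lam * np.eye(len(centers)), phi.T @ y)

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)   # noisy samples of sin
centers = np.linspace(0, 2 * np.pi, 8)              # fixed centers for simplicity
w = fit_rbf(x, y, centers, width=0.8)
resid = rbf_design(x, centers, 0.8) @ w - np.sin(x)
print(round(float(np.max(np.abs(resid))), 3))       # deviation from the clean signal stays modest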
1-hop neighbor's text information: Bias plus variance decomposition for zero-one loss functions. : We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich (1995) and Breiman (1996). Their decomposition suffers from some major shortcomings though (e.g., potentially negative variance), which our decomposition avoids. We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository. 1-hop neighbor's text information: W.S. Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods. : One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition. 1-hop neighbor's text information: Bias, variance and prediction error for classification rules. : We study the notions of bias and variance for classification rules. Following Efron (1978) we develop a decomposition of prediction error into its natural components. Then we derive bootstrap estimates of these components and illustrate how they can be used to describe the error behaviour of a classifier in practice. In the process we also obtain a bootstrap estimate of the error of a "bagged" classifier. Target text information: MAJORITY VOTE CLASSIFIERS: THEORY AND APPLICATIONS: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
434
test
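The margin notion used in the boosting record above is easy to compute directly. A sketch for unweighted voting (the cited work uses weighted votes; the vote table here is invented):

from collections import Counter

def margins(vote_table, labels):
    # Margin of each example under majority vote:
    # (votes for the true label - max votes for any other label) / #voters.
    # A positive margin means the ensemble classifies the example correctly.
    out = []
    for votes, true in zip(vote_table, labels):
        counts = Counter(votes)
        correct = counts[true]
        wrong = max((c for lbl, c in counts.items() if lbl != true), default=0)
        out.append((correct - wrong) / len(votes))
    return out

# Three base classifiers voting on four examples.
vote_table = [("a", "a", "b"), ("b", "b", "b"), ("a", "b", "b"), ("a", "a", "a")]
print(margins(vote_table, labels=["a", "b", "b", "b"]))  # [0.33, 1.0, 0.33, -1.0]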
1-hop neighbor's text information: Strategy Learning with Multilayer Connectionist Representations. : Results are presented that demonstrate the learning and fine-tuning of search strategies using connectionist mechanisms. Previous studies of strategy learning within the symbolic, production-rule formalism have not addressed fine-tuning behavior. Here a two-layer connectionist system is presented that develops its search from a weak to a task-specific strategy and fine-tunes its performance. The system is applied to a simulated, real-time, balance-control task. We compare the performance of one-layer and two-layer networks, showing that the ability of the two-layer network to discover new features and thus enhance the original representation is critical to solving the balancing task. 1-hop neighbor's text information: Optimal attitude control of satellites by artificial neural networks: a pilot study. : A pilot study is described on the practical application of artificial neural networks. The limit cycle of the attitude control of a satellite is selected as the test case. One of the sources of the limit cycle is a position-dependent error in the observed attitude. A Reinforcement Learning method is selected, which is able to adapt a controller such that a cost function is optimised. An estimate of the cost function is learned by a neural 'critic'. In our approach, the estimated cost function is directly represented as a function of the parameters of a linear controller. The critic is implemented as a CMAC network. Results from simulations show that the method is able to find optimal parameters without unstable behaviour. In particular in the case of large discontinuities in the attitude measurements, the method shows a clear improvement compared to the conventional approach: the RMS attitude error decreases approximately 30%. 1-hop neighbor's text information: Neuronlike adaptive elements that can solve difficult learning control problems. : Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universitat Munchen, Institut für Informatik. Servan-Schreiber, D., Cleeremans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department. Target text information: NEUROCONTROL BY REINFORCEMENT LEARNING: Reinforcement learning (RL) is a model-free tuning and adaptation method for control of dynamic systems. Unlike supervised learning, which is usually based on gradient descent techniques, RL does not require any model or sensitivity function of the process. Hence, RL can be applied to systems that are poorly understood, uncertain, nonlinear or for other reasons intractable with conventional methods. In reinforcement learning, the overall controller performance is evaluated by a scalar measure, called reinforcement. Depending on the type of the control task, reinforcement may represent an evaluation of the most recent control action or, more often, of an entire sequence of past control moves. In the latter case, the RL system learns how to predict the outcome of each individual control action. This prediction is then used to adjust the parameters of the controller. The mathematical background of RL is closely related to optimal control and dynamic programming.
This paper gives a comprehensive overview of RL methods and presents an application to the attitude control of a satellite. Some well-known applications from the literature are reviewed as well. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
577
test
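The critic/controller split in the target abstract can be shown in a few lines with a tabular actor-critic on a toy problem: the critic learns state values by temporal differences, and the TD error serves as the scalar reinforcement that adjusts the controller. This is a generic sketch with invented dynamics, not the satellite controller; there is no CMAC, and the actor update is the simplest preference-based form.

import math
import random

# Two-state toy problem: action 1 always yields reward 1, action 0 yields 0,
# and the state alternates deterministically. All constants are invented.
alpha, beta, gamma = 0.1, 0.1, 0.9
V = [0.0, 0.0]                    # critic: state values
theta = [[0.0, 0.0], [0.0, 0.0]]  # actor: action preferences per state

def policy(s):
    # Softmax action selection over the two preferences.
    z = [math.exp(p) for p in theta[s]]
    return 0 if random.random() * sum(z) < z[0] else 1

random.seed(0)
s = 0
for _ in range(2000):
    a = policy(s)
    reward, s_next = (1.0 if a == 1 else 0.0), 1 - s
    td_error = reward + gamma * V[s_next] - V[s]   # scalar reinforcement
    V[s] += alpha * td_error                       # critic update
    theta[s][a] += beta * td_error                 # actor (preference) update
    s = s_next

print([round(v, 2) for v in V], [[round(p, 2) for p in row] for row in theta])

After training, the preferences shift toward action 1 in both states and the values grow toward the discounted return of the rewarding policy.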
1-hop neighbor's text information: Waveshrink with semi-soft shrinkage functions. : Donoho and Johnstone's WaveShrink procedure has proven valuable for signal de-noising and non-parametric regression. WaveShrink is based on the principle of shrinking wavelet coefficients towards zero to remove noise. WaveShrink has very broad asymptotic near-optimality properties. In this paper, we introduce a new shrinkage scheme, semisoft, which generalizes hard and soft shrinkage. We study the properties of the shrinkage functions, and demonstrate that semisoft shrinkage offers advantages over both hard shrinkage (uniformly smaller risk and less sensitivity to small perturbations in the data) and soft shrinkage (smaller bias and overall L 2 risk). We also construct approximate pointwise confidence intervals for WaveShrink and address the problem of threshold selection. Target text information: Understanding waveshrink: Variance and bias estimation. : Research Report I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,220
test
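The three shrinkage rules discussed in the WaveShrink record have short closed forms; semisoft interpolates between hard and soft via two thresholds. A direct sketch (the threshold values below are invented; choosing them well is exactly the threshold-selection problem the record mentions):

import numpy as np

def hard(x, lam):
    return np.where(np.abs(x) > lam, x, 0.0)

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def semisoft(x, lam1, lam2):
    # Semisoft shrinkage: zero below lam1, identity above lam2, and a
    # linear ramp in between (requires lam1 < lam2).
    a = np.abs(x)
    ramp = np.sign(x) * lam2 * (a - lam1) / (lam2 - lam1)
    return np.where(a <= lam1, 0.0, np.where(a >= lam2, x, ramp))

x = np.array([-3.0, -1.5, -0.5, 0.5, 1.5, 3.0])
print(hard(x, 1.0), soft(x, 1.0), semisoft(x, 1.0, 2.0), sep="\n")

Hard shrinkage keeps large coefficients unbiased but is discontinuous; soft shrinkage is continuous but biases large coefficients; the semisoft ramp is the compromise between the two.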
1-hop neighbor's text information: Computational modeling of spatial attention: 1-hop neighbor's text information: Lending Direction to Neural Networks: We present a general formulation for a network of stochastic directional units. This formulation is an extension of the Boltzmann machine in which the units are not binary, but take on values on a cyclic range, between 0 and 2π radians. This measure is appropriate to many domains, representing cyclic or angular values, e.g., wind direction, days of the week, phases of the moon. The state of each unit in a Directional-Unit Boltzmann Machine (DUBM) is described by a complex variable, where the phase component specifies a direction; the weights are also complex variables. We associate a quadratic energy function, and corresponding probability, with each DUBM configuration. The conditional distribution of a unit's stochastic state is a circular version of the Gaussian probability distribution, known as the von Mises distribution. In a mean-field approximation to a stochastic DUBM, the phase component of a unit's state represents its mean direction, and the magnitude component specifies the degree of certainty associated with this direction. This combination of a value and a certainty provides additional representational power in a unit. We present a proof that the settling dynamics for a mean-field DUBM cause convergence to a free energy minimum. Finally, we describe a learning algorithm and simulations that demonstrate a mean-field DUBM's ability to learn interesting mappings. To appear in: Neural Networks. 1-hop neighbor's text information: L0: The First Four Years. Abstract: A summary of the progress and plans of: Target text information: "Learning to Segment Images Using Dynamic Feature Binding," : Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to which object they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it also has the capability of finding nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase locking related features. MAGIC's training procedure is a generalization of recurrent backpropagation to complex-valued units. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,479
test
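A hedged sketch of the phase-locking intuition shared by the DUBM and MAGIC records: complex-valued units are repeatedly updated toward the weighted sum of the others, so positively coupled units align in phase (same group) while negatively coupled ones settle roughly anti-phase. The coupling matrix, squashing function and sizes below are invented, not the papers' exact dynamics.

import numpy as np

# Mean-field-style relaxation for complex-valued units: phase plays the role
# of a group label, magnitude the confidence. Two groups of three units,
# excitatory within a group and inhibitory across groups.
rng = np.random.default_rng(1)
n = 6
W = np.ones((n, n)) - np.eye(n)        # uniform excitatory coupling
W[:3, 3:] = W[3:, :3] = -1.0           # the two groups inhibit each other
z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # random initial unit phases

for _ in range(50):
    net = W @ z                         # complex-valued net input
    mag = np.tanh(np.abs(net))          # squash magnitude into [0, 1)
    z = mag * np.exp(1j * np.angle(net))

print(np.round(np.angle(z), 2))         # phases cluster; groups end up roughly pi apart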
1-hop neighbor's text information: Speech recognition with dynamic Bayesian networks. : Dynamic Bayesian networks (DBNs) are a useful tool for representing complex stochastic processes. Recent developments in inference and learning in DBNs allow their use in real-world applications. In this paper, we apply DBNs to the problem of speech recognition. The factored state representation enabled by DBNs allows us to explicitly represent long-term articulatory and acoustic context in addition to the phonetic-state information maintained by hidden Markov models (HMMs). Furthermore, it enables us to model the short-term correlations among multiple observation streams within single time-frames. Given a DBN structure capable of representing these long- and short-term correlations, we applied the EM algorithm to learn models with up to 500,000 parameters. The use of structured DBN models decreased the error rate by 12 to 29% on a large-vocabulary isolated-word recognition task, compared to a discrete HMM; it also improved significantly on other published results for the same task. This is the first successful application of DBNs to a large-scale speech recognition problem. Investigation of the learned models indicates that the hidden state variables are strongly correlated with acoustic properties of the speech signal. 1-hop neighbor's text information: Probabilistic independence networks for hidden Markov probability models. : Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach. This report describes research done at the Department of Information and Computer Science, University of California, Irvine, the Jet Propulsion Laboratory, California Institute of Technology, Microsoft Research, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. The authors can be contacted as [email protected], [email protected], and [email protected]. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. MIJ gratefully acknowledges discussions with Steffen Lauritzen on the application of the IPF algorithm to UPINs. 1-hop neighbor's text information: Space-efficient inference in dynamic probabilistic networks. 
: Dynamic probabilistic networks (DPNs) are a useful tool for modeling complex stochastic processes. The simplest inference task in DPNs is monitoring, that is, computing a posterior distribution for the state variables at each time step given all observations up to that time. Recursive, constant-space algorithms are well known for monitoring in DPNs and other models. This paper is concerned with hindsight, that is, computing a posterior distribution given both past and future observations. Hindsight is an essential subtask of learning DPN models from data. Existing algorithms for hindsight in DPNs use O(SN) space and time, where N is the total length of the observation sequence and S is the state space size for each time step. They are therefore impractical for hindsight in complex models with long observation sequences. This paper presents an O(S log N) space, O(SN log N) time hindsight algorithm. We demonstrate the effectiveness of the algorithm in two real-world DPN learning problems. We also discuss the possibility of an O(S)-space, O(SN)-time algorithm. Target text information: Compositional modeling with DPNs. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,848
test
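For a plain discrete HMM, the hindsight task in the space-efficient-inference record is the classic forward-backward recursion; that O(S·N)-space baseline is what the O(S log N) algorithm improves on. A minimal sketch with invented transition and emission numbers:

import numpy as np

def smooth(T, E, prior, obs):
    # Forward-backward hindsight for a discrete HMM: returns
    # P(state_t | all observations). Per-step normalization keeps the
    # recursion stable and cancels in the final row normalization.
    S, N = T.shape[0], len(obs)
    alpha = np.zeros((N, S)); beta = np.ones((N, S))
    alpha[0] = prior * E[:, obs[0]]; alpha[0] /= alpha[0].sum()
    for t in range(1, N):
        alpha[t] = (alpha[t - 1] @ T) * E[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(N - 2, -1, -1):
        beta[t] = T @ (E[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

T = np.array([[0.9, 0.1], [0.2, 0.8]])       # state transition probabilities
E = np.array([[0.8, 0.2], [0.3, 0.7]])       # emission probabilities
print(np.round(smooth(T, E, np.array([0.5, 0.5]), [0, 0, 1, 1, 1]), 2))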
1-hop neighbor's text information: A heuristic approach to the discovery of macro-operators. : The negative effect is naturally more significant in the more complex domain. The graph for the simple domain crosses the 0 line earlier than the complex domain. That means that learning starts to be useful with weight greater than 0.6 for the simple domain and 0.7 for the complex domain. As we relax the optimality requirement more significantly (with W = 0.8), macro usage in the more complex domain becomes more advantageous. The purpose of the research described in this paper is to identify the parameters that affect deductive learning and to perform experiments systematically in order to understand the nature of those effects. The goal of this paper is to demonstrate the methodology of performing a parametric experimental study of deductive learning. The examples here include the study of two parameters: the point on the satisficing-optimizing scale that is used during the search carried out during problem solving time and during learning time. We showed that A*, which looks for optimal solutions, cannot benefit from macro learning, but as the strategy comes closer to best-first (satisficing search), the utility of macros increases. We also demonstrated that deductive learners that learn offline by solving training problems are sensitive to the type of search used during the learning. We showed that in general optimizing search is best for learning. It generates macros that increase the quality of solutions regardless of the search method used during problem solving. It also improves the efficiency for problem solvers that require a high level of optimality. The only drawback in using optimizing search is the increase in learning resources spent. We are aware of the fact that the results described here are not very surprising. The goal of the parametric study is not necessarily to find exciting results, but to obtain results, sometimes even previously known, in a controlled experimental environment. The work described here is only part of our research plan. We are currently in the process of extensive experimentation with all the parameters described here and also with others. We also intend to test the validity of the conclusions reached during the study by repeating some of the tests in several of the commonly known search problems. We hope that such systematic experimentation will help the research community to better understand the process of deductive learning and will serve as a demonstration of the experimental methodology that should be used in machine learning research. 1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. : Target text information: The role of forgetting in learning. : This paper is a discussion of the relationship between learning and forgetting. An analysis of the economics of learning is carried out and it is argued that knowledge can sometimes have a negative value. A series of experiments involving a program which learns to traverse state spaces is described. It is shown that most of the knowledge acquired is of negative value even though it is correct and was acquired solving similar problems. It is shown that the value of the knowledge depends on what else is known and that random forgetting can sometimes lead to substantial improvements in performance. It is concluded that research into knowledge acquisition should take seriously the possibility that knowledge may sometimes be harmful.
The view is taken that learning and forgetting are complementary processes which construct and maintain useful representations of experience. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
762
test
1-hop neighbor's text information: Learning with unreliable boundary queries. : We introduce a model for learning from examples and membership queries in situations where the boundary between positive and negative examples is somewhat ill-defined. In our model, queries near the boundary of a target concept may receive incorrect or "don't care" responses, and the distribution of examples has zero probability mass on the boundary region. The motivation behind our model is that in many cases the boundary between positive and negative examples is complicated or "fuzzy." However, one may still hope to learn successfully, because the typical examples that one sees do not come from that region. We present several positive results in this new model. We show how to learn the intersection of two arbitrary halfspaces when membership queries near the boundary may be answered incorrectly. Our algorithm is an extension of an algorithm of Baum [7, 6] that learns the intersection of two halfspaces whose bounding planes pass through the origin in the PAC-with-membership-queries model. We also describe algorithms for learning several subclasses of monotone DNF formulas. 1-hop neighbor's text information: Can pac learning algorithms tolerate random attribute noise? Algorithmica, : This paper studies the robustness of pac learning algorithms when the instance space is {0,1}^n, and the examples are corrupted by purely random noise affecting only the instances (and not the labels). In the past, conflicting results on this subject have been obtained: the "best agreement" rule can only tolerate small amounts of noise, yet in some cases large amounts of noise can be tolerated. We show that the truth lies somewhere between these two alternatives. For uniform attribute noise, in which each attribute is flipped independently at random with the same probability, we present an algorithm that pac learns monomials for any (unknown) noise rate less than 1/2. Contrasting this positive result, we show that product random attribute noise, where each attribute i is flipped randomly and independently with its own probability p_i, is nearly as harmful as malicious noise: no algorithm can tolerate more than a very small amount of such noise. Supported in part by a GE Foundation Junior Faculty Grant and NSF grant CCR-9110108. Part of this research was conducted while the author was at the M.I.T. Laboratory for Computer Science and supported by NSF grant DCR-8607494 and a grant from the Siemens Corporation. Net address: [email protected]. 1-hop neighbor's text information: Learning k-term DNF formulas with an incomplete membership oracle. : We consider the problem of learning k-term DNF formulas using equivalence queries and incomplete membership queries as defined by Angluin and Slonim. We demonstrate that this model can be applied to non-monotone classes. Namely, we describe a polynomial-time algorithm that exactly identifies a k-term DNF formula with a k-term DNF hypothesis using incomplete membership queries and equivalence queries from the class of DNF formulas. Target text information: Learning from Incomplete Boundary Queries Using Split Graphs and Hypergraphs (Extended Abstract): We consider learnability with membership queries in the presence of incomplete information. In the incomplete boundary query model introduced by Blum et al. [7], it is assumed that membership queries on instances near the boundary of the target concept may receive a "don't know" answer.
We show that zero-one threshold functions are efficiently learnable in this model. The learning algorithm uses split graphs when the boundary region has radius 1, and their generalization to split hypergraphs (for which we give a split-finding algorithm) when the boundary region has constant radius greater than 1. We use a notion of indistinguishability of concepts that is appropriate for this model. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
388
val
1-hop neighbor's text information: Evolving sensors in environments of controlled complexity. : Sensors represent a crucial link between the evolutionary forces shaping a species' relationship with its environment, and the individual's cognitive abilities to behave and learn. We report on experiments using a new class of "latent energy environments" (LEE) models to define environments of carefully controlled complexity which allow us to state bounds for random and optimal behaviors that are independent of strategies for achieving the behaviors. Using LEE's analytic basis for defining environments, we then use neural networks (NNets) to model individuals and a steady-state genetic algorithm to model an evolutionary process shaping the NNets, in particular their sensors. Our experiments consider two types of sensors, "contact" and "ambient", and variants where the NNets are not allowed to learn, learn via error correction from internal prediction, or learn via reinforcement. We find that predictive learning, even when using a larger repertoire of the more sophisticated ambient sensors, provides no advantage over NNets unable to learn. However, reinforcement learning using a small number of crude contact sensors does provide a significant advantage. Our analysis of these results points to a tradeoff between the genetic "robustness" of sensors and their informativeness to a learning system. 1-hop neighbor's text information: From complex environments to complex behaviors. Adaptive Behavior, : Adaptation of ecological systems to their environments is commonly viewed through some explicit fitness function defined a priori by the experimenter, or measured a posteriori by estimations based on population size and/or reproductive rates. These methods do not capture the role of environmental complexity in shaping the selective pressures that control the adaptive process. Ecological simulations enabled by computational tools such as the Latent Energy Environments (LEE) model allow us to characterize more closely the effects of environmental complexity on the evolution of adaptive behaviors. LEE is described in this paper. Its motivation arises from the need to vary complexity in controlled and predictable ways, without assuming the relationship of these changes to the adaptive behaviors they engender. This goal is achieved through a careful characterization of environments in which different forms of "energy" are well-defined. A genetic algorithm using endogenous fitness and local selection is used to model the evolutionary process. Individuals in the population are modeled by neural networks with simple sensory-motor systems, and variations in their behaviors are related to interactions with varying environments. We outline the results of three experiments that analyze different sources of environmental complexity and their effects on the collective behaviors of evolving populations. 1-hop neighbor's text information: Cliff (1993). "Issues in evolutionary robotics," From Animals to Animats 2 (Ed. : A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. Target text information: Environmental Effects on Minimal Behaviors in the Minimat World: The structure of an environment affects the behaviors of the organisms that have evolved in it.
How is that structure to be described, and how can its behavioral consequences be explained and predicted? We aim to establish initial answers to these questions by simulating the evolution of very simple organisms in simple environments with different structures. Our artificial creatures, called "minimats," have neither sensors nor memory and behave solely by picking amongst the actions of moving, eating, reproducing, and sitting, according to an inherited probability distribution. Our simulated environments contain only food (and multiple minimats) and are structured in terms of their spatial and temporal food density and the patchiness with which the food appears. Changes in these environmental parameters affect the evolved behaviors of minimats in different ways, and all three parameters are of importance in describing the minimat world. One of the most useful behavioral strategies that evolves is "looping" movement, which allows minimats, despite their lack of internal state, to match their behavior to the temporal (and spatial) structure of their environment. Ultimately we find that minimats construct their own environments through their individual behaviors, making the study of the impact of global environment structure on individual behavior much more complex. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
235
test
1-hop neighbor's text information: : Most connectionist modeling assumes noise-free inputs. This assumption is often violated. This paper introduces the idea of clearning, of simultaneously cleaning the data and learning the underlying structure. The cleaning step can be viewed as top-down processing (where the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). Clearning is used in conjunction with standard pruning. This paper discusses the statistical foundation of clearning, gives an interpretation in terms of a mechanical model, describes how to obtain both point predictions and conditional densities for the output, and shows how the resulting model can be used to discover properties of the data otherwise not accessible (such as the signal-to-noise ratio of the inputs). This paper uses clearning to predict foreign exchange rates, a noisy time series problem with well-known benchmark performances. On the out-of-sample 1993-1994 test period, clearning obtains an annualized return on investment above 30%, significantly better than an otherwise identical network. The final ultra-sparse network with 36 remaining non-zero input-to-hidden weights (of the 1035 initial weights between 69 inputs and 15 hidden units) is very robust against overfitting. This small network also lends itself to interpretation. 1-hop neighbor's text information: NONPARAMETRIC SELECTION OF INPUT VARIABLES FOR CONNECTIONIST LEARNING: 1-hop neighbor's text information: The Observer-Observation Dilemma in Neuro-Forecasting: Reliable Models From Unreliable Data Through CLEARNING: This paper introduces the idea of clearning, of simultaneously cleaning data and learning the underlying structure. The cleaning step can be viewed as top-down processing (the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). After discussing the statistical foundation of the proposed method from a maximum likelihood perspective, we apply clearning to a notoriously hard problem where benchmark performances are very well known: the prediction of foreign exchange rates. On the difficult 1993-1994 test period, clearning in conjunction with pruning yields an annualized return between 35 and 40% (out-of-sample), significantly better than an otherwise identical network trained without cleaning. The network was started with 69 inputs and 15 hidden units and ended up with only 39 non-zero weights between inputs and hidden units. The resulting ultra-sparse final architectures obtained with clearning and pruning are immune against overfitting, even on very noisy problems since the cleaned data allow for a simpler model. Apart from the very competitive performance, clearning gives insight into the data: we show how to estimate the overall signal-to-noise ratio of each input variable, and we show that error estimates for each pattern can be used to detect and remove outliers, and to replace missing or corrupted data by cleaned values. Clearning can be used in any nonlinear regression or classification problem. Target text information: Predicting probability distributions: A connectionist approach. : Most traditional prediction techniques deliver the mean of the probability distribution (a single point). For multimodal processes, instead of predicting the mean of the probability distribution, it is important to predict the full distribution. 
This article presents a new connectionist method to predict the conditional probability distribution in response to an input. The main idea is to transform the problem from a regression to a classification problem. The conditional probability distribution network can perform both direct predictions and iterated predictions, a task specific to time series problems. We compare our method to fuzzy logic and discuss important differences, and also demonstrate the architecture on two time series. The first is the benchmark laser series used in the Santa Fe competition, a deterministic chaotic system. The second is a time series from a Markov process which exhibits structure on two time scales. The network produces multimodal predictions for this series. We compare the predictions of the network with a nearest-neighbor predictor and find that the conditional probability network is more than twice as likely a model. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,354
test
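The regression-to-classification trick in the target abstract can be sketched directly: discretize y into bins and fit a softmax model, whose output is then a predicted conditional distribution over y-bins. Everything below (the fixed features, bin count, learning rate, and the bimodal toy data) is an invented illustration, not the paper's network.

import numpy as np

rng = np.random.default_rng(0)
n, n_bins = 2000, 10
x = rng.uniform(-1, 1, n)
y = np.where(rng.random(n) < 0.5, x, -x) + 0.05 * rng.standard_normal(n)  # bimodal y|x
edges = np.linspace(y.min(), y.max(), n_bins + 1)
t = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)     # bin-index targets

X = np.stack([np.ones(n), x, x ** 2], axis=1)             # simple fixed features
W = np.zeros((3, n_bins))
for _ in range(500):                                       # batch gradient descent
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * X.T @ (p - np.eye(n_bins)[t]) / n           # softmax cross-entropy gradient

q = np.array([1.0, 0.8, 0.64]) @ W                        # query the model at x = 0.8
q = np.exp(q - q.max()); q /= q.sum()
print(np.round(q, 2))   # mass concentrates near the bins around y = +0.8 and y = -0.8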
1-hop neighbor's text information: Learning Viewpoint Invariant Representations of Faces in an Attractor Network: In natural visual experience, different views of an object tend to appear in close temporal proximity as an animal manipulates the object or navigates around it. We investigated the ability of an attractor network to acquire view invariant visual representations by associating first neighbors in a pattern sequence. The pattern sequence contains successive views of faces of ten individuals as they change pose. Under the network dynamics developed by Griniasty, Tsodyks & Amit (1993), multiple views of a given subject fall into the same basin of attraction. We use an independent component (ICA) representation of the faces for the input patterns (Bell & Sejnowski, 1995). The ICA representation has advantages over the principal component representation (PCA) for viewpoint-invariant recognition both with and without the attractor network, suggesting that ICA is a better representation than PCA for object recognition. 1-hop neighbor's text information: Learning Viewpoint Invariant Face Representations from Visual Experience by Temporal Association: In natural visual experience, different views of an object or face tend to appear in close temporal proximity. A set of simulations is presented which demonstrate how viewpoint invariant representations of faces can be developed from visual experience by capturing the temporal relationships among the input patterns. The simulations explored the interaction of temporal smoothing of activity signals with Hebbian learning (Foldiak, 1991) in both a feed-forward system and a recurrent system. The recurrent system was a generalization of a Hopfield network with a lowpass temporal filter on all unit activities. Following training on sequences of graylevel images of faces as they changed pose, multiple views of a given face fell into the same basin of attraction, and the system acquired representations of faces that were approximately viewpoint invariant. 1-hop neighbor's text information: Implicit learning in 3D object recognition: The importance of temporal context: A novel architecture and set of learning rules for cortical self-organization is proposed. The model is based on the idea that multiple information channels can modulate one another's plasticity. Features learned from bottom-up information sources can thus be influenced by those learned from contextual pathways, and vice versa. A maximum likelihood cost function allows this scheme to be implemented in a biologically feasible, hierarchical neural circuit. In simulations of the model, we first demonstrate the utility of temporal context in modulating plasticity. The model learns a representation that categorizes people's faces according to identity, independent of viewpoint, by taking advantage of the temporal continuity in image sequences. In a second set of simulations, we add plasticity to the contextual stream and explore variations in the architecture. In this case, the model learns a two-tiered representation, starting with a coarse view-based clustering and proceeding to a finer clustering of more specific stimulus features. This model provides a tenable account of how people may perform 3D object recognition in a hierarchical, bottom-up fashion. Target text information: "A self-organizing multiple-view representation of 3-D objects," : We explore representation of 3D objects in which several distinct 2D views are stored for each object. 
We demonstrate the ability of a two-layer network of thresholded summation units to support such representations. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,984
test
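The temporal-association mechanism running through this record is often sketched with Foldiak's trace rule: a winner-take-all layer whose Hebbian update uses lowpass-filtered activity, so views adjacent in time bind to the same unit. A minimal sketch (the unit count, trace constant and toy "views" are invented):

import numpy as np

def trace_hebbian(views, n_units=2, delta=0.3, lr=0.05, epochs=20, seed=0):
    # Trace learning: units compete on the current input, but the Hebbian
    # update uses a lowpass-filtered (trace) activity, pulling temporally
    # adjacent views onto the same unit.
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, views.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    trace = np.zeros(n_units)
    for _ in range(epochs):
        for x in views:                       # views arrive in temporal order
            y = np.zeros(n_units)
            y[np.argmax(W @ x)] = 1.0         # winner-take-all response
            trace = (1 - delta) * trace + delta * y
            W += lr * np.outer(trace, x)      # Hebb with traced activity
            W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W

# Two "objects", three noisy views each, presented as a temporal sequence.
rng = np.random.default_rng(1)
a = np.abs(rng.normal([1, 1, 0, 0], 0.1, (3, 4)))
b = np.abs(rng.normal([0, 0, 1, 1], 0.1, (3, 4)))
print(np.round(trace_hebbian(np.vstack([a, b])), 2))  # typically one unit per object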
1-hop neighbor's text information: Learning in the presence of malicious errors, : In this paper we study an extension of the distribution-free model of learning introduced by Valiant [23] (also known as the probably approximately correct or PAC model) that allows the presence of malicious errors in the examples given to a learning algorithm. Such errors are generated by an adversary with unbounded computational power and access to the entire history of the learning algorithm's computation. Thus, we study a worst-case model of errors. Our results include general methods for bounding the rate of error tolerable by any learning algorithm, efficient algorithms tolerating nontrivial rates of malicious errors, and equivalences between problems of learning with errors and standard combinatorial optimization problems. 1-hop neighbor's text information: Toward efficient agnostic learning. : In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. 1-hop neighbor's text information: Efficient distribution-free learning of probabilistic concepts. : In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior: thus, the same input may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. We adopt from the Valiant model of learning [27] the demands that learning algorithms be efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. In addition to giving many efficient algorithms for learning natural classes of p-concepts, we study and develop in detail an underlying theory of learning p-concepts. Target text information: Learning Switching Concepts: We consider learning in situations where the function used to classify examples may switch back and forth between a small number of different concepts during the course of learning. We examine several models for such situations: oblivious models in which switches are made independently of the selection of examples, and more adversarial models in which a single adversary controls both the concept switches and example selection. We show relationships between the more benign models and the p-concepts of Kearns and Schapire, and present polynomial-time algorithms for learning switches between two k-DNF formulas.
For the most adversarial model, we present a model of success patterned after the popular competitive analysis used in studying on-line algorithms. We describe a randomized query algorithm for such adversarial switches between two monotone disjunctions that is "1-competitive" in that the total number of mistakes plus queries is with high probability bounded by the number of switches plus some fixed polynomial in n (the number of variables). We also use notions described here to provide sufficient conditions under which learning a p-concept class "with a decision rule" implies being able to learn the class "with a model of probability." I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
267
test
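All three neighbour abstracts in this record work inside the PAC framework. For orientation only, here is the standard textbook sample-complexity bound for a finite hypothesis class H and a consistent learner; it is general background, not a result of any of the papers above. With probability at least 1 - delta, any hypothesis consistent with the sample has error at most epsilon once the sample size m satisfies:

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```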
1-hop neighbor's text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. © Massachusetts Institute of Technology, 1994 This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation. Support for the A. I. Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. 1-hop neighbor's text information: A practical Bayesian framework for backpropagation networks. : A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. The Bayesian approach helps detect poor underlying assumptions in learning models.
For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a). This framework is due to Gull and Skilling (Gull, 1989a). Target text information: : Draft A Brief Introduction to Neural Networks Richard D. De Veaux Lyle H. Ungar Williams College University of Pennsylvania Abstract Artificial neural networks are being used with increasing frequency for high dimensional problems of regression or classification. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques. KEYWORDS: nonparametric regression; function approximation; backpropagation. 1 Introduction Networks that mimic the way the brain works; computer programs that actually LEARN patterns; forecasting without having to know statistics. These are just some of the many claims and attractions of artificial neural networks. Neural networks (we will henceforth drop the term artificial, unless we need to distinguish them from biological neural networks) seem to be everywhere these days, and at least in their advertising, are able to do all that statistics can do without all the fuss and bother of having to do anything except buy a piece of software. Neural networks have been successfully used for many different applications including robotics, chemical process control, speech recognition, optical character recognition, credit card fraud detection, interpretation of chemical spectra and vision for autonomous navigation of vehicles. (Pointers to the literature are given at the end of this article.) In this article we will attempt to explain how one particular type of neural network, feedforward networks with sigmoidal activation functions ("backpropagation networks") actually works, how it is "trained", and how it compares with some more well known statistical techniques. As an example of why someone would want to use a neural network, consider the problem of recognizing hand written ZIP codes on letters. This is a classification problem, where the I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
770
test
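The GRBF abstract above relates radial basis function networks to splines and kernel methods. For concreteness, here is a minimal sketch of such a network with fixed Gaussian centres and a linear output layer fit by least squares; the data, centres, and width are illustrative assumptions, not values from any of the papers.

```python
import numpy as np

# Minimal sketch of a radial basis function network: fixed Gaussian
# centres, linear output weights fit by least squares. Data, centres,
# and width are illustrative assumptions.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

centres = np.linspace(-3, 3, 10).reshape(-1, 1)
width = 0.7

def design(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))    # Gaussian basis activations

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)  # linear output layer
y_hat = design(X) @ w
```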
1-hop neighbor's text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. © Massachusetts Institute of Technology, 1994 This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation. Support for the A. I. Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. 1-hop neighbor's text information: Learning the peg-into-hole assembly with a connectionist reinforcement technique. : The paper presents a learning controller that is capable of increasing insertion speed during consecutive peg-into-hole operations, without increasing the contact force level. Our aim is to find a better relationship between measured forces and the controlled velocity, without using a complicated (human generated) model. We followed a connectionist approach. Two learning phases are distinguished. First the learning controller is trained (or initialised) in a supervised way by a suboptimal task frame controller. Then a reinforcement learning phase follows. The controller consists of two networks: (1) the policy network and (2) the exploration network. On-line robotic exploration plays a crucial role in obtaining a better policy. Optionally, this architecture can be extended with a third network: the reinforcement network.
The learning controller is implemented on a CAD-based contact force simulator. In contrast with most other related work, the experiments are simulated in 3D with 6 degrees of freedom. Performance of a peg-into-hole task is measured in insertion time and average/maximum force level. The fact that a better performance can be obtained in this way, demonstrates the importance of model-free learning techniques for repetitive robotic assembly tasks. The paper presents the approach and simulation results. Keywords: robotic assembly, peg-into-hole, artificial neural networks, reinforcement learning. Target text information: Learning Controllers from Examples: Today there is a great interest in discovering methods that allow a faster design and development of real-time control software. Control theory helps when linear controllers have to be developed but it does not support the generation of non-linear controllers, which in many cases (such as in compliant motion control) are needed, a motivation for searching alternative, empirical techniques for generating controllers. In this paper, it is discussed how Machine Learning has been applied to the Function, and Locally Receptive Field Function Approximators. Three integrated learning algorithms, two of which are original, are described and then tried on two experimental test cases. The first test case is provided by an industrial robot KUKA IR-361 engaged into the "peg-into-hole" task, while the second is a classical prediction task on the Mackey-Glass chaotic series. From the experimental comparison, it appears that both Fuzzy Controllers and RBFNs synthesised from examples are excellent approximators, and that they can be even more accurate than MLPs. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
437
val
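The record above uses the Mackey-Glass chaotic series as a prediction benchmark. For reference, here is a sketch that generates that series by Euler integration of the usual delay-differential equation dx/dt = 0.2 x(t-tau) / (1 + x(t-tau)^10) - 0.1 x(t) with tau = 17; the step size and initial history are assumptions.

```python
import numpy as np

# Sketch of the Mackey-Glass delay-differential series, integrated with a
# simple Euler scheme. Step size and initial history are assumptions.
def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10, dt=1.0, x0=1.2):
    hist = int(tau / dt)
    x = np.full(n + hist, x0)
    for t in range(hist, n + hist - 1):
        x_tau = x[t - hist]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau ** p) - gamma * x[t])
    return x[hist:]

series = mackey_glass(1000)   # chaotic regime for tau = 17
```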
1-hop neighbor's text information: A bootstrap evaluation of the effect of data splitting on financial time series, : This article exposes problems of the commonly used technique of splitting the available data into training, validation, and test sets that are held fixed, warns about drawing too strong conclusions from such static splits, and shows potential pitfalls of ignoring variability across splits. Using a bootstrap or resampling method, we compare the uncertainty in the solution stemming from the data splitting with neural network specific uncertainties (parameter initialization, choice of number of hidden units, etc.). We present two results on data from the New York Stock Exchange. First, the variation due to different resamplings is significantly larger than the variation due to different network conditions. This result implies that it is important to not over-interpret a model (or an ensemble of models) estimated on one specific split of the data. Second, on each split, the neural network solution with early stopping is very close to a linear model; no significant nonlinearities are extracted. 1-hop neighbor's text information: "Modeling volatility using state space models", : In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude due to their ignoring the distinction between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. 1-hop neighbor's text information: Nonlinear trading models through Sharpe Ratio maximization. : Working Paper IS-97-005, Leonard N. Stern School of Business, New York University. In: Decision Technologies for Financial Engineering (Proceedings of the Fourth International Conference on Neural Networks in the Capital Markets, NNCM-96), pp. 3-22. Edited by A.S.Weigend, Y.S.Abu-Mostafa, and A.-P.N.Refenes. Singapore: World Scientific, 1997. http://www.stern.nyu.edu/~aweigend/Research/Papers/SharpeRatio While many trading strategies are based on price prediction, traders in financial markets are typically interested in risk-adjusted performance such as the Sharpe Ratio, rather than price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. 
The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can be achieved with this nonlinear approach. Target text information: TO IMPROVE FORECASTING: Working Paper IS-97-007, Leonard N. Stern School of Business, New York University. In: Journal of Computational Intelligence in Finance 6 (1998) 14-23. (Special Issue on "Improving Generalization of Nonlinear Financial Forecasting Models".) http://www.stern.nyu.edu/~aweigend/Research/Papers/InteractionLayer Abstract. Predictive models for financial data are often based on a large number of plausible inputs that are potentially nonlinearly combined to yield the conditional expectation of a target, such as a daily return of an asset. This paper introduces a new architecture for this task: On the output side, we predict dynamical variables such as first derivatives and curvatures on different time spans. These are subsequently combined in an interaction output layer to form several estimates of the variable of interest. Those estimates are then averaged to yield the final prediction. Independently from this idea, on the input side, we propose a new internal preprocessing layer connected with a diagonal matrix of positive weights to a layer of squashing functions. These weights adapt for each input individually and learn to squash outliers in the input. We apply these two ideas to the real world example of the daily predictions of the German stock index DAX (Deutscher Aktien Index), and compare the results to a network with a single output. The new six layer architecture is more stable in training due to two facts: (1) More information is flowing back from the outputs to the input in the backward pass; (2) The constraint of predicting first and second derivatives focuses the learning on the relevant variables for the dynamics. The architectures are compared from both the training perspective (squared errors, robust errors), and from the trading perspective (annualized returns, percent correct, Sharpe ratio). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
673
test
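Both this record's target and its Sharpe-ratio neighbour evaluate models from the trading perspective. As a minimal illustration of that metric, here is an annualized Sharpe ratio computation; the 252-day trading year and the synthetic return series are assumptions, not details from the papers.

```python
import numpy as np

# Sketch of the trading-side evaluation: annualized Sharpe ratio of a
# daily return series. The 252-day year and synthetic data are assumptions.
rng = np.random.default_rng(2)
daily_returns = rng.normal(loc=0.0004, scale=0.01, size=1000)

def sharpe_ratio(r, periods_per_year=252):
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

print(f"annualized Sharpe ratio: {sharpe_ratio(daily_returns):.2f}")
```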
1-hop neighbor's text information: Exponentially many local minima for single neurons. : We show that for a single neuron with the logistic function as the transfer function the number of local minima of the error function based on the square loss can grow exponentially in the dimension. 1-hop neighbor's text information: Statistical evaluation of neural network experiments: Minimum requirements and current practice. : 1-hop neighbor's text information: What size neural network gives optimal generalization? Convergence properties of backpropagation. : Technical Report UMIACS-TR-96-22 and CS-TR-3617 Institute for Advanced Computer Studies University of Maryland College Park, MD 20742 Abstract One of the most important aspects of any machine learning paradigm is how it scales according to problem size and complexity. Using a task with known optimal training error, and a pre-specified maximum number of training updates, we investigate the convergence of the backpropagation algorithm with respect to a) the complexity of the required function approximation, b) the size of the network in relation to the size required for an optimal solution, and c) the degree of noise in the training data. In general, for a) the solution found is worse when the function to be approximated is more complex, for b) oversized networks can result in lower training and generalization error in certain cases, and for c) the use of committee or ensemble techniques can be more beneficial as the level of noise in the training data is increased. For the experiments we performed, we do not obtain the optimal solution in any case. We further support the observation that larger networks can produce better training and generalization error using a face recognition example where a network with many more parameters than training points generalizes better than smaller networks. Target text information: On the Distribution of Performance from Multiple Neural Network Trials: Andrew D. Back was with the Department of Electrical and Computer Engineering, University of Queensland. St. Lucia, Australia. He is now with the Brain Information Processing Group, Frontier Research Program, RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako-shi, Saitama 351-01, Japan Abstract The performance of neural network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions. However, in many cases, the distribution of the individual results does not approximate a Gaussian distribution, may not be symmetric, and may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task which we consider, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
249
test
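The target paper's point, that mean and standard deviation can misrepresent skewed or multimodal trial results, is easy to demonstrate. The sketch below contrasts the two summaries on synthetic run-to-run errors; the two-mode error values are invented for illustration.

```python
import numpy as np

# Summarising repeated training runs by mean +/- std can mislead when the
# distribution of results is skewed or multimodal. Error values are synthetic.
rng = np.random.default_rng(3)
errors = np.concatenate([rng.normal(0.05, 0.01, 80),   # runs that converge well
                         rng.normal(0.30, 0.05, 20)])  # runs stuck in poor minima

print(f"mean +/- std: {errors.mean():.3f} +/- {errors.std(ddof=1):.3f}")
print("quartiles   :", np.percentile(errors, [25, 50, 75]).round(3))
```

The quartiles reveal the bulk of runs near 0.05 and a minority near 0.30, which the single mean +/- std summary hides.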
1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. : 1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. 1-hop neighbor's text information: The optimal number of learning samples and hidden units in function approximation with a feedforward network. : This paper presents a methodology to estimate the optimal number of learning samples and the number of hidden units needed to obtain a desired accuracy of a function approximation by a feedforward network. The representation error and the generalization error, components of the total approximation error are analyzed and the approximation accuracy of a feedforward network is investigated as a function of the number of hidden units and the number of learning samples. Based on the asymptotical behavior of the approximation error, an asymptotical model of the error function (AMEF) is introduced of which the parameters can be determined experimentally. An alternative model of the error function, which include theoretical results about general bounds of approximation, is also analyzed. In combination with knowledge about the computational complexity of the learning rule an optimal learning set size and number of hidden units can be found resulting in a minimum computation time for a given desired precision of the approximation. This approach was applied to optimize the learning of the camera-robot mapping of a visually guided robot arm and a complex logarithm function approximation. Target text information: TD Learning of Game Evaluation Functions with Hierarchical Neural Architectures. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,574
test
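The temporal-difference rule at the heart of this record is compact enough to sketch directly: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)). The 5-state random walk below is a standard illustrative environment, an assumption rather than the game-evaluation setting of the target paper.

```python
import numpy as np

# Minimal sketch of TD(0) prediction on a random walk with absorbing ends.
rng = np.random.default_rng(4)
n_states, alpha, gamma = 5, 0.1, 1.0
V = np.zeros(n_states + 2)                   # states 0 and 6 are absorbing

for _ in range(500):
    s = n_states // 2 + 1                    # start in the middle
    while 0 < s < n_states + 1:
        s_next = s + rng.choice([-1, 1])
        r = 1.0 if s_next == n_states + 1 else 0.0
        V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD update
        s = s_next
```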
1-hop neighbor's text information: Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. : In this paper, we present an objective function formulation of the BCM theory of visual cortical plasticity that permits us to demonstrate the connection between the unsupervised BCM learning procedure and various statistical methods, in particular, that of Projection Pursuit. This formulation provides a general method for stability analysis of the fixed points of the theory and enables us to analyze the behavior and the evolution of the network under various visual rearing conditions. It also allows comparison with many existing unsupervised methods. This model has been shown successful in various applications such as phoneme and 3D object recognition. We thus have the striking and possibly highly significant result that a biological neuron is performing a sophisticated statistical procedure. 1-hop neighbor's text information: Generalization to local remappings of the visuomotor coordinate transformation: 1-hop neighbor's text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. © Massachusetts Institute of Technology, 1994 This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation. Support for the A. I.
Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. Target text information: Observation on cortical mechanisms for object recognition and learning. : This paper sketches several aspects of a hypothetical cortical architecture for visual object recognition, based on a recent computational model. The scheme relies on modules for learning from examples, such as Hyperbf-like networks, as its basic components. Such models are not intended to be precise theories of the biological circuitry but rather to capture a class of explanations we call Memory-Based Models (MBM) that contains sparse population coding, memory-based recognition and codebooks of prototypes. Unlike the sigmoidal units of some artificial neural networks, the units of MBMs are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli. We will describe how an example of MBM may be realized in terms of cortical circuitry and biophysical mechanisms, consistent with psychophysical and physiological data. A number of predictions, testable with physiological techniques, are made. This memo describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research is sponsored by grants from the Office of Naval Research under contracts N00014-92-J-1879 and N00014-93-1-0385; and by a grant from the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program). Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ARPA contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at MIT's Whitaker College. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,081
test
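The target abstract describes memory-based recognition with sparse population coding over stored prototypes. As loose intuition only, here is a sketch of a bank of Gaussian-tuned units voting on object identity; the prototypes, labels, and tuning width are invented for illustration and do not reproduce the paper's circuitry.

```python
import numpy as np

# Sketch of a population-code readout: units Gaussian-tuned to stored
# prototype views, with a graded, normalised vote deciding object identity.
rng = np.random.default_rng(5)
prototypes = rng.normal(size=(8, 16))        # stored example views
labels = rng.integers(0, 2, size=8)          # object identity per prototype

def recognise(x, sigma=1.0):
    a = np.exp(-((prototypes - x) ** 2).sum(axis=1) / (2 * sigma ** 2))
    return float(labels @ a / a.sum())       # evidence for object 1

x = prototypes[3] + 0.1 * rng.normal(size=16)
print(f"evidence for object 1: {recognise(x):.3f}")
```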
1-hop neighbor's text information: "Modeling volatility using state space models", : In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude due to their ignoring the distinction between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. 1-hop neighbor's text information: TO IMPROVE FORECASTING: Working Paper IS-97-007, Leonard N. Stern School of Business, New York University. In: Journal of Computational Intelligence in Finance 6 (1998) 14-23. (Special Issue on "Improving Generalization of Nonlinear Financial Forecasting Models".) http://www.stern.nyu.edu/~aweigend/Research/Papers/InteractionLayer Abstract. Predictive models for financial data are often based on a large number of plausible inputs that are potentially nonlinearly combined to yield the conditional expectation of a target, such as a daily return of an asset. This paper introduces a new architecture for this task: On the output side, we predict dynamical variables such as first derivatives and curvatures on different time spans. These are subsequently combined in an interaction output layer to form several estimates of the variable of interest. Those estimates are then averaged to yield the final prediction. Independently from this idea, on the input side, we propose a new internal preprocessing layer connected with a diagonal matrix of positive weights to a layer of squashing functions. These weights adapt for each input individually and learn to squash outliers in the input. We apply these two ideas to the real world example of the daily predictions of the German stock index DAX (Deutscher Aktien Index), and compare the results to a network with a single output. The new six layer architecture is more stable in training due to two facts: (1) More information is flowing back from the outputs to the input in the backward pass; (2) The constraint of predicting first and second derivatives focuses the learning on the relevant variables for the dynamics. The architectures are compared from both the training perspective (squared errors, robust errors), and from the trading perspective (annualized returns, percent correct, Sharpe ratio). 
Target text information: A bootstrap evaluation of the effect of data splitting on financial time series, : This article exposes problems of the commonly used technique of splitting the available data into training, validation, and test sets that are held fixed, warns about drawing too strong conclusions from such static splits, and shows potential pitfalls of ignoring variability across splits. Using a bootstrap or resampling method, we compare the uncertainty in the solution stemming from the data splitting with neural network specific uncertainties (parameter initialization, choice of number of hidden units, etc.). We present two results on data from the New York Stock Exchange. First, the variation due to different resamplings is significantly larger than the variation due to different network conditions. This result implies that it is important to not over-interpret a model (or an ensemble of models) estimated on one specific split of the data. Second, on each split, the neural network solution with early stopping is very close to a linear model; no significant nonlinearities are extracted. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,365
test
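The bootstrap comparison the target paper makes can be mimicked in a few lines: draw many resampled train/test splits and report the spread of test errors instead of trusting one fixed split. The linear model and synthetic data below are illustrative assumptions standing in for the networks and stock data in the paper.

```python
import numpy as np

# Bootstrap train / out-of-bag test splits; report spread of test errors.
rng = np.random.default_rng(6)
X = rng.normal(size=(300, 4))
y = X @ np.array([0.5, -0.2, 0.0, 0.1]) + 0.1 * rng.normal(size=300)

errs = []
for _ in range(100):
    idx = rng.integers(0, len(X), len(X))          # bootstrap training sample
    oob = np.setdiff1d(np.arange(len(X)), idx)     # out-of-bag test set
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    errs.append(np.mean((X[oob] @ w - y[oob]) ** 2))

print(f"test MSE across splits: {np.mean(errs):.4f} +/- {np.std(errs):.4f}")
```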
1-hop neighbor's text information: Generalized update: Belief change in dynamic settings. : Belief revision and belief update have been proposed as two types of belief change serving different purposes. Belief revision is intended to capture changes of an agent's belief state reflecting new information about a static world. Belief update is intended to capture changes of belief in response to a changing world. We argue that both belief revision and belief update are too restrictive; routine belief change involves elements of both. We present a model for generalized update that allows updates in response to external changes to inform the agent about its prior beliefs. This model of update combines aspects of revision and update, providing a more realistic characterization of belief change. We show that, under certain assumptions, the original update postulates are satisfied. We also demonstrate that plain revision and plain update are special cases of our model, in a way that formally verifies the intuition that revision is suitable for static belief change. 1-hop neighbor's text information: An event-based abductive model of update. : The Katsuno and Mendelzon (KM) theory of belief update has been proposed as a reasonable model for revising beliefs about a changing world. However, the semantics of update relies on information which is not readily available. We describe an alternative semantical view of update in which observations are incorporated into a belief set by: a) explaining the observation in terms of a set of plausible events that might have caused that observation; and b) predicting further consequences of those explanations. We also allow the possibility of conditional explanations. We show that this picture naturally induces an update operator conforming to the KM postulates under certain assumptions. However, we argue that these assumptions are not always reasonable, and they restrict our ability to integrate update with other forms of revision when reasoning about action. * Some parts of this report appeared in preliminary form as An Event-Based Abductive Model of Update, Proc. of Tenth Canadian Conf. on AI, Banff, Alta., (1994). 1-hop neighbor's text information: Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. : We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted from the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions. Target text information: Abduction as belief revision. : We propose a model of abduction based on the revision of the epistemic state of an agent. Explanations must be sufficient to induce belief in the sentence to be explained (for instance, some observation), or ensure its consistency with other beliefs, in a manner that adequately accounts for factual and hypothetical sentences.
Our model will generate explanations that nonmonotonically predict an observation, thus generalizing most current accounts, which require some deductive relationship between explanation and observation. It also provides a natural preference ordering on explanations, defined in terms of normality or plausibility. To illustrate the generality of our approach, we reconstruct two of the key paradigms for model-based diagnosis, abductive and consistency-based diagnosis, within our framework. This reconstruction provides an alternative semantics for both and extends these systems to accommodate our predictive explanations and semantic preferences on explanations. It also illustrates how more general information can be incorporated in a principled manner. * Some parts of this paper appeared in preliminary form as Abduction as Belief Revision: A Model of Preferred Explanations, Proc. of Eleventh National Conf. on Artificial Intelligence (AAAI-93), Washington, DC, pp.642-648 (1993). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,318
test
1-hop neighbor's text information: CLONES: A Connectionist Layered Object-oriented NEtwork Simulator. : CLONES is an object-oriented library for constructing, training and utilizing layered connectionist networks. The CLONES library contains all the object classes needed to write a simulator with a small amount of added source code (examples are included). The size of experimental ANN programs is greatly reduced by using an object-oriented library; at the same time these programs are easier to read, write and evolve. The library includes database, network behavior and training procedures that can be customized by the user. It is designed to run efficiently on data parallel computers (such as the RAP [6] and SPERT [1]) as well as uniprocessor workstations. While efficiency and portability to parallel computers are the primary goals, there are several secondary design goals: 3. allow heterogeneous algorithms and training procedures to be interconnected and trained together. Within these constraints we attempt to maximize the variety of artificial neural network algorithms that can be supported. 1-hop neighbor's text information: Learning topology-preserving maps using self-supervised backpropagation. : Self-supervised backpropagation is an unsupervised learning procedure for feedforward networks, where the desired output vector is identical with the input vector. For backpropagation, we are able to use powerful simulators running on parallel machines. Topology-preserving maps, on the other hand, can be developed by a variant of the competitive learning procedure. However, in a degenerate case, self-supervised backpropagation is a version of competitive learning. A simple extension of the cost function of backpropagation leads to a competitive version of self-supervised backpropagation, which can be used to produce topographic maps. We demonstrate the approach applied to the Traveling Salesman Problem (TSP). The algorithm was implemented using the backpropagation simulator (CLONES) on a parallel machine (RAP). Target text information: Software for ANN training on a ring array processor. : The design and implementation of software for the Ring Array Processor (RAP), a high performance parallel computer, involved development for three hardware platforms: Sun SPARC workstations, Heurikon MC68020 boards running the VxWorks real-time operating system, and Texas Instruments TMS320C30 DSPs. The RAP now runs in Sun workstations under UNIX and in a VME based system using VxWorks. A flexible set of tools has been provided both to the RAP user and programmer. Primary emphasis has been placed on improving the efficiency of layered artificial neural network algorithms. This was done by providing a library of assembly language routines, some of which use node-custom compilation. An object-oriented RAP interface in C++ is provided that allows programmers to incorporate the RAP as a computational server into their own UNIX applications. For those not wishing to program in C++, a command interpreter has been built that provides interactive and shell-script style RAP manipulation. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,477
val
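The self-supervised backpropagation neighbour above trains a network whose desired output equals its input. A minimal sketch of that idea, a one-hidden-layer autoencoder trained by plain gradient descent, follows; the sizes, learning rate, and binary input data are assumptions, not details from the CLONES or RAP implementations.

```python
import numpy as np

# Minimal sketch of self-supervised backpropagation: a one-hidden-layer
# network trained so the desired output equals the input (an autoencoder).
rng = np.random.default_rng(7)
n, h = 8, 3
W1 = rng.normal(scale=0.3, size=(h, n))
W2 = rng.normal(scale=0.3, size=(n, h))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    x = (rng.random(n) > 0.5).astype(float)
    hid = sigmoid(W1 @ x)
    out = sigmoid(W2 @ hid)
    d_out = (out - x) * out * (1 - out)      # squared-error output deltas
    d_hid = (W2.T @ d_out) * hid * (1 - hid)
    W2 -= 0.5 * np.outer(d_out, hid)
    W1 -= 0.5 * np.outer(d_hid, x)
```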
1-hop neighbor's text information: (1994) "PFSA Modelling of Behavioural Sequences by Evolutionary Programming" in Stonier, R.J. : Behavioural observations can often be described as a sequence of symbols drawn from a finite alphabet. However the inductive inference of such strings by any automated technique to produce models of the data is a nontrivial task. This paper considers modelling of behavioural data using probabilistic finite state automata (PFSAs). There are a number of information-theoretic techniques for evaluating possible hypotheses. The measure used in this paper is the Minimum Message Length (MML) of Wallace. Although attempts have been made to construct PFSA models by incremental addition of substrings using heuristic rules and the MML to give the lowest information cost, the resultant models cannot be shown to be globally optimal. Fogel's Evolutionary Programming can produce globally optimal PFSA models by evolving data structures of arbitrary complexity without the requirement to encode the PFSA into binary strings as in Genetic Algorithms. However, evaluation of PFSAs during the evolution process by the MML of the PFSA alone is not possible since there will be symbols which cannot be consumed by a partially correct solution. It is suggested that the addition of a "can't consume'' symbol to the symbol alphabet obviates this difficulty. The addition of this null symbol to the alphabet also permits the evolution of explanatory models which need not explain all of the data, a useful property to avoid overfitting noisy data. Results are given for a test set for which the optimal pfsa model is known and for a set of eye glance data derived from an instrument panel simulator. Target text information: Assessment of candidate pfsa models induced from symbol datasets: The induction of the optimal finite state machine explanation from symbol strings is known to be at least NP-complete. However, satisfactory approximately optimal explanations may be found by the use of Evolutionary Programming. It has been shown that an information theoretic measure of finite state machine explanations can be used as the fitness function required for the evaluation of candidate explanations during the search for a near-optimal explanation. It is not obvious from the measure which class of explanation will be favoured over others during the search. By empirical studies it is possible to gain some insight into the dimensions the measure is optimising. In general, for probabilistic finite state machines, explanations assessed by a minimum message length estimator with the minimum number of transitions will be favoured over other explanations. The information measure will also favour explanations with uneven distributions of frequencies on transitions from a node suggesting that repeated sequences in symbol strings will be preferred as an explanation. Approximate bounds for acceptance of explanations and the length of string required for induction to be successful are also derived by considerations of the simplest possible and random explanations and their information measure. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
448
val
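The MML scoring both PFSA papers rely on compares machines by total message length. As a simplified illustration only, the sketch below computes just the data part of a two-part message, -sum(log2 p) bits for the symbols a probabilistic FSA emits along one string; the toy two-state machine and its transition probabilities are assumptions, and a full MML score would also include the cost of stating the machine itself.

```python
import numpy as np

# Data part of a two-part MML message for a toy probabilistic FSA.
trans = {  # state -> {symbol: (probability, next state)}
    0: {"a": (0.7, 0), "b": (0.3, 1)},
    1: {"a": (0.4, 0), "b": (0.6, 1)},
}

def data_bits(string, state=0):
    bits = 0.0
    for sym in string:
        p, state = trans[state][sym]
        bits += -np.log2(p)                  # code length for this symbol
    return bits

print(f"data part for 'aabba': {data_bits('aabba'):.2f} bits")
```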
1-hop neighbor's text information: Phonetic classification of TIMIT segments preprocessed with Lyon's cochlear model using a supervised/unsupervised hybrid neural network. : We report results on vowel and stop consonant recognition with tokens extracted from the TIMIT database. Our current system differs from others doing similar tasks in that we do not use any specific time normalization techniques. We use a very detailed biologically motivated input representation of the speech tokens - Lyon's cochlear model as implemented by Slaney [20]. This detailed, high dimensional representation, known as a cochleagram, is classified by either a back-propagation or by a hybrid supervised/unsupervised neural network classifier. The hybrid network is composed of a biologically motivated unsupervised network and a supervised back-propagation network. This approach produces results comparable to those obtained by others without the addition of time normalization. 1-hop neighbor's text information: On the Combination of Supervised and Unsupervised Learning reducing the overall error measure of a classifier.: 1-hop neighbor's text information: Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. : In this paper, we present an objective function formulation of the BCM theory of visual cortical plasticity that permits us to demonstrate the connection between the unsupervised BCM learning procedure and various statistical methods, in particular, that of Projection Pursuit. This formulation provides a general method for stability analysis of the fixed points of the theory and enables us to analyze the behavior and the evolution of the network under various visual rearing conditions. It also allows comparison with many existing unsupervised methods. This model has been shown successful in various applications such as phoneme and 3D object recognition. We thus have the striking and possibly highly significant result that a biological neuron is performing a sophisticated statistical procedure. Target text information: Combining exploratory projection pursuit and projection pursuit regression with application to neural networks, 1991. : We present a novel classification and regression method that combines exploratory projection pursuit (unsupervised training) with projection pursuit regression (supervised training), to yield a new family of cost/complexity penalty terms. Some improved generalization properties are demonstrated on real world problems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,469
test
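Exploratory projection pursuit, central to this record, searches for directions whose 1-D projections depart most from Gaussianity. The sketch below scores candidate unit directions by excess kurtosis and keeps the best found by random search; the projection index, the search strategy, and the two-cluster data are illustrative assumptions rather than the method of the target paper.

```python
import numpy as np

# Exploratory projection pursuit by random search over unit directions,
# scored by departure from Gaussian kurtosis.
rng = np.random.default_rng(8)
X = np.vstack([rng.normal(size=(200, 5)),
               rng.normal(size=(200, 5)) + [3, 0, 0, 0, 0]])  # hidden structure
X = X - X.mean(axis=0)

def projection_index(w):
    z = X @ (w / np.linalg.norm(w))
    z = (z - z.mean()) / z.std()
    return abs((z ** 4).mean() - 3.0)        # excess kurtosis of the projection

best = max((rng.normal(size=5) for _ in range(500)), key=projection_index)
```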
1-hop neighbor's text information: CLONES: A Connectionist Layered Object-oriented NEtwork Simulator. : CLONES is an object-oriented library for constructing, training and utilizing layered connectionist networks. The CLONES library contains all the object classes needed to write a simulator with a small amount of added source code (examples are included). The size of experimental ANN programs is greatly reduced by using an object-oriented library; at the same time these programs are easier to read, write and evolve. The library includes database, network behavior and training procedures that can be customized by the user. It is designed to run efficiently on data parallel computers (such as the RAP [6] and SPERT [1]) as well as uniprocessor workstations. While efficiency and portability to parallel computers are the primary goals, there are several secondary design goals: 3. allow heterogeneous algorithms and training procedures to be interconnected and trained together. Within these constraints we attempt to maximize the variety of artificial neural network algorithms that can be supported. Target text information: A Symbolic Complexity Analysis of Connectionist Algorithms for Distributed-Memory Machines: This paper attempts to rigorously determine the computation and communication requirements of connectionist algorithms running on a distributed-memory machine. The strategy involves (1) specifying key connectionist algorithms in a high-level object-oriented language, (2) extracting their running times as polynomials, and (3) analyzing these polynomials to determine the algorithms' space and time complexity. Results are presented for various implementations of the back-propagation algorithm [4]. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
716
test
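The target paper extracts running times as polynomials in the network dimensions. As a rough illustration of that style of analysis, here is a multiply-accumulate count for one forward plus backward pass through a fully connected network at batch size 1; the factor of two for the backward pass is a common rule of thumb, an assumption rather than a result of the paper.

```python
# Operation count of backpropagation as a polynomial in the layer sizes.
def backprop_flops(layers):
    fwd = sum(2 * a * b for a, b in zip(layers, layers[1:]))
    return fwd + 2 * fwd                     # forward + (deltas and weight gradients)

print(backprop_flops([256, 128, 10]))        # e.g. a 256-128-10 network
```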
1-hop neighbor's text information: On estimation of a probability density function and mode. : To apply the algorithm for classification we assign each class a separate set of codebook Gaussians. Each set is only trained with patterns from a single class. After having trained the codebook Gaussians, each set provides an estimate of the probability function of one class; just as with Parzen window estimation, we take as the estimate of the pattern distribution the average of all Gaussians in the set. Classification of a pattern may now be done by calculating the probability of each class at the respective sample point, and assigning to the pattern the class with the highest probability. Hence the whole codebook plays a role in the classification of patterns. This is not the case with regular classification schemes using codebooks. We have tested the classification scheme on several classification tasks including the two spiral problem. We compared our algorithm to various other classification algorithms and it came out second; the best algorithm for the applications is the Parzen window estimation. However, the computing time and memory for Parzen window estimation are excessive when compared to our algorithm, and hence, in practical situations, our algorithm is to be preferred. We have developed a fast algorithm which combines attractive properties of both Parzen window estimation and vector quantization. The scale parameter is tuned adaptively and, therefore, is not set in an ad hoc manner. It allows a classification strategy in which all the codebook vectors are taken into account. This yields better results than the standard vector quantization techniques. An interesting topic for further research is to use radially non-symmetric Gaussians. 1-hop neighbor's text information: A Fast Non-Parametric Density Estimation Algorithm: Non-parametric density estimation is the problem of approximating the values of a probability density function, given samples from the associated distribution. Non-parametric estimation finds applications in discriminant analysis, cluster analysis, and flow calculations based on Smoothed Particle Hydrodynamics. Usual estimators make use of kernel functions, and require on the order of n^2 arithmetic operations to evaluate the density at n sample points. We describe a sequence of special weight functions which requires almost linear number of operations in n for the same computation. Target text information: Efficient Nonparametric Estimation of Probability Density Functions. : Accurate and fast estimation of probability density functions is crucial for satisfactory computational performance in many scientific problems. When the type of density is known a priori, then the problem becomes statistical estimation of parameters from the observed values. In the non-parametric case, usual estimators make use of kernel functions. If X_j, j = 1, 2, ..., n is a sequence of i.i.d. random variables with estimated probability density function f_n, in the kernel method the computation of the values f_n(X_1), f_n(X_2), ..., f_n(X_n) requires O(n^2) operations, since each kernel needs to be evaluated at every X_j. We propose a sequence of special weight functions for the non-parametric estimation of f which requires almost linear time: if m is a slowly growing function that increases without bound with n, our method requires only O(m^2 n) arithmetic operations.
We derive conditions for convergence under a number of metrics, which turn out to be similar to those required for the convergence of kernel based methods. We also discuss experiments on different distributions and compare the efficiency and the accuracy of our computations with kernel based estimators for various values of n and m. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,162
test
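For reference, the O(n^2) kernel-method baseline that the target paper improves on can be written down directly. A minimal sketch of Gaussian Parzen-window estimation evaluated at every sample point; the bandwidth h is a free parameter chosen here by hand, not the adaptively tuned scale of the codebook method above:

```python
import math

def parzen_at_samples(xs, h):
    """Gaussian Parzen-window estimate f_n evaluated at each sample X_j.
    Every kernel is evaluated at every point, hence O(n^2) operations."""
    n = len(xs)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((x - xj) / h) ** 2) for xj in xs)
            for x in xs]

print(parzen_at_samples([0.0, 0.1, 0.9, 1.0], h=0.3))
```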
1-hop neighbor's text information: The origins of Inductive Logic Programming: A prehistoric tale. : This paper traces the development of the main ideas that have led to the present state of knowledge in Inductive Logic Programming. The story begins with research in psychology on the subject of human concept learning. Results from this research influenced early efforts in Artificial Intelligence which combined with the formal methods of inductive inference to evolve into the present discipline of Inductive Logic Programming. Inductive Logic Programming is often considered to be a young discipline. However, it has its roots in research dating back nearly 40 years. This paper traces the development of ideas beginning in psychology and the effect they had on concept learning research in Artificial Intelligence. Independent of any requirement for a psychological basis, formal methods of inductive inference were developed. These separate streams eventually gave rise to Inductive Logic Programming. This account is not entirely unbiased. More attention is given to the work of those researchers who most influenced my own interest in machine learning. Being a retrospective paper, I do not attempt to describe recent developments in ILP. This account only includes research prior to 1991, the year in which the term Inductive Logic Programming was first used (Muggleton, 1991). This is the reason for the subtitle A Prehistoric Tale. The major headings in the paper are taken from the names of periods in the evolution of life on Earth. 1-hop neighbor's text information: Learning acyclic first-order horn sentences from entailment. : In this paper, we consider learning first-order Horn programs from entailment. In particular, we show that any subclass of first-order acyclic Horn programs with constant arity is exactly learnable from equivalence and entailment membership queries provided it allows a polynomial-time subsumption procedure and satisfies some closure conditions. One consequence of this is that first-order acyclic determinate Horn programs with constant arity are exactly learnable from equivalence and entailment membership queries. 1-hop neighbor's text information: Inductive Logic Programming: A new research area, Inductive Logic Programming, is presently emerging. While inheriting various positive characteristics of the parent subjects of Logic Programming and Machine Learning, it is hoped that the new area will overcome many of the limitations of its forebears. The background to present developments within this area is discussed and various goals and aspirations for the increasing body of researchers are identified. Inductive Logic Programming needs to be based on sound principles from both Logic and Statistics. On the side of statistical justification of hypotheses we discuss the possible relationship between Algorithmic Complexity theory and Probably-Approximately-Correct (PAC) Learning. In terms of logic we provide a unifying framework for Muggleton and Buntine's Inverse Resolution (IR) and Plotkin's Relative Least General Generalisation (RLGG) by rederiving RLGG in terms of IR. This leads to a discussion of the feasibility of extending the RLGG framework to allow for the invention of new predicates, previously discussed only within the context of IR. Target text information: Learning concepts by asking questions. In R.S.
: Two important issues in machine learning are explored: the role that memory plays in acquiring new concepts; and the extent to which the learner can take an active part in acquiring these concepts. This chapter describes a program, called Marvin, which uses concepts it has learned previously to learn new concepts. The program forms hypotheses about the concept being learned and tests the hypotheses by asking the trainer questions. Learning begins when the trainer shows Marvin an example of the concept to be learned. The program determines which objects in the example belong to concepts stored in the memory. A description of the new concept is formed by using the information obtained from the memory to generalize the description of the training example. The generalized description is tested when the program constructs new examples and shows these to the trainer, asking if they belong to the target concept. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
2,362
test
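Marvin's loop of generalising a hypothesis and checking it by questioning the trainer can be illustrated on the simplest possible concept class. A hedged sketch, assuming (unlike Marvin's richer stored-concept memory) that the target concept is a conjunction over boolean attributes and that is_member is the trainer answering membership questions:

```python
def learn_conjunction(example, is_member):
    """Query-driven generalisation from one positive example: try to drop
    each attribute by flipping it; if the trainer still accepts the probe,
    the attribute is irrelevant and the hypothesis is generalised."""
    hypothesis = dict(example)
    for attr in list(hypothesis):
        probe = dict(example)
        probe[attr] = not probe[attr]
        if is_member(probe):      # trainer says the probe is still an instance
            del hypothesis[attr]  # so this attribute is not part of the concept
    return hypothesis

trainer = lambda z: z["red"] and z["round"]  # hidden target concept
print(learn_conjunction({"red": True, "round": True, "big": True}, trainer))
# -> {'red': True, 'round': True}
```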
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Tracing the Behavior of Genetic Algorithms Using Expected Values of Bit and Walsh Products: We consider two methods for tracing genetic algorithms. The first method is based on the expected values of bit products and the second method on the expected values of Walsh products. We treat proportional selection, mutation and uniform and one-point crossover. As applications, we obtain results on stable points and fitness of schemata. Target text information: Convergence analysis of canonical genetic algorithms. : This paper analyzes the convergence properties of the canonical genetic algorithm (CGA) with mutation, crossover and proportional reproduction applied to static optimization problems. It is proved by means of homogeneous finite Markov chain analysis that a CGA will never converge to the global optimum regardless of the initialization, crossover operator and objective function. But variants of CGAs that always maintain the best solution in the population, either before or after selection, are shown to converge to the global optimum due to the irreducibility property of the underlying original nonconvergent CGA. These results are discussed with respect to the schema theorem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,048
test
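The paper's positive result, that always keeping the best solution makes a canonical GA globally convergent, is easy to exercise empirically. A minimal sketch with proportional selection, one-point crossover and bitwise mutation; fitness is assumed non-negative, and all parameter values are hypothetical defaults:

```python
import random

def elitist_ga(fitness, n_bits=20, pop_size=30, generations=200,
               p_mut=0.01, p_cross=0.7):
    """Canonical GA modified to always retain the best individual found so
    far (elitism), the variant shown to converge to the global optimum."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        total = sum(fitness(ind) for ind in pop)

        def select():  # roulette-wheel (proportional) selection
            r, acc = random.uniform(0, total), 0.0
            for ind in pop:
                acc += fitness(ind)
                if acc >= r:
                    return ind
            return pop[-1]

        nxt = []
        while len(nxt) < pop_size:
            a, b = select()[:], select()[:]
            if random.random() < p_cross:  # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                nxt.append([bit ^ (random.random() < p_mut) for bit in child])
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)
        pop[0] = best[:]  # elitism: reinsert the best-so-far
    return best

print(sum(elitist_ga(sum)))  # onemax: should reach (or approach) n_bits
```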
1-hop neighbor's text information: Neural nets as systems models and controllers. : This paper briefly surveys some recent results relevant 1-hop neighbor's text information: Identification and control of nonlinear systems using neural network models: Design and stability analysis. : Report 91-09-01 September 1991 (revised) May 1994 1-hop neighbor's text information: Some canonical properties of nonlinear systems, in Robust Control of Linear Systems and Nonlinear Control, M.A. Kaashoek, : This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. Target text information: Feedback stabilization using two-hidden-layer nets. : This paper compares the representational capabilities of one hidden layer and two hidden layer nets consisting of feedforward interconnections of linear threshold units. It is remarked that for certain problems two hidden layers are required, contrary to what might be in principle expected from the known approximation theorems. The differences are not based on numerical accuracy or number of units needed, nor on capabilities for feature extraction, but rather on a much more basic classification into "direct" and "inverse" problems. The former correspond to the approximation of continuous functions, while the latter are concerned with approximating one-sided inverses of continuous functions, and are often encountered in the context of inverse kinematics determination or in control questions. A general result is given showing that nonlinear control systems can be stabilized using two hidden layers, but not in general using just one. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,901
test
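To make the architecture class concrete: a forward pass through a feedforward net of linear threshold (Heaviside) units with two hidden layers, the class the target paper shows is sufficient for stabilization. The weights below are arbitrary placeholders for illustration, not a stabilizing controller:

```python
def step(x):
    return 1.0 if x >= 0 else 0.0

def two_hidden_layer_net(x, W1, b1, W2, b2, w3, b3):
    """Two hidden layers of threshold units feeding one linear output."""
    h1 = [step(sum(w * xi for w, xi in zip(row, x)) + b)
          for row, b in zip(W1, b1)]
    h2 = [step(sum(w * hi for w, hi in zip(row, h1)) + b)
          for row, b in zip(W2, b2)]
    return sum(w * hi for w, hi in zip(w3, h2)) + b3

print(two_hidden_layer_net([0.5, -1.0],
                           W1=[[1.0, 0.0], [0.0, 1.0]], b1=[0.0, 0.0],
                           W2=[[1.0, -1.0]], b2=[0.5],
                           w3=[2.0], b3=-1.0))  # -> 1.0
```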
1-hop neighbor's text information: Combining neural and symbolic learning to revise probabilistic rule bases. : This paper describes Rapture, a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a probabilistic rule base and it uses ID3's information-gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods. 1-hop neighbor's text information: Theory refinement combining analytical and empirical methods. : This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis. 1-hop neighbor's text information: Automated refinement of first-order horn-clause domain theories. : Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. Forte uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. Forte is demonstrated in several domains, including logic programming and qualitative modelling. Target text information: Revising Bayesian network parameters using backpropagation. : The problem of learning Bayesian networks with hidden variables is known to be a hard problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is hard. In this paper, we present an approach that learns the conditional probabilities on a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The conditional probabilities are mapped onto weights in the ANN, which are then learned using standard backpropagation techniques. To avoid the problem of exponentially large ANNs, we focus on Bayesian networks with noisy-or and noisy-and nodes. Experiments on real world classification problems demonstrate the effectiveness of our technique.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,168
val
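The noisy-or parameterisation that keeps the constructed ANN small is simple to state: each active parent independently fails to trigger the child. A minimal sketch; q[i] is parent i's activation probability, the kind of parameter the target paper maps onto a network weight and trains with backpropagation (noisy-and is the obvious dual):

```python
def noisy_or(q, x):
    """P(child = 1 | parent states x): the child stays off only if every
    active parent independently fails, with failure probability 1 - q[i]."""
    p_all_fail = 1.0
    for qi, xi in zip(q, x):
        if xi:
            p_all_fail *= (1.0 - qi)
    return 1.0 - p_all_fail

print(noisy_or([0.9, 0.7], [1, 1]))  # 1 - 0.1*0.3 = 0.97
```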
1-hop neighbor's text information: Using dirichlet mixture priors to derive hidden Markov models for protein families. : A Bayesian method for estimating the amino acid distributions in the states of a hidden Markov model (HMM) for a protein family or the columns of a multiple alignment of that family is introduced. This method uses Dirichlet mixture densities as priors over amino acid distributions. These mixture densities are determined from examination of previously constructed HMMs or multiple alignments. It is shown that this Bayesian method can improve the quality of HMMs produced from small training sets. Specific experiments on the EF-hand motif are reported, for which these priors are shown to produce HMMs with higher likelihood on unseen data, and fewer false positives and false negatives in a database search task. Target text information: Optimal Alignments in Linear Space using Automaton-derived Cost Functions (Extended Abstract) Submitted to CPM'96: In a previous paper [SM95], we showed how finite automata could be used to define objective functions for assessing the quality of an alignment of two (or more) sequences. In this paper, we show some results of using such cost functions. We also show how to extend Hirschberg's linear space algorithm [Hir75] to this setting, thus generalizing a result of Myers and Miller [MM88b]. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
82
test
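The space-saving trick behind Hirschberg's algorithm, which the target paper extends to automaton-derived costs, is that one row of the alignment score matrix determines the next. A minimal sketch of its core subroutine under a plain match/mismatch/gap scoring scheme (not the paper's automaton-derived objective):

```python
def last_row_scores(a, b, match=1, mismatch=-1, gap=-1):
    """Final row of the Needleman-Wunsch score matrix, kept in O(len(b))
    space by retaining only two rows at a time."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        cur = [i * gap]
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur.append(max(prev[j - 1] + s,    # substitution
                           prev[j] + gap,      # gap in b
                           cur[j - 1] + gap))  # gap in a
        prev = cur
    return prev

print(last_row_scores("GATTACA", "GCATGCU"))
```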
1-hop neighbor's text information: Embodiment and manipulation learning process for a humanoid hand. : Target text information: Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot, : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,187
val
1-hop neighbor's text information: Evolving Optimal Populations with XCS Classifier Systems, : 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. : Target text information: A Package of Domain Independent Subroutines for Implementing Classifier Systems in Arbitrary, User-Defined Environments." Logic of Computers Group, : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,460
test
1-hop neighbor's text information: Martinez (1993). Using Precepts to Augment Training Set Learning. : are used in turn to approximate A. Empirical studies show that good results can be achieved with TSL [8, 11]. However, TSL has several drawbacks. Training set learners (e.g., backpropagation) are typically slow as they may require many passes over the training set. Also, there is no guarantee that, given an arbitrary training set, the system will find enough good critical features to get a reasonable approximation of A. Moreover, the number of features to be searched is exponential in the number of inputs, and TSL becomes computationally expensive [1]. Finally, the scarcity of interesting positive theoretical results suggests the difficulty of learning without sufficient a priori knowledge. The goal of learning systems is to generalize. Generalization is commonly based on the set of critical features the system has available. Training set learners typically extract critical features from a random set of examples. While this approach is attractive, it suffers from the exponential growth of the number of features to be searched. We propose to extend it by endowing the system with some a priori knowledge, in the form of precepts. Advantages of the augmented system are speedup, improved generalization, and greater parsimony. This paper presents a precept-driven learning algorithm. Its main features include: 1) distributed implementation, 2) bounded learning and execution times, and 3) ability to handle both correct and incorrect precepts. Results of simulations on real-world data demonstrate promise. This paper presents precept-driven learning (PDL). PDL is intended to overcome some of TSL's weaknesses. In PDL, the training set is augmented by a small set of precepts. A pair p = (i, o) in I × O is called an example. A precept is an example in which some of the i-entries (inputs) are set to the special value don't-care. An input whose value is not don't-care is said to be asserted; a don't-care input has no effect on the value of the output, so the special value don't-care serves as a shorthand. A pair containing don't-care inputs represents as many examples as the product of the sizes of the input domains of its don't-care inputs. Target text information: An Incremental Learning Model for Commonsense Reasoning. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,893
test
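The don't-care shorthand described above is one line of code. A minimal sketch using None as a hypothetical sentinel for an unasserted input; one precept then covers the whole product of its don't-care input domains:

```python
DONT_CARE = None  # hypothetical sentinel for an unasserted precept input

def precept_covers(precept_inputs, example_inputs):
    """A precept covers an example when every asserted input agrees;
    a DONT_CARE input matches any value."""
    return all(p == DONT_CARE or p == e
               for p, e in zip(precept_inputs, example_inputs))

print(precept_covers((1, DONT_CARE, 0), (1, 0, 0)))  # True
print(precept_covers((1, DONT_CARE, 0), (1, 1, 0)))  # True
print(precept_covers((1, DONT_CARE, 0), (0, 1, 0)))  # False
```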
1-hop neighbor's text information: Back, "Face recognition: a convolutional neural network approach," : Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult [42]. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the self-organizing map, and a multi-layer perceptron in place of the convolutional network. The Karhunen-Loeve transform performs almost as well (5.3% error versus 3.8%). The multi-layer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach [42] on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer. 1-hop neighbor's text information: Improving the performance of radial basis function networks by learning center locations. : 1-hop neighbor's text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks.
GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. Target text information: (1994) "Evaluation of Pattern Classifiers for Fingerprint and OCR Applications," : In this paper we evaluate the classification accuracy of four statistical and three neural network classifiers for two image based pattern classification problems. These are fingerprint classification and optical character recognition (OCR) for isolated handprinted digits. The evaluation results reported here should be useful for designers of practical systems for these two important commercial applications. For the OCR problem, the Karhunen-Loeve (K-L) transform of the images is used to generate the input feature set. Similarly for the fingerprint problem, the K-L transform of the ridge directions is used to generate the input feature set. The statistical classifiers used were Euclidean minimum distance, quadratic minimum distance, normal, and k-nearest neighbor. The neural network classifiers used were multilayer perceptron, radial basis function, and probabilistic. The OCR data consisted of 7,480 digit images for training and 23,140 digit images for testing. The fingerprint data consisted of 2,000 training and 2,000 testing images. In addition to evaluation for accuracy, the multilayer perceptron and radial basis function networks were evaluated for size and generalization capability. For the evaluated datasets the best accuracy obtained for either problem was provided by the probabilistic neural network, where the minimum classification error was 2.5% for OCR and 7.2% for fingerprints. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,668
test
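Both evaluations above feed a Karhunen-Loeve (K-L) transform of the raw inputs into the classifiers. A generic sketch of that front end plus the simplest of the four statistical classifiers, Euclidean minimum distance; this is standard PCA, not the paper's exact pipeline, and class_means is a hypothetical dict mapping each label to the mean K-L feature vector of its training images:

```python
import numpy as np

def kl_transform(X, k):
    """Project centred data onto the top-k eigenvectors of the sample
    covariance (the Karhunen-Loeve / PCA feature extraction)."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    basis = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k, descending
    return (X - mu) @ basis, mu, basis

def min_distance_classify(z, class_means):
    """Euclidean minimum-distance rule: pick the class whose mean feature
    vector is nearest to z."""
    return min(class_means, key=lambda c: np.linalg.norm(z - class_means[c]))
```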
1-hop neighbor's text information: "Evolution in Time and Space: The Parallel Genetic Algorithm." In Foundations of Genetic Algorithms, : The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed. Individuals live in a 2-D world. Selection of a mate is done by each individual independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime by e.g. local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We will investigate the PGA with deceptive problems and the traveling salesman problem. We outline why and when the PGA is successful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see that a PGA tries to jump from two local minima to a third, still better local minimum, by using the crossover operator. This jump is (probabilistically) successful if the fitness landscape has a certain correlation. We show the correlation for the traveling salesman problem by a configuration space analysis. The PGA explores implicitly the above correlation. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. Target text information: S.F. Commonality and Genetic Algorithms. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,000
test
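The PGA's distributed selection is purely local. A minimal sketch of one individual on a toroidal 2-D grid choosing the fittest of its four immediate neighbours as a mate; the grid shape and neighbourhood are hypothetical simplifications of the paper's 2-D world:

```python
def neighbourhood_mate(grid, i, j, fitness):
    """Distributed mate selection: individual (i, j) looks only at its four
    torus neighbours, with no global selection step."""
    rows, cols = len(grid), len(grid[0])
    neighbours = [grid[(i - 1) % rows][j], grid[(i + 1) % rows][j],
                  grid[i][(j - 1) % cols], grid[i][(j + 1) % cols]]
    return max(neighbours, key=fitness)

grid = [[(0, 1), (1, 1)], [(1, 0), (0, 0)]]          # toy 2x2 torus of bit pairs
print(neighbourhood_mate(grid, 0, 0, fitness=sum))   # -> (1, 1)
```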
1-hop neighbor's text information: A yardstick for the evaluation of case-based classifiers. : This paper proposes that the generalisation capabilities of a case-based reasoning system can be evaluated by comparison with a `rote-learning' algorithm which uses a very simple generalisation strategy. Two such algorithms are defined, and expressions for their classification accuracy are derived as a function of the size of training sample. A series of experiments using artificial and `natural' data sets is described in which the learning curve for a case-based learner is compared with those for the apparently trivial rote-learning algorithms. The results show that in a number of `plausible' situations, the learning curves for a simple case-based learner and the `majority' rote-learner can barely be distinguished, although a domain is demonstrated where favourable performance from the case-based learner is observed. This suggests that the maxim of case-based reasoning that `similar problems have similar solutions' may be useful as the basis of a generalisation strategy only in selected domains. 1-hop neighbor's text information: PAC analyses of a `similarity learning' IBL algorithm. : 1-hop neighbor's text information: Formalising the knowledge content of case memory systems. : Discussions of case-based reasoning often reflect an implicit assumption that a case memory system will become better informed, i.e. will increase in knowledge, as more cases are added to the case-base. This paper considers formalisations of this `knowledge content' which are a necessary preliminary to more rigorous analysis of the performance of case-based reasoning systems. In particular we are interested in modelling the learning aspects of case-based reasoning in order to study how the performance of a case-based reasoning system changes as it accumulates problem-solving experience. The current paper presents a `case-base semantics' which generalises recent formalisations of case-based classification. Within this framework, the paper explores various issues in assuring that these semantics are well-defined, and illustrates how the knowledge content of the case memory system can be seen to reside in both the chosen similarity measure and in the cases of the case-base. Target text information: Towards a Theory of Optimal Similarity Measures: The effectiveness of a case-based reasoning system is known to depend critically on its similarity measure. However, it is not clear whether there are elusive and esoteric similarity measures which might improve the performance of a case-based reasoner if substituted for the more commonly used measures. This paper therefore deals with the problem of choosing the best similarity measure, in the limited context of instance-based learning of classifications of a discrete example space. We consider both `fixed' similarity measures and `learnt' ones. In the former case, we give a definition of a similarity measure which we believe to be `optimal' w.r.t. the current prior distribution of target concepts and prove its optimality within a restricted class of similarity measures. We then show how this `optimal' similarity measure is instantiated by some specific prior distributions, and conclude that a very simple similarity measure is as good as any other in these cases.
In a further section, we then show how our definition leads naturally to a conjecture about the way of learning a similarity measure from the training data. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
475
test
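The "very simple similarity measure" the target paper ends up endorsing can be as plain as counting agreeing attributes. A minimal sketch of that measure driving 1-nearest-neighbour classification over a discrete example space; the tuple representation of cases is hypothetical:

```python
def overlap_similarity(x, y):
    """Count the attributes on which two discrete examples agree."""
    return sum(1 for a, b in zip(x, y) if a == b)

def classify_1nn(query, case_base):
    """Return the class of the stored case most similar to the query
    (ties broken by list order)."""
    _case, label = max(case_base,
                       key=lambda c: overlap_similarity(query, c[0]))
    return label

cases = [((1, 0, 1), "pos"), ((0, 0, 0), "neg")]
print(classify_1nn((1, 1, 1), cases))  # -> "pos" (agrees on 2 of 3 attrs)
```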