| content (stringlengths 633–9.91k) | label (stringclasses, 7 values) | category (stringclasses, 7 values) | dataset (stringclasses, 1 value) | node_id (int64, 0–2.71k) | split (stringclasses, 3 values) |
|---|---|---|---|---|---|
1-hop neighbor's text information: Chain graphs for learning. :
1-hop neighbor's text information: DYNAMIC CONDITIONAL INDEPENDENCE MODELS AND MARKOV CHAIN MONTE CARLO METHODS:
1-hop neighbor's text information: Decomposable Graphical Gaussian Model Determination. : We propose a methodology for Bayesian model determination in decomposable graphical Gaussian models. To achieve this aim we consider a hyper inverse Wishart prior distribution on the concentration matrix for each given graph. To ensure compatibility across models, such prior distributions are obtained by marginalisation from the prior conditional on the complete graph. We explore alternative structures for the hyperparameters of the latter, and their consequences for the model. Model determination is carried out by implementing a reversible jump MCMC sampler. In particular, the dimension-changing move we propose involves adding or dropping an edge from the graph. We characterise the set of moves which preserve the decomposability of the graph, giving a fast algorithm for maintaining the junction tree representation of the graph at each sweep. As state variable, we propose to use the incomplete variance-covariance matrix, containing only the elements for which the corresponding element of the inverse is nonzero. This allows all computations to be performed locally, at the clique level, which is a clear advantage for the analysis of large and complex data-sets. Finally, the statistical and computational performance of the procedure is illustrated by means of both artificial and real multidimensional data-sets.
Target text information: Graphical Models in Applied Multivariate Statistics. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 1,125 | val |
1-hop neighbor's text information: Robust performance and adaptation using receding horizon H∞ control of time varying systems. : In this paper we construct suboptimal H∞ controllers which satisfy a new robust performance condition, using the receding horizon technique. A method is described for the synthesis of H∞ controllers online, making use of the exact plant model only on a finite interval extending into the future. Inequalities based on the two Riccati differential equation solution to the finite horizon H∞ problem are derived, and the resulting freedom is exploited to construct H∞ controllers which have a closed loop induced norm less than a prespecified value for all plants within a set, which is described in terms of the future variation of the plant. Dual results, with a possible adaptive interpretation, are also constructed.
Target text information: A game theoretic approach to moving horizon control. : A control law is constructed for a linear time varying system by solving a two player zero sum differential game on a moving horizon, the game being that which is used to construct an H∞ controller on a finite horizon. Conditions are given under which this controller results in a stable system and satisfies an infinite horizon H∞ norm bound. A risk sensitive formulation is used to provide a state estimator in the observation feedback case.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 1,185 | test |
1-hop neighbor's text information: Markov games as a framework for multi-agent reinforcement learning. : In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
1-hop neighbor's text information: Multi-agent reinforcement learning: independent vs. : Intelligent human agents exist in a cooperative social environment that facilitates learning. They learn not only by trial-and-error, but also through cooperation by sharing instantaneous information, episodic experience, and learned knowledge. The key investigations of this paper are, "Given the same number of reinforcement learning agents, will cooperative agents outperform independent agents who do not communicate during learning?" and "What is the price for such cooperation?" Using independent agents as a benchmark, cooperative agents are studied in following ways: (1) sharing sensation, (2) sharing episodes, and (3) sharing learned policies. This paper shows that (a) additional sensation from another agent is beneficial if it can be used efficiently, (b) sharing learned policies or episodes among agents speeds up learning at the cost of communication, and (c) for joint tasks, agents engaging in partnership can significantly outperform independent agents although they may learn slowly in the beginning. These tradeoffs are not just limited to multi-agent reinforcement learning.
1-hop neighbor's text information: Learning to use selective attention and short-term memory in sequential tasks. : This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or memory-based) learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates only task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [Ron et al., 1994], Parti-game [Moore, 1993], G-algorithm [Chapman and Kaelbling, 1991], and Variable Resolution Dynamic Programming [Moore, 1991]. It builds on Utile Suffix Memory [McCallum, 1995c], which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with simulated eye movements. The environment has hidden state, time pressure, stochasticity, over 21,000 world states and over 2,500 percepts. From this environment and sensory system, the agent uses a utile distinction test to build a tree that represents depth-three memory where necessary, and has just 143 internal states, far fewer than the 2500³ states that would have resulted from a fixed-sized history-window approach.
Target text information: Using Communication to Reduce Locality in Distributed Multi-Agent Learning. : This paper attempts to bridge the fields of machine learning, robotics, and distributed AI. It discusses the use of communication in reducing the undesirable effects of locality in fully distributed multi-agent systems with multiple agents/robots learning in parallel while interacting with each other. Two key problems, hidden state and credit assignment, are addressed by applying local undirected broadcast communication in a dual role: as sensing and as reinforcement. The methodology is demonstrated on two multi-robot learning experiments. The first describes learning a tightly-coupled coordination task with two robots, the second a loosely-coupled task with four robots learning social rules. Communication is used to share sensory data to overcome hidden state and reinforcement to overcome the credit assignment problem between the agents and to bridge the gap between local and global payoff.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 2,615 | val |
1-hop neighbor's text information: Segmenting Time Series using Gated Experts with Simulated Annealing: Many real-world time series are multi-stationary, where the underlying data generating process (DGP) switches between different stationary subprocesses, or modes of operation. An important problem in modeling such systems is to discover the underlying switching process, which entails identifying the number of subprocesses and the dynamics of each subprocess. For many time series, this problem is ill-defined, since there are often no obvious means to distinguish the different subprocesses. We discuss the use of nonlinear gated experts to perform the segmentation and system identification of the time series. Unlike standard gated experts methods, however, we use concepts from statistical physics to enhance the segmentation for high-noise problems where only a few experts are required.
1-hop neighbor's text information: Analysis of Drifting Dynamics with Neural Network Hidden Markov Models: We present a method for the analysis of nonstationary time series with multiple operating modes. In particular, it is possible to detect and to model both a switching of the dynamics and a less abrupt, time consuming drift from one mode to another. This is achieved in two steps. First, an unsupervised training method provides prediction experts for the inherent dynamical modes. Then, the trained experts are used in a hidden Markov model that allows to model drifts. An application to physiological wake/sleep data demonstrates that analysis and modeling of real-world time series can be improved when the drift paradigm is taken into account.
1-hop neighbor's text information: Nonlinear Prediction of Chaotic Time Series. : A novel method for regression has been recently proposed by V. Vapnik et al. [8, 9]. The technique, called Support Vector Machine (SVM), is very well founded from the mathematical point of view and seems to provide a new insight in function approximation. We implemented the SVM and tested it on the same data base of chaotic time series that was used in [1] to compare the performances of different approximation techniques, including polynomial and rational approximation, local polynomial techniques, Radial Basis Functions, and Neural Networks. The SVM performs better than the approaches presented in [1]. We also study, for a particular time series, the variability in performance with respect to the few free parameters of SVM.
Target text information: Annealed competition of experts for a segmentation and classification of switching dynamics. : We present a method for the unsupervised segmentation of data streams originating from different unknown sources which alternate in time. We use an architecture consisting of competing neural networks. Memory is included in order to resolve ambiguities of input-output relations. In order to obtain maximal specialization, the competition is adiabatically increased during training. Our method achieves almost perfect identification and segmentation in the case of switching chaotic dynamics where input manifolds overlap and input-output relations are ambiguous. Only a small dataset is needed for the training procedure. Applications to time series from complex systems demonstrate the potential relevance of our approach for time series analysis and short-term prediction.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,080 | val |
1-hop neighbor's text information: W.S. Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods. : One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
1-hop neighbor's text information: Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm. : This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification (EG2, CS-ID3, and IDX) and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET's search in bias space and discovers a way to improve the search.
1-hop neighbor's text information: Experiments with a New Boosting Algorithm. : In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
Target text information: Boosting Trees for Cost-Sensitive Classifications: This paper explores two boosting techniques for cost-sensitive tree classification in the situation where misclassification costs change very often. Ideally, one would like to have only one induction, and use the induced model for different misclassification costs. Thus, it demands robustness of the induced model against cost changes. Combining multiple trees gives robust predictions against this change. We demonstrate that ordinary boosting combined with the minimum expected cost criterion to select the prediction class is a good solution under this situation. We also introduce a variant of the ordinary boosting procedure which utilizes the cost information during training. We show that the proposed technique performs better than the ordinary boosting in terms of misclassification cost. However, this technique requires to induce a set of new trees every time the cost changes. Our empirical investigation also reveals some interesting behavior of boosting decision trees for cost-sensitive classification.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 194 | test |
1-hop neighbor's text information: The Structure-Mapping Engine: Algorithms and Examples. : This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a "tool kit" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O(N²). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp 1-63. For more information, please contact [email protected]
Target text information: Adapting Abstract Knowledge: For a case-based reasoner to use its knowledge flexibly, it must be equipped with a powerful case adapter. A case-based reasoner can only cope with variation in the form of the problems it is given to the extent that its cases in memory can be efficiently adapted to fit a wide range of new situations. In this paper, we address the task of adapting abstract knowledge about planning to fit specific planning situations. First we show that adapting abstract cases requires reconciling incommensurate representations of planning situations. Next, we describe a representation system, a memory organization, and an adaptation process tailored to this requirement. Our approach is implemented in brainstormer, a planner that takes abstract advice.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 845 | train |
1-hop neighbor's text information: Scaling reinforcement learning algorithms by learning variable temporal resolution models. : The close connection between reinforcement learning (RL) algorithms and dynamic programming algorithms has fueled research on RL within the machine learning community. Yet, despite increased theoretical understanding, RL algorithms remain applicable to simple tasks only. In this paper I use the abstract framework afforded by the connection to dynamic programming to discuss the scaling issues faced by RL researchers. I focus on learning agents that have to learn to solve multiple structured RL tasks in the same environment. I propose learning abstract environment models where the abstract actions represent "intentions" of achieving a particular state. Such models are variable temporal resolution models because in different parts of the state space the abstract actions span different number of time steps. The operational definitions of abstract actions can be learned incrementally using repeated experience at solving RL tasks. I prove that under certain conditions solutions to new RL tasks can be found by using simulated experience with abstract actions alone.
1-hop neighbor's text information: Improving generalization for temporal difference learning: : We provide analytical expressions governing changes to the bias and variance of the lookup table estimators provided by various Monte Carlo and temporal difference value estimation algorithms with offline updates over trials in absorbing Markov reward processes. We have used these expressions to develop software that serves as an analysis tool: given a complete description of a Markov reward process, it rapidly yields an exact mean-square-error curve, the curve one would get from averaging together sample mean-square-error curves from an infinite number of learning trials on the given problem. We use our analysis tool to illustrate classes of mean-square-error curve behavior in a variety of example reward processes, and we show that although the various temporal difference algorithms are quite sensitive to the choice of step-size and eligibility-trace parameters, there are values of these parameters that make them similarly competent, and generally good.
1-hop neighbor's text information: TD models: modeling the world at a mixture of time scales. : Temporal-difference (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem|that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985).
Target text information: Multi-time Models for Temporally Abstract Planning: Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning. Current model-based reinforcement learning is based on one-step models that cannot represent common-sense higher-level actions, such as going to lunch, grasping an object, or flying to Denver. This paper generalizes prior work on temporally abstract models (Sutton, 1995b) and extends it from the prediction setting to include actions, control, and planning. We introduce a more general form of temporally abstract model, the multi-time model, and establish its suitability for planning and learning by virtue of its relationship to Bellman equations. This paper summarizes the theoretical framework of multi-time models and illustrates their potential advantages in a gridworld planning task. The need for hierarchical and abstract planning is a fundamental problem in AI (see, e.g., Sacerdoti, 1977; Laird et al., 1986; Korf, 1985; Kaelbling, 1993; Dayan & Hinton, 1993). Model-based reinforcement learning offers a possible solution to the problem of integrating planning with real-time learning and decision-making (Peng & Williams, 1993; Moore & Atkeson, 1993; Sutton and Barto, in press). However, current model-based reinforcement learning is based on one-step models that cannot represent common-sense, higher-level actions. Modeling such actions requires the ability to handle different, interrelated levels of temporal abstraction. A new approach to modeling at multiple time scales was introduced by Sutton (1995b) based on prior work by Singh (1992), Dayan (1993b), and Sutton and Pinette (1985). This approach enables models of the environment at different temporal scales to be intermixed, producing temporally abstract models. However, that work was concerned only with predicting the environment.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 831 | test |
1-hop neighbor's text information: A hypothesis-driven constructive induction approach to expanding neural networks. : With most machine learning methods, if the given knowledge representation space is inadequate then the learning process will fail. This is also true with methods using neural networks as the form of the representation space. To overcome this limitation, an automatic construction method for a neural network is proposed. This paper describes the BP-HCI method for a hypothesis-driven constructive induction in a neural network trained by the backpropagation algorithm. The method searches for a better representation space by analyzing the hypotheses generated in each step of an iterative learning process. The method was applied to ten problems, which include, in particular, exclusive-or, MONK2, parity-6BIT and inverse parity-6BIT problems. All problems were successfully solved with the same initial set of parameters; the extension of representation space was no more than necessary extension for each problem.
1-hop neighbor's text information: Parity: the problem that won't go away. : It is well-known that certain learning methods (e.g., the perceptron learning algorithm) cannot acquire complete, parity mappings. But it is often overlooked that state-of-the-art learning methods such as C4.5 and backpropagation cannot generalise from incomplete parity mappings. The failure of such methods to generalise on parity mappings may be sometimes dismissed on the grounds that it is 'impossible' to generalise over such mappings, or that parity problems are mathematical constructs having little to do with real-world learning. However, this paper argues that such a dismissal is unwarranted. It shows that parity mappings are hard to learn because they are statistically neutral and that statistical neutrality is a property which we should expect to encounter frequently in real-world contexts. It also shows that the generalization failure on parity mappings occurs even when large, minimally incomplete mappings are used for training purposes, i.e., when claims about the impossibility of generalization are particularly suspect.
1-hop neighbor's text information: constructive induction of M-of-N concepts for discriminators in decision trees. : We discuss an approach to constructing composite features during the induction of decision trees. The composite features correspond to m-of-n concepts. There are three goals of this research. First, we explore a family of greedy methods for building m-of-n concepts (one of which, GS, is described in this paper). Second, we show how these concepts can be formed as internal nodes of decision trees, serving as a bias to the learner. Finally, we evaluate the method on several artificially generated and naturally occurring data sets to determine the effects of this bias.
Target text information: Discovering Representation Space Transformations for Learning Concept Descriptions Combining DNF and M-of-N Rules, Workshop on Constructive Induction and Change of Representation, : This paper addresses a class of learning problems that require a construction of descriptions that combine both M-of-N rules and traditional Disjunctive Normal form (DNF) rules. The presented method learns such descriptions, which we call conditional M-of-N rules, using the hypothesis-driven constructive induction approach. In this approach, the representation space is modified according to patterns discovered in the iteratively generated hypotheses. The need for the M-of-N rules is detected by observing "exclusive-or" or "equivalence" patterns in the hypotheses. These patterns indicate symmetry relations among pairs of attributes. Symmetrical attributes are combined into maximal symmetry classes. For each symmetry class, the method constructs a "counting attribute" that adds a new dimension to the representation space. The search for hypothesis in iteratively modified representation spaces is done by the standard AQ inductive rule learning algorithm. It is shown that the proposed method is capable of solving problems that would be very difficult to tackle by any of the traditional symbolic learning methods.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 0 | Rule Learning | cora | 1,029 | train |
1-hop neighbor's text information: Generative Learning Structures for Generalized Connectionist Networks. : Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative for they offer a set of mechanisms for constructive and adaptive determination of the network architecture (the number of processing elements and the connectivity among them) as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of some approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology (e.g., rather slow learning and the need for an a-priori choice of a network architecture). Several alternative designs as well as a range of control structures and processes which can be used to regulate the form and content of internal representations learned by such networks are examined. Empirical results from the study of some generative learning algorithms are briefly summarized and several extensions and refinements of such algorithms, and directions for future research are outlined.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: A Framework for Combining Symbolic and Neural Learning. In: Artificial Intelligence and Neural Networks: Steps Toward Principled Integration. Honavar, : Technical Report 1123, Computer Sciences Department, University of Wisconsin - Madison, Nov. 1992 ABSTRACT This article describes an approach to combining symbolic and connectionist approaches to machine learning. A three-stage framework is presented and the research of several groups is reviewed with respect to this framework. The first stage involves the insertion of symbolic knowledge into neural networks, the second addresses the refinement of this prior knowledge in its neural representation, while the third concerns the extraction of the refined symbolic knowledge. Experimental results and open research issues are discussed. A shorter version of this paper will appear in Machine Learning.
Target text information: Symbolic and Subsymbolic Learning for Vision: Some Possibilities: Robust, flexible and sufficiently general vision systems such as those for recognition and description of complex 3-dimensional objects require an adequate armamentarium of representations and learning mechanisms. This paper briefly analyzes the strengths and weaknesses of different learning paradigms such as symbol processing systems, connectionist networks, and statistical and syntactic pattern recognition systems as possible candidates for providing such capabilities and points out several promising directions for integrating multiple such paradigms in a synergistic fashion towards that goal.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 408 | test |
1-hop neighbor's text information: Wavelet Thresholding via a Bayesian Approach. : We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in non-parametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a relation between the hyperparameters of the prior model and the parameters of those Besov spaces within which realizations from the prior will fall. Such a relation gives insight into the meaning of the Besov space parameters. Moreover, the established relation makes it possible in principle to incorporate prior knowledge about the function's regularity properties into the prior model for its wavelet coefficients. However, prior knowledge about a function's regularity properties might be hard to elicit; with this in mind, we propose a standard choice of prior hyperparameters that works well in our examples. Several simulated examples are used to illustrate our method, and comparisons are made with other thresholding methods. We also present an application to a data set collected in an anaesthesiological study.
1-hop neighbor's text information: M (1992a). Minimax risk over ℓp-balls for ℓq loss. : Consider estimating the mean vector θ from data Nn(θ, σ²I) with ℓq norm loss, q ≥ 1, when θ is known to lie in an n-dimensional ℓp ball, p ∈ (0, ∞). For large n, the ratio of minimax linear risk to minimax risk can be arbitrarily large if p < q. Obvious exceptions aside, the limiting ratio equals 1 only if p = q = 2. Our arguments are mostly indirect, involving a reduction to a univariate Bayes minimax problem. When p < q, simple non-linear co-ordinatewise threshold rules are asymptotically minimax at small signal-to-noise ratios, and within a bounded factor of asymptotic minimaxity in general. Our results are basic to a theory of estimation in Besov spaces.
1-hop neighbor's text information: I.M.: Adapting to unknown smoothness via wavelet shrinkage. : We attempt to recover a function of unknown smoothness from noisy, sampled data. We introduce a procedure, SureShrink, which suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein Unbiased Estimate of Risk (Sure) for threshold estimates. The computational effort of the overall procedure is order N log(N) as a function of the sample size N. SureShrink is smoothness-adaptive: if the unknown function contains jumps, the reconstruction (essentially) does also; if the unknown function has a smooth piece, the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness-adaptive: it is near-minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoothing methods (kernels, splines, and orthogonal series estimates), even with optimal choices of the smoothing parameter, would be unable to perform in a near-minimax way over many spaces in the Besov scale. Acknowledgements. The first author was supported at U.C. Berkeley by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from AT&T Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12, and by a grant from AT&T Foundation.
Target text information: Minimax Bayes, asymptotic minimax and sparse wavelet priors. In Statistical Decision Theory and Related Topics, V, : Pinsker (1980) gave a precise asymptotic evaluation of the minimax mean squared error of estimation of a signal in Gaussian noise when the signal is known a priori to lie in a compact ellipsoid in Hilbert space. This 'Minimax Bayes' method can be applied to a variety of global non-parametric estimation settings with parameter spaces far from ellipsoidal. For example it leads to a theory of exact asymptotic minimax estimation over norm balls in Besov and Triebel spaces using simple co-ordinatewise estimators and wavelet bases. This paper outlines some features of the method common to several applications. In particular, we derive new results on the exact asymptotic minimax risk over weak ℓp balls in Rⁿ as n → ∞, and also for a class of 'local' estimators on the Triebel scale. By its very nature, the method reveals the structure of asymptotically least favorable distributions. Thus we may simulate 'least favorable' sample paths. We illustrate this for estimation of a signal in Gaussian white noise over norm balls in certain Besov spaces. In wavelet bases, when p < 2, the least favorable priors are sparse, and the resulting sample paths strikingly different from those observed in Pinsker's ellipsoidal setting (p = 2). Acknowledgements. I am grateful for many conversations with David Donoho and Carl Taswell, and to a referee for helpful comments. This work was supported in part by NSF grants DMS 84-51750, 9209130, and NIH PHS grant GM21215-12.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 2,218 | test |
1-hop neighbor's text information: Giles P.C., and Collingwood, "Finite state machines and recurrent neural networks - automata and dynamical systems approaches", :
1-hop neighbor's text information: Using Prior Knowledge in a NNDPA to Learn Context-Free Languages, : Although considerable interest has been shown in language inference and automata induction using recurrent neural networks, success of these models has mostly been limited to regular languages. We have previously demonstrated that the Neural Network Pushdown Automaton (NNPDA) model is capable of learning deterministic context-free languages (e.g., aⁿbⁿ and parenthesis languages) from examples. However, the learning task is computationally intensive. In this paper we discuss some ways in which a priori knowledge about the task and data could be used for efficient learning. We also observe that such knowledge is often an experimental prerequisite for learning nontrivial languages (e.g., aⁿbⁿcbᵐaᵐ).
1-hop neighbor's text information: Kalman, An Adaptive Neural Network Parser, : We investigate the applicability of an adaptive neural network to problems with time-dependent input by demonstrating that a deterministic parser for natural language inputs of significant syntactic complexity can be developed using recurrent connectionist architectures. The traditional stacking mechanism, known to be necessary for proper treatment of context-free languages in symbolic systems, is absent from the design, having been subsumed by recurrency in the network.
Target text information: "Learning context-free grammars: Limitations of a recurrent neural network with an external stack memory," : This work describes an approach for inferring Deterministic Context-free (DCF) Grammars in a Connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying pushdown automaton from examples of grammatical and non-grammatical strings. Not only does the network learn the state transitions in the automaton, it also learns the actions required to control the stack. In order to use continuous optimization methods, we develop an analog stack which reverts to a discrete stack by quantization of all activations, after the network has learned the transition rules and stack actions. We further show an enhancement of the network's learning capabilities by providing hints. In addition, an initial comparative study of simulations with first, second and third order recurrent networks has shown that the increased degree of freedom in a higher order networks improve generalization but not necessarily learning speed.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,134 | test |
1-hop neighbor's text information: Local Feedforward Networks: Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are called spatially local networks. These networks are often used in neurocontrol, in online applications, where, because of the real time nature of the task, interference is often a problem. Although there are heuristics as to what makes a network local, there is no theoretical framework for measuring localization. This paper provides a formal definition of interference and localization that will allow us to measure a network's local properties. These definitions will be useful in developing learning algorithms that make networks more local. This may lead to faster learning over the entire input domain.
1-hop neighbor's text information: Adaptive Wavelet Control of Nonlinear Systems: This paper considers the design and analysis of adaptive wavelet control algorithms for uncertain nonlinear dynamical systems. The Lyapunov synthesis approach is used to develop a state-feedback adaptive control scheme based on nonlinearly parametrized wavelet network models. Semi-global stability results are obtained under the key assumption that the system uncertainty satisfies a "matching" condition. The localization properties of adaptive networks are discussed and formal definitions of interference and localization measures are proposed.
Target text information: "An analytical framework for local feedforward networks," : Although feedforward neural networks are well suited to function approximation, in some applications networks experience problems when learning a desired function. One problem is interference which occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To understand these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed that incorporates not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal multi-layer perceptron (MLP) networks that employ the back-prop learning algorithm, we address a familiar misconception that sigmoidal networks are inherently non-local by demonstrating that given a sufficiently large number of adjustable parameters, sigmoidal MLPs can be made arbitrarily local while retaining the ability to represent any continuous function on a compact domain.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,514 | test |
1-hop neighbor's text information: On the complexity of conditional logics. : Conditional logics, introduced by Lewis and Stalnaker, have been utilized in artificial intelligence to capture a broad range of phenomena. In this paper we examine the complexity of several variants discussed in the literature. We show that, in general, deciding satisfiability is PSPACE-complete for formulas with arbitrary conditional nesting and NP-complete for formulas with bounded nesting of conditionals. However, we provide several exceptions to this rule. Of particular note are results showing that (a) when assuming uniformity (i.e., that all worlds agree on what worlds are possible), the decision problem becomes EXPTIME-complete even for formulas with bounded nesting, and (b) when assuming absoluteness (i.e., that all worlds agree on all conditional statements), the decision problem is NP-complete for formulas with arbitrary nesting.
Target text information: Updates and Counterfactuals: We study the problem of combining updates (a special instance of theory change) and counterfactual conditionals in propositional knowledgebases. Intuitively, an update means that the world described by the knowledgebase has changed. This is opposed to revisions (another instance of theory change) where our knowledge about a static world changes. A counterfactual implication is a statement of the form 'If A were the case, then B would also be the case', where the negation of A may be derivable from our current knowledge. We present a decidable logic, called VCU2, that has both update and counterfactual implication as connectives in the object language. Our update operator is a generalization of operators previously proposed and studied in the literature. We show that our operator satisfies certain postulates set forth for any reasonable update. The logic VCU2 is an extension of D. K. Lewis' logic VCU for counterfactual conditionals. The semantics of VCU2 is that of a multimodal propositional calculus, and is based on possible worlds. The infamous Ramsey Rule becomes a derivation rule in our sound and complete axiomatization. We then show that Gärdenfors' Triviality Theorem, about the impossibility to combine theory change and counterfactual conditionals via the Ramsey Rule, does not hold in our logic. It is thus seen that the Triviality Theorem applies only to revision operators, not to updates. (A preliminary version of this paper was presented at the Second International Conference on Principles of Knowledge Representation and Reasoning, Cambridge, Massachusetts, April 22-25, 1991. The work was partially performed while the author was visiting the Department of Computer Science at the University of Toronto.)
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 838 | test |
1-hop neighbor's text information: Using Bayesian networks for incorporating probabilistic a priori knowledge into Boltzmann machines. : We present a method for automatically determining the structure and the connection weights of a Boltzmann machine corresponding to a given Bayesian network representation of a probability distribution on a set of discrete variables. The resulting Boltzmann machine structure can be implemented efficiently on massively parallel hardware, since the structure can be divided into two separate clusters where all the nodes in one cluster can be updated simultaneously. The updating process of the Boltzmann machine approximates a Gibbs sampling process of the original Bayesian network in the sense that the Boltzmann machine converges to the same final state as the Gibbs sampler does. The mapping from a Bayesian network to a Boltzmann machine can be seen as a method for incorporating probabilistic a priori information into a neural network architecture, which can then be trained further with existing learning algorithms.
1-hop neighbor's text information: Induction of Recursive Bayesian Classifiers In Proc. : We present an algorithm for inducing Bayesian networks using feature selection. The algorithm selects a subset of attributes that maximizes predictive accuracy prior to the network learning phase, thereby incorporating a bias for small networks that retain high predictive accuracy. We compare the behavior of this selective Bayesian network classifier with that of (a) Bayesian network classifiers that incorporate all attributes, (b) selective and non-selective naive Bayesian classifiers, and (c) the decision-tree algorithm C4.5. With respect to (a), we show that our approach generates networks that are computationally simpler to evaluate but display comparable predictive accuracy. With respect to (b), we show that the selective Bayesian network classifier performs significantly better than both versions of the naive Bayesian classifier on almost all databases studied, and hence is an enhancement of the naive method. With respect to (c), we show that the selective Bayesian network classifier displays comparable behavior.
1-hop neighbor's text information: Learning in neural networks with Bayesian prototypes. : Given a set of samples of a probability distribution on a set of discrete random variables, we study the problem of constructing a good approximative neural network model of the underlying probability distribution. Our approach is based on an unsupervised learning scheme where the samples are first divided into separate clusters, and each cluster is then coded as a single vector. These Bayesian prototype vectors consist of conditional probabilities representing the attribute-value distribution inside the corresponding cluster. Using these prototype vectors, it is possible to model the underlying joint probability distribution as a simple Bayesian network (a tree), which can be realized as a feedforward neural network capable of probabilistic reasoning. In this framework, learning means choosing the size of the prototype set, partitioning the samples into the corresponding clusters, and constructing the cluster prototypes. We describe how the prototypes can be determined, given a partition of the samples, and present a method for evaluating the likelihood of the corresponding Bayesian tree. We also present a greedy heuristic for searching through the space of different partition schemes with different numbers of clusters, aiming at an optimal approximation of the probability distribution.
Target text information: Learning Bayesian Prototype Trees by Simulated Annealing: Given a set of samples of an unknown probability distribution, we study the problem of constructing a good approximative Bayesian network model of the probability distribution in question. This task can be viewed as a search problem, where the goal is to find a maximal probability network model, given the data. In this work, we do not make an attempt to learn arbitrarily complex multi-connected Bayesian network structures, since such resulting models can be unsuitable for practical purposes due to the exponential amount of time required for the reasoning task. Instead, we restrict ourselves to a special class of simple tree-structured Bayesian networks called Bayesian prototype trees, for which a polynomial time algorithm for Bayesian reasoning exists. We show how the probability of a given Bayesian prototype tree model can be evaluated, given the data, and how this evaluation criterion can be used in a stochastic simulated annealing algorithm for searching the model space. The simulated annealing algorithm provably finds the maximal probability model, provided that a sufficient amount of time is used.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 324 | test |
1-hop neighbor's text information: Regression shrinkage and selection via the lasso. : We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.
1-hop neighbor's text information: A practical Bayesian framework for backpropagation networks. : A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. (This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a); this framework is due to Gull and Skilling (Gull, 1989a).)
1-hop neighbor's text information: Adaptive noise injection for input relevance determination. : In this paper we consider the application of training with noise in multi-layer perceptrons to input-variable relevance determination. Noise injection is modified in order to penalize irrelevant features. The proposed algorithm is attractive as it requires the tuning of a single parameter. This parameter controls the penalization of the inputs together with the complexity of the model. After the presentation of the method, experimental evidence is given on simulated data sets.
Target text information: Least Absolute Shrinkage is Equivalent to Quadratic Penalization: Adaptive ridge is a special form of ridge regression, balancing the quadratic penalization on each parameter of the model. This paper shows the equivalence between adaptive ridge and lasso (least absolute shrinkage and selection operator). This equivalence states that both procedures produce the same estimate. Least absolute shrinkage can thus be viewed as a particular quadratic penalization. From this observation, we derive an EM algorithm to compute the lasso solution. We finally present a series of applications of this type of algorithm in regression problems: kernel regression, additive modeling and neural net training.
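The equivalence can be turned into an iteratively reweighted ridge procedure; the sketch below illustrates the fixed-point idea (writing the penalty λΣ|β_j| as λΣ β_j²/|β_j| with the weights frozen at the previous iterate), not the paper's exact EM algorithm.

```python
import numpy as np

def lasso_via_adaptive_ridge(X, y, lam, n_iter=100, eps=1e-8):
    """Approximate the lasso solution by repeated weighted ridge solves."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares start
    for _ in range(n_iter):
        # Quadratic penalty weighted by 1/|beta_j|: large coefficients are
        # penalised lightly, tiny ones are driven toward zero.
        d = np.abs(beta) + eps
        A = X.T @ X + lam * np.diag(1.0 / d)
        beta = np.linalg.solve(A, X.T @ y)
    return beta
```

Coefficients only reach exactly zero in the limit here; production lasso solvers use soft-thresholding or coordinate descent instead.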
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 599 | split: test
1-hop neighbor's text information: A parallel island model genetic algorithm for the multiprocessor scheduling problem. : In this paper we compare the performance of a serial and a parallel island model Genetic Algorithm for solving the Multiprocessor Scheduling Problem. We show results using fixed and scaled problems both using and not using migration. We have found that in addition to providing a speedup through the use of parallel processing, the parallel island model GA with migration finds better quality solutions than the serial GA.
1-hop neighbor's text information: A User Friendly Workbench for Order-Based Genetic Algorithm Research, : Over the years there have been several packages developed that provide a workbench for genetic algorithm (GA) research. Most of these packages use the generational model inspired by GENESIS. A few have adopted the steady-state model used in Genitor. Unfortunately, they have some deficiencies when working with order-based problems such as packing, routing, and scheduling. This paper describes LibGA, which was developed specifically for order-based problems, but which also works easily with other kinds of problems. It offers an easy-to-use `user-friendly' interface and allows comparisons to be made between both generational and steady-state genetic algorithms for a particular problem. It includes a variety of genetic operators for reproduction, crossover, and mutation. LibGA makes it easy to use these operators in new ways for particular applications or to develop and include new operators. Finally, it offers the unique new feature of a dynamic generation gap.
1-hop neighbor's text information: A heuristic for improved genetic bin packing. : University of Tulsa Technical Report UTULSA-MCS-93-8, May, 1993. Submitted to Information Processing Letters, May, 1993.
Target text information: Reducing disruption of superior building blocks in genetic algorithms. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2234 | split: test
1-hop neighbor's text information: (1995) Cellular Encoding Applied to Neurocontrol Proc. : Neural networks are trained for balancing 1 and 2 poles attached to a cart on a fixed track. For one variant of the single pole system, only pole angle and cart position variables are supplied as inputs; the network must learn to compute velocities. All of the problems are solved using a fixed architecture and using a new version of cellular encoding that evolves an application specific architecture with real-valued weights. The learning times and generalization capabilities are compared for neural networks developed using both methods. After a post processing simplification, topologies produced by cellular encoding were very simple and could be analyzed. Architectures with no hidden units were produced for the single pole and the two pole problem when velocity information is supplied as an input. Moreover, these linear solutions display good generalization. For all the control problems, cellular encoding can automatically generate architectures whose complexity and structure reflect the features of the problem to solve.
1-hop neighbor's text information: Automatic Definition of Modular Neural Networks. Adaptive Behavior, :
1-hop neighbor's text information: Discovery of symbolic, neuro-symbolic and neural networks with parallel distributed genetic programming. : Technical Report CSRP-96-14, August 1996. Genetic Programming is a method of program discovery consisting of a special kind of genetic algorithm capable of operating on parse trees representing programs and an interpreter which can run the programs being optimised. This paper describes Parallel Distributed Genetic Programming (PDGP), a new form of genetic programming which is suitable for the development of parallel programs in which symbolic and neural processing elements can be combined in a free and natural way. PDGP is based on a graph-like representation for parallel programs which is manipulated by crossover and mutation operators which guarantee the syntactic correctness of the offspring. The paper describes these operators and reports some results obtained with the exclusive-or problem.
Target text information: "Adding Learning to the Cellular development of Neural Networks: Evolution and the Baldwin Effect," : This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms. Direct Encoding encodes the weights for an a priori fixed neural network architecture. Cellular Encoding encodes both weights and the architecture of the neural network. In previous studies, Direct Encoding and Cellular Encoding have been used to create neural networks for balancing 1 and 2 poles attached to a cart on a fixed track. The poles are balanced by a controller that pushes the cart to the left or the right. In some cases velocity information about the pole and cart is provided as an input; in other cases the network must learn to balance a single pole without velocity information. A careful study of the behavior of these systems suggests that it is possible to balance a single pole with velocity information as an input and without learning to compute the velocity. A new fitness function is introduced that forces the neural network to compute the velocity. By using this new fitness function and tuning the syntactic constraints used with cellular encoding, we achieve a tenfold speedup over our previous study and solve a more difficult problem: balancing two poles when no information about the velocity is provided as input.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 1268 | split: test
1-hop neighbor's text information: Learning Decision Trees from Decision Rules: A method and initial results from a comparative study. : A standard approach to determining decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision making situations. Such problems arise, for example, when an attribute assigned to some node cannot be measured, or there is a significant change in the costs of measuring attributes or in the frequency distribution of events from different decision classes. An attractive approach to resolving this problem is to learn and store knowledge in the form of decision rules, and to generate from them, whenever needed, a decision tree that is most suitable in a given situation. An additional advantage of such an approach is that it facilitates building compact decision trees, which can be much simpler than the logically equivalent conventional decision trees (by compact trees are meant decision trees that may contain branches assigned a set of values, and nodes assigned derived attributes, i.e., attributes that are logical or mathematical functions of the original ones). The paper describes an efficient method, AQDT-1, that takes decision rules generated by an AQ-type learning system (AQ15 or AQ17), and builds from them a decision tree optimizing a given optimality criterion. The method can work in two modes: the standard mode, which produces conventional decision trees, and compact mode, which produces compact decision trees. The preliminary experiments with AQDT-1 have shown that the decision trees generated by it from decision rules (conventional and compact) have outperformed those generated from examples by the well-known C4.5 program both in terms of their simplicity and their predictive accuracy.
1-hop neighbor's text information: LEARNING FOR DECISION MAKING: The FRD Approach and a Comparative Study (Machine Learning and Inference Laboratory): This paper concerns the issue of what is the best form for learning, representing and using knowledge for decision making. The proposed answer is that such knowledge should be learned and represented in a declarative form. When needed for decision making, it should be efficiently transferred to a procedural form that is tailored to the specific decision making situation. Such an approach combines advantages of the declarative representation, which facilitates learning and incremental knowledge modification, and the procedural representation, which facilitates the use of knowledge for decision making. This approach also allows one to determine decision structures that may avoid attributes that are unavailable or difficult to measure in any given situation. Experimental investigations of the system, FRD-1, have demonstrated that decision structures obtained via the declarative route often not only have higher predictive accuracy but are also simpler than those learned directly from facts.
1-hop neighbor's text information: R.S. and Imam, I.F. On Learning Decision Structures. : A decision structure is an acyclic graph that specifies an order of tests to be applied to an object (or a situation) to arrive at a decision about that object, and serves as a simple and powerful tool for organizing a decision process. This paper proposes a methodology for learning decision structures that are oriented toward specific decision making situations. The methodology consists of two phases: (1) determining and storing declarative rules describing the decision process, and (2) deriving online a decision structure from the rules. The first step is performed by an expert or by an AQ-based inductive learning program that learns decision rules from examples of decisions (AQ15 or AQ17). The second step transforms the decision rules to a decision structure that is most suitable for the given decision making situation. The system, AQDT-2, implementing the second step, has been applied to a problem in construction engineering. In the experiments, AQDT-2 outperformed all other programs applied to the same problem in terms of the accuracy and the simplicity of the generated decision structures. Key words: machine learning, inductive learning, decision structures, decision rules, attribute selection.
Target text information: The Estimation of Probabilities in Attribute Selection Measures for Decision Structure Induction, in Proceedings of the European Summer School on Machine Learning. : In this paper we analyze two well-known measures for attribute selection in decision tree induction, informativity and gini index. In particular, we are interested in the influence of different methods for estimating probabilities on these two measures. The results of experiments show that different measures, obtained by different probability estimation methods, can determine different preferential orders of attributes in a given node. Therefore, they determine the structure of a constructed decision tree. This feature can be very beneficial, especially in real-world applications where several different trees are often required.
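A small sketch of the quantities involved: both measures are computed from estimated class probabilities, and the estimator is pluggable. The m-estimate form below is an assumption for illustration (m = 0 gives relative frequency; m equal to the number of classes, with a uniform prior, gives the Laplace estimate).

```python
import math
from collections import Counter

def class_probs(labels, classes, m=0):
    """Estimate class probabilities with an m-estimate and a uniform prior."""
    counts = Counter(labels)
    n, k = len(labels), len(classes)
    return [(counts[c] + m / k) / (n + m) for c in classes]

def entropy(probs):
    """Informativity is based on this entropy of the class distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gini(probs):
    """Gini index of the class distribution."""
    return 1.0 - sum(p * p for p in probs)
```

Ranking attributes by the weighted decrease of either impurity under different values of m can change which attribute wins at a node, which is the effect the abstract studies.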
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 0 | category: Rule Learning | dataset: cora | node_id: 1000 | split: test
1-hop neighbor's text information: Learning to predict reading frames in E. coli DNA sequences. : Two fundamental problems in analyzing DNA sequences are (1) locating the regions of a DNA sequence that encode proteins, and (2) determining the reading frame for each region. We investigate using artificial neural networks (ANNs) to find coding regions, determine reading frames, and detect frameshift errors in E. coli DNA sequences. We describe our adaptation of the approach used by Uberbacher and Mural to identify coding regions in human DNA, and we compare the performance of ANNs to several conventional methods for predicting reading frames. Our experiments demonstrate that ANNs can outperform these conventional approaches.
1-hop neighbor's text information: R.M. Cameron-Jones (1994), Exploring a Framework for Instance Based Learning and Naive Bayesian Classifiers, : The relative performance of different methods for classifier learning varies across domains. Some recent Instance Based Learning (IBL) methods, such as IB1-MVDM*, use similarity measures based on conditional class probabilities. These probabilities are a key component of Naive Bayes methods. Given this commonality of approach, it is of interest to consider how the differences between the two methods are linked to their relative performance in different domains. Here we interpret Naive Bayes in an IBL like framework, identifying differences between Naive Bayes and IB1-MVDM* in this framework. Experiments on variants of IB1-MVDM* that lie between it and Naive Bayes in the framework are conducted on sixteen domains. The results strongly suggest that the relative performance of Naive Bayes and IB1-MVDM* is linked to the extent to which each class can be satisfactorily represented by a single instance in the IBL framework. However, this is not the only factor that appears significant.
1-hop neighbor's text information: In defense of C4.5: Notes on learning one-level decision trees, : We discuss the implications of Holte's recently-published article, which demonstrated that on the most commonly used data very simple classification rules are almost as accurate as decision trees produced by Quinlan's C4.5. We consider, in particular, what is the significance of Holte's results for the future of top-down induction of decision trees. To an extent, Holte questioned the sense of further research on multilevel decision tree learning. We go in detail through all the parts of Holte's study. We try to put the results into perspective. We argue that the (in absolute terms) small difference in accuracy between 1R and C4.5 that was witnessed by Holte is still significant. We claim that C4.5 possesses additional accuracy-related advantages over 1R. In addition we discuss the representativeness of the databases used by Holte. We compare empirically the optimal accuracies of multilevel and one-level decision trees and observe some significant differences. We point out several deficiencies of limited-complexity classifiers.
Target text information: Learning to represent codons: A challenge problem for constructive induction. : The ability of an inductive learning system to find a good solution to a given problem is dependent upon the representation used for the features of the problem. Systems that perform constructive induction are able to change their representation by constructing new features. We describe an important, real-world problem, finding genes in DNA, that we believe offers an interesting challenge to constructive-induction researchers. We report experiments that demonstrate that: (1) two different input representations for this task result in significantly different generalization performance for both neural networks and decision trees; and (2) both neural and symbolic methods for constructive induction fail to bridge the gap between these two representations. We believe that this real-world domain provides an interesting challenge problem for constructive induction because the relationship between the two representations is well known, and because the representational shift involved in constructing the better representation is not imposing.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 1681 | split: test
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: A comparison of direct and model-based reinforcement learning. : This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We find that in this task model-based approaches support reinforcement learning from smaller amounts of training data and efficient handling of changing goals.
1-hop neighbor's text information: Integrated Architectures for Learning, Planning and Reacting Based on Approximating Dynamic Programming, : This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments.
Target text information: Least-Squares Temporal Difference Learning: Submitted to NIPS-98. TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it makes inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto [5] eliminates all stepsize parameters and improves data efficiency. This paper extends Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting algorithm is shown to be a practical formulation of supervised linear regression. Third, it presents a novel, intuitive interpretation of LSTD as a model-based reinforcement learning technique.
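For concreteness, here is a batch LSTD(λ) sketch for linear value estimation; it follows the standard A·θ = b formulation with eligibility traces, not the paper's exact pseudocode, and the small ridge term is an added stabilizer.

```python
import numpy as np

def lstd(trajectories, gamma=0.95, lam=0.0, ridge=1e-6):
    """Batch LSTD(lambda); returns theta with V(s) ~ theta . phi(s).

    Each trajectory is a list of (phi, reward, phi_next) triples;
    phi_next is the zero vector at episode termination.
    """
    k = len(trajectories[0][0][0])
    A = np.zeros((k, k))
    b = np.zeros(k)
    for traj in trajectories:
        z = np.zeros(k)                       # eligibility trace
        for phi, r, phi_next in traj:
            z = gamma * lam * z + phi         # decay and accumulate the trace
            A += np.outer(z, phi - gamma * phi_next)
            b += z * r
    return np.linalg.solve(A + ridge * np.eye(k), b)
```

Setting lam=0 recovers Bradtke and Barto's LSTD; with lam=1 the accumulated statistics reduce to those of a linear regression onto observed returns, matching the abstract's claim.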
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 5 | category: Reinforcement Learning | dataset: cora | node_id: 141 | split: test
1-hop neighbor's text information: On the sample complexity of weak learning. : While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching. In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension. Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class. (A preliminary version of this paper appeared in the Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 303-314, August 1991. Most of this research was carried out while both authors were at MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Grant CCR-88914428, and a grant from the Siemens Corporation. S. Goldman is currently supported in part by a G.E. Foundation Junior Faculty Grant and NSF Grant CCR-9110108.)
1-hop neighbor's text information: Learning conjunctions of Horn clauses. :
1-hop neighbor's text information: "The Power of Self-Directed Learning", : This paper studies self-directed learning, a variant of the on-line learning model in which the learner selects the presentation order for the instances. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, k-term DNF formulas, and orthogonal rectangles in f0; 1; ; n1g d . These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then prove that the model of self-directed learning is more powerful than all other commonly used on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis dimension. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes. fl Supported in part by a GE Foundation Junior Faculty Grant and NSF Grant CCR-9110108. Part of this research was conducted while the author was at the M.I.T. Laboratory for Computer Science and supported by NSF grant DCR-8607494 and a grant from the Siemens Corporation. Net address: [email protected].
Target text information: Teaching a Smarter Learner: We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. We also consider the problem of designing teacher/learner pairs in which both the teacher and learner are polynomial-time algorithms and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 4 | category: Theory | dataset: cora | node_id: 367 | split: test
1-hop neighbor's text information: Inductive Constraint Logic. : A novel approach to learning first order logic formulae from positive and negative examples is presented. Whereas present inductive logic programming systems employ examples as true and false ground facts (or clauses), we view examples as interpretations which are true or false for the target theory. This viewpoint allows us to reconcile the inductive logic programming paradigm with classical attribute value learning in the sense that the latter is a special case of the former. Because of this property, we are able to adapt AQ and CN2 type algorithms in order to enable learning of full first order formulae. However, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form, we will use a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples. This representation duality also reverses the role of positive and negative examples, both in the heuristics and in the algorithm. The resulting theory is incorporated in a system named ICL (Inductive Constraint Logic).
1-hop neighbor's text information: Application of Clausal Discovery to Temporal Databases: Most KDD applications consider databases as static objects; however, many databases are inherently temporal, i.e., they store the evolution of each object with the passage of time. Thus, regularities about the dynamics of these databases cannot be discovered as the current state might depend in some way on the previous states. To this end, a pre-processing of data is needed, aimed at extracting relationships intimately connected to the temporal nature of data that will be made available to the discovery algorithm. The predicate logic language of ILP methods together with the recent advances in efficiency makes them adequate for this task.
1-hop neighbor's text information: Learning with Abduction: We investigate how abduction and induction can be integrated into a common learning framework through the notion of Abductive Concept Learning (ACL). ACL is an extension of Inductive Logic Programming (ILP) to the case in which both the background and the target theory are abductive logic programs and where an abductive notion of entailment is used as the coverage relation. In this framework, it is then possible to learn with incomplete information about the examples by exploiting the hypothetical reasoning of abduction. The paper presents the basic framework of ACL with its main characteristics and illustrates its potential in addressing several problems in ILP such as learning with incomplete information and multiple predicate learning. An algorithm for ACL is developed by suitably extending the top-down ILP method for concept learning and integrating this with an abductive proof procedure for Abductive Logic Programming (ALP). A prototype system has been developed and applied to learning problems with incomplete information. The particular role of integrity constraints in ACL is investigated showing ACL as a hybrid learning framework that integrates the explanatory (discriminant) and descriptive (characteristic) settings of ILP.
Target text information: The ILP description learning problem: Towards a general model-level definition of data mining in ILP. : [email protected], [email protected] Proc. FGML-95, Annual Workshop of the GI Special Interest Group Machine Learning (GI FG 1.1.3), ed. K. Morik and J. Herrmann, Research Report 580, Univ. Dortmund, 1995. The task of discovering interesting regularities in (large) sets of data (data mining, knowledge discovery) has recently met with increased interest in Machine Learning in general and in Inductive Logic Programming (ILP) in particular. However, while there is a widely accepted definition for the task of concept learning from examples in ILP, definitions for the data mining task have been proposed only recently. In this paper, we examine these so-called "non-monotonic semantics" definitions and show that non-monotonicity is only an incidental property of the data mining learning task, and that this task makes perfect sense without such an assumption. We therefore introduce and define a generalized definition of the data mining task called the ILP description learning problem and discuss its properties and relation to the traditional concept learning (prediction) problem. Since our characterization is entirely on the level of models, the definition applies independently of the chosen hypothesis language.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 0 | category: Rule Learning | dataset: cora | node_id: 1309 | split: val
1-hop neighbor's text information: Plasticity-Mediated Competitive Learning: Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited to sparse representations, while decorrelation and factorization schemes that support distributed representations are computationally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski, 1993); the same approach could be used to improve other learning algorithms.
1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. (Please send comments to [email protected]. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523.)
1-hop neighbor's text information: Unsupervised learning procedures for neural networks. : Technical report CNS-TR-95-1 Center for Neural Systems McMaster University
Target text information: A non-linear information maximisation algorithm that performs blind separation. : A new learning algorithm is derived which performs online stochastic gradient ascent in the mutual information between outputs and inputs of a network. In the absence of a priori knowledge about the `signal' and `noise' components of the input, propagation of information depends on calibrating network non-linearities to the detailed higher-order moments of the input density functions. By incidentally minimising mutual information between outputs, as well as maximising their individual entropies, the network `factorises' the input into independent components. As an example application, we have achieved near-perfect separation of ten digitally mixed speech signals. Our simulations lead us to believe that our network performs better at blind separation than the Herault-Jutten network, reflecting the fact that it is derived rigorously from the mutual information objective.
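A minimal stochastic-gradient sketch of this style of update (logistic units, no bias term, zero-mean inputs assumed); the per-sample matrix inverse follows the plain gradient of the log-determinant, whereas practical implementations usually switch to the natural gradient to avoid it.

```python
import numpy as np

def infomax_separate(X, lr=0.01, epochs=50, seed=0):
    """Unmix signals by gradient ascent on output entropy (a sketch).

    X: (n_sources, n_samples) array of zero-mean mixed signals.
    Returns the unmixing matrix W, so that W @ X approximates the sources.
    """
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n)
    for _ in range(epochs):
        for t in rng.permutation(T):
            x = X[:, t]
            y = 1.0 / (1.0 + np.exp(-(W @ x)))   # logistic non-linearity
            # Entropy gradient for logistic units:
            # dH/dW = (W^-1)^T + (1 - 2y) x^T
            W += lr * (np.linalg.inv(W).T + np.outer(1.0 - 2.0 * y, x))
    return W
```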
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 1391 | split: test
1-hop neighbor's text information: A summary of research on parallel genetic algorithms. : IlliGAL Report No. 97003 May 1997
1-hop neighbor's text information: A genetic algorithm for the set partitioning problem. : Genetic algorithms are stochastic search and optimization techniques which can be used for a wide range of applications. This paper addresses the application of genetic algorithms to the graph partitioning problem. Standard genetic algorithms with large populations suffer from lack of efficiency (quite high execution time). A massively parallel genetic algorithm is proposed, an implementation on a SuperNode of Transputers and results of various benchmarks are given. A comparative analysis of our approach with hill-climbing algorithms and simulated annealing is also presented. The experimental measures show that our algorithm gives better results concerning both the quality of the solution and the time needed to reach it.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Target text information: "A genetic algorithm for the assembly line balancing problem", : Genetic algorithms are one example of the use of a random element within an algorithm for combinatorial optimization. We consider the application of the genetic algorithm to a particular problem, the Assembly Line Balancing Problem. A general description of genetic algorithms is given, and their specialized use on our test-bed problems is discussed. We carry out extensive computational testing to find appropriate values for the various parameters associated with this genetic algorithm. These experiments underscore the importance of the correct choice of a scaling parameter and mutation rate to ensure the good performance of a genetic algorithm. We also describe a parallel implementation of the genetic algorithm and give some comparisons between the parallel and serial implementations. Both versions of the algorithm are shown to be effective in producing good solutions for problems of this type (with appropriately chosen parameters).
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2697 | split: test
1-hop neighbor's text information: Toward optimal feature selection. : In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with large numbers of features.
1-hop neighbor's text information: Mining and Model Simplicity: A Case Study in Diagnosis: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD), 1996. The official version of this paper has been published by the American Association for Artificial Intelligence (http://www.aaai.org) © 1996, American Association for Artificial Intelligence. All rights reserved. We describe the results of performing data mining on a challenging medical diagnosis domain, acute abdominal pain. This domain is well known to be difficult, yielding little more than 60% predictive accuracy for most human and machine diagnosticians. Moreover, many researchers argue that one of the simplest approaches, the naive Bayesian classifier, is optimal. By comparing the performance of the naive Bayesian classifier to its more general cousin, the Bayesian network classifier, and to selective Bayesian classifiers with just 10% of the total attributes, we show that the simplest models perform at least as well as the more complex models. We argue that simple models like the selective naive Bayesian classifier will perform as well as more complicated models for similarly complex domains with relatively small data sets, thereby calling into question the extra expense necessary to induce more complex models.
1-hop neighbor's text information: Efficient learning of selective Bayesian network classifiers. : In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic metrics that are extensions of metrics used extensively in decision tree learning, namely Quinlan's gain and gain ratio metrics and Mantaras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.
Target text information: "Induction of selective bayesian classifiers," : In this paper we present a novel induction algorithm for Bayesian networks. This selective Bayesian network classifier selects a subset of attributes that maximizes predictive accuracy prior to the network learning phase, thereby learning Bayesian networks with a bias for small, high-predictive-accuracy networks. We compare the performance of this classifier with selective and non-selective naive Bayesian classifiers. We show that the selective Bayesian network classifier performs significantly better than both versions of the naive Bayesian classifier on almost all databases analyzed, and hence is an enhancement of the naive Bayesian classifier. Relative to the non-selective Bayesian network classifier, our selective Bayesian network classifier generates networks that are computationally simpler to evaluate and that display predictive accuracy comparable to that of Bayesian networks which model all features.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 6 | category: Probabilistic Methods | dataset: cora | node_id: 1611 | split: val
1-hop neighbor's text information: D.M. Chiarulli, On-Line Prediction of Multiprocessor Memory Access Patterns, : Technical Report UMIACS-TR-96-59 and CS-TR-3676, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit. However, these INs are often plagued by undesirable reconfiguration time that is primarily due to control latency, the amount of time delay that the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PU's job is to anticipate and reduce control configuration time, the major component of the control latency. Three different on-line prediction techniques were tested to learn and predict repetitive memory access patterns for three typical parallel processing applications, the 2-D relaxation algorithm, matrix multiply and Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide needed memory access paths before they were requested. Three prediction techniques were used and tested: (1) a Markov predictor, (2) a linear predictor, and (3) a time delay neural network (TDNN) predictor. As expected, different predictors performed best on different applications; however, the TDNN produced the best overall results.
Target text information: Routing in Optical Multistage Interconnection Networks: a Neural Network Solution, : There has been much interest in using optics to implement computer interconnection networks. However, there has been little discussion of any routing methodologies besides those already used in electronics. In this paper, a neural network routing methodology is proposed that can generate control bits for an optical multistage interconnection network (OMIN). Though we present no optical implementation of this methodology, we illustrate its control for an optical interconnection network. These OMINs may be used as communication media for shared memory, distributed computing systems. The routing methodology makes use of an Artificial Neural Network (ANN) that functions as a parallel computer for generating the routes. The neural network routing scheme may be applied to electrical as well as optical interconnection networks. However, since the ANN can be implemented using optics, this routing approach is especially appealing for an optical computing environment. The parallel nature of the ANN computation may make this routing scheme faster than conventional routing approaches, especially for OMINs that are irregular. Furthermore, the neural network routing scheme is fault-tolerant. Results are shown for generating routes in a 16 × 16, 3-stage OMIN.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 900 | split: test
1-hop neighbor's text information: Embedding of a sequential procedure within an evolutionary algorithm for coloring problems in graphs. :
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Maximizing the robustness of a linear threshold classifier with discrete weights. Network: : Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. An interesting property of neural networks used as classifiers is their ability to provide some robustness on input noise. This paper presents efficient learning algorithms for the maximization of the robustness of a Perceptron and especially designed to tackle the combinatorial problem arising from the discrete weights.
Target text information: An evolutionary tabu search algorithm and the NHL scheduling problem, : We present in this paper a new evolutionary procedure for solving general optimization problems that combines efficiently the mechanisms of genetic algorithms and tabu search. In order to explore the solution space properly, interaction phases are interspersed with periods of optimization in the algorithm. An adaptation of this search principle to the National Hockey League (NHL) problem is discussed. The hybrid method developed in this paper is well suited for Open Shop Scheduling problems (OSSP). The results obtained appear to be quite satisfactory.
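For reference, the tabu-search half of such a hybrid can be sketched as below; `neighbours` and `cost` are problem-specific placeholders (for scheduling, a move might swap two games in the calendar), and the tenure and aspiration rule are common defaults, not the paper's.

```python
from collections import deque

def tabu_search(start, neighbours, cost, tenure=7, iters=500):
    """Plain tabu-search skeleton; `neighbours(s)` yields (move, candidate)."""
    current, best = start, start
    best_cost = cost(start)
    tabu = deque(maxlen=tenure)              # short-term memory of recent moves
    for _ in range(iters):
        candidates = [(cost(s), m, s) for m, s in neighbours(current)
                      if m not in tabu or cost(s) < best_cost]  # aspiration
        if not candidates:
            break
        c, m, s = min(candidates, key=lambda t: t[0])
        current = s
        tabu.append(m)
        if c < best_cost:
            best, best_cost = s, c
    return best
```

In the hybrid of the abstract, runs of such local optimization alternate with population-level recombination phases.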
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2033 | split: train
1-hop neighbor's text information: Strategy Adaptation by Competing Subpopulations: The breeder genetic algorithm BGA depends on a set of control parameters and genetic operators. In this paper it is shown that strategy adaptation by competing subpopulations makes the BGA more robust and more efficient. Each subpopulation uses a different strategy which competes with other subpopulations. Numerical results are presented for a number of test functions.
1-hop neighbor's text information: (1991) Global optimization by means of distributed evolution Genetic Algorithms in Engineering and Computer Science Editor J. : Genetic Algorithms (GAs) are powerful heuristic search strategies based upon a simple model of organic evolution. The basic working scheme of GAs as developed by Holland [Hol75] is described within this paper in a formal way, and extensions based upon the second-level learning principle for strategy parameters as introduced in Evolution Strategies (ESs) are proposed. First experimental results concerning this extension of GAs are also reported.
1-hop neighbor's text information: "A Survey of Evolutionary Strategies," :
Target text information: Self-Adaptation in Genetic Algorithms. : In this paper a new approach is presented, which transfers a basic idea from Evolution Strategies (ESs) to GAs. Mutation rates are changed into endogenous items which adapt during the search process. First experimental results are presented, which indicate that environment-dependent self-adaptation of appropriate settings for the mutation rate is possible even for GAs.
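The core mechanism can be shown in a few lines: each individual carries its own mutation rate, and the rate is perturbed before it is applied. The log-normal perturbation below is an ES-inspired choice used for illustration, not necessarily the exact update rule of the paper.

```python
import math
import random

def self_adaptive_mutate(bits, rate, tau=0.2, lo=1e-4, hi=0.5):
    """Mutate a (bit-list, mutation-rate) individual; the rate evolves too."""
    # First mutate the strategy parameter, then use it on the genome.
    rate = min(hi, max(lo, rate * math.exp(tau * random.gauss(0.0, 1.0))))
    bits = [1 - b if random.random() < rate else b for b in bits]
    return bits, rate
```

Because the rate is inherited along with the genome, selection implicitly favours individuals whose mutation rates suit the current state of the search.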
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2043 | split: val
1-hop neighbor's text information: Submitted to the Future Generation Computer Systems special issue on Data Mining. Using Neural Networks: Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural-network methods are not commonly used for data-mining tasks, however, because they often produce incomprehensible models and require long training times. In this article, we describe neural-network learning algorithms that are able to produce comprehensible models, and that do not require excessive training times. Specifically, we discuss two classes of approaches for data mining with neural networks. The first type of approach, often called rule extraction, involves extracting symbolic models from trained neural networks. The second approach is to directly learn simple, easy-to-understand networks. We argue that, given the current state of the art, neural-network methods deserve a place in the tool boxes of data-mining specialists.
Target text information: Extracting Comprehensible Models from Trained Neural Networks. : Although they are applicable to a wide array of problems, and have demonstrated good performance on a number of difficult, real-world tasks, neural networks are not usually applied to problems in which comprehensibility of the acquired concepts is important. The concept representations formed by neural networks are hard to understand because they typically involve distributed, nonlinear relationships encoded by a large number of real-valued parameters. To address this limitation, we have been developing algorithms for extracting "symbolic" concept representations from trained neural networks. We first discuss why it is important to be able to understand the concept representations formed by neural networks. We then briefly describe our approach and discuss a number of issues pertaining to comprehensibility that have arisen in our work. Finally, we discuss choices that we have made in our research to date, and open research issues that we have not yet addressed.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 1598 | split: val
1-hop neighbor's text information: How to retrieve relevant information? : The document presents an approach to judging the relevance of retrieved information based on a novel approach to similarity assessment. Contrary to other systems, we define relevance measures (context in similarity) at query time. This is necessary since without a context in similarity one cannot guarantee that similar items will also be relevant.
1-hop neighbor's text information: Supporting flexibility: a case-based reasoning approach. : The AAAI Fall Symposium; Flexible Computation in Intelligent Systems: Results, Issues, and Opportunities. Nov. 9-11, 1996, Cambridge, MA. This paper presents a case-based reasoning system TA3. We address the flexibility of the case-based reasoning process, namely flexible retrieval of relevant experiences, by using a novel similarity assessment theory. To exemplify the advantages of such an approach, we have experimentally evaluated the system and compared its performance to that of a non-flexible version of TA3 and to other machine learning algorithms on several domains.
1-hop neighbor's text information: Context-based similarity applied to retrieval of relevant cases. : Retrieving relevant cases is a crucial component of case-based reasoning systems. The task is to use a user-defined query to retrieve useful information, i.e., exact matches or partial matches which are close to the query-defined request according to certain measures. The difficulty stems from the fact that it may not be easy (or even impossible) to specify query requests precisely and completely, resulting in a situation known as fuzzy querying. It is usually not a problem for small domains, but for large repositories which store various information (multifunctional information bases or federated databases), request specification becomes a bottleneck. Thus, a flexible retrieval algorithm is required, allowing for imprecise query specification and for changing the viewpoint. Efficient database techniques exist for locating exact matches. Finding relevant partial matches might be a problem. This document proposes context-based similarity as a basis for flexible retrieval. Historical background on research in similarity assessment is presented and is used as a motivation for a formal definition of context-based similarity. We also describe a similarity-based retrieval system for multifunctional information bases.
Target text information: A similarity-based retrieval tool for software repositories. : In this paper we present a prototype of a flexible similarity-based retrieval system. Its flexibility is supported by allowing for an imprecisely specified query. Moreover, our algorithm allows for assessing whether the retrieved items are relevant in the initial context specified in the query. The presented system can be used as a supporting tool for a software repository. We also discuss system evaluation with respect to usefulness, scalability, applicability and comparability. Evaluation of the TA3 system on three domains gives us encouraging results, and an integration of TA3 into a real software repository as a retrieval tool is ongoing.
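As an illustration of query-time contexts, here is a minimal Python sketch of context-weighted retrieval; the attribute names, weights, and the simple exact-match scoring are invented for illustration and are not TA3's actual matching rules:

    def similarity(case, query, context):
        # context maps attribute -> relevance weight, supplied at query time;
        # attributes outside the context do not influence the score
        score, total = 0.0, 0.0
        for attr, weight in context.items():
            if attr in query and attr in case:
                score += weight * (1.0 if case[attr] == query[attr] else 0.0)
                total += weight
        return score / total if total else 0.0

    def retrieve(cases, query, context, k=2):
        return sorted(cases, key=lambda c: similarity(c, query, context),
                      reverse=True)[:k]

    # The same case base ranked under two different contexts.
    cases = [{"lang": "C", "domain": "GUI"}, {"lang": "C", "domain": "math"},
             {"lang": "Ada", "domain": "GUI"}]
    query = {"lang": "C", "domain": "GUI"}
    print(retrieve(cases, query, context={"lang": 1.0, "domain": 0.2}))
    print(retrieve(cases, query, context={"lang": 0.2, "domain": 1.0}))

Changing the context reorders the ranking without changing the case base, which is the flexibility the abstract emphasizes.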
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 2 | category: Case Based | dataset: cora | node_id: 1,356 | split: test |
1-hop neighbor's text information: Mapping Bayesian networks to Boltzmann machines. : We study the task of tnding a maximal a posteriori (MAP) instantiation of Bayesian network variables, given a partial value assignment as an initial constraint. This problem is known to be NP-hard, so we concentrate on a stochastic approximation algorithm, simulated annealing. This stochastic algorithm can be realized as a sequential process on the set of Bayesian network variables, where only one variable is allowed to change at a time. Consequently, the method can become impractically slow as the number of variables increases. We present a method for mapping a given Bayesian network to a massively parallel Bolztmann machine neural network architecture, in the sense that instead of using the normal sequential simulated annealing algorithm, we can use a massively parallel stochastic process on the Boltzmann machine architecture. The neural network updating process provably converges to a state which solves a given MAP task.
Target text information: Constructing computationally efficient Bayesian models via unsupervised clustering. In Probabilistic Reasoning and Bayesian Belief Networks, : Given a set of samples of an unknown probability distribution, we study the problem of constructing a good approximative Bayesian network model of the probability distribution in question. This task can be viewed as a search problem, where the goal is to find a maximal probability network model, given the data. In this work, we do not make an attempt to learn arbitrarily complex multi-connected Bayesian network structures, since such resulting models can be unsuitable for practical purposes due to the exponential amount of time required for the reasoning task. Instead, we restrict ourselves to a special class of simple tree-structured Bayesian networks called Bayesian prototype trees, for which a polynomial time algorithm for Bayesian reasoning exists. We show how the probability of a given Bayesian prototype tree model can be evaluated, given the data, and how this evaluation criterion can be used in a stochastic simulated annealing algorithm for searching the model space. The simulated annealing algorithm provably finds the maximal probability model, provided that a sufficient amount of time is used.
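A generic simulated-annealing skeleton of the kind this search uses, in Python; the score and neighbor functions are placeholders (in the paper, the score would be the posterior probability of a Bayesian prototype tree given the data, and the neighbor move a local edit of the tree structure), and the toy usage is mine:

    import math, random

    def anneal(state, score, neighbor, t0=1.0, cooling=0.995, steps=5000):
        cur, cur_s = state, score(state)
        best, best_s = cur, cur_s
        t = t0
        for _ in range(steps):
            cand = neighbor(cur)
            cand_s = score(cand)
            # always accept improvements; accept worse moves with Boltzmann probability
            if cand_s >= cur_s or random.random() < math.exp((cand_s - cur_s) / t):
                cur, cur_s = cand, cand_s
                if cur_s > best_s:
                    best, best_s = cur, cur_s
            t *= cooling          # geometric cooling schedule
        return best, best_s

    def flip_one_bit(s):
        # stand-in neighbor move: flip one coordinate of a bit vector
        s = list(s)
        i = random.randrange(len(s))
        s[i] = 1 - s[i]
        return s

    # Toy usage: maximize a score over bit vectors (stands in for a tree's log-posterior).
    print(anneal([0] * 8, score=lambda s: sum(s), neighbor=flip_one_bit))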
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 6 | category: Probabilistic Methods | dataset: cora | node_id: 1,288 | split: test |
1-hop neighbor's text information: Constructive belief and rational representation. : It is commonplace in artificial intelligence to divide an agent's explicit beliefs into two parts: the beliefs explicitly represented or manifest in memory, and the implicitly represented or constructive beliefs that are repeatedly reconstructed when needed rather than memorized. Many theories of knowledge view the relation between manifest and constructive beliefs as a logical relation, with the manifest beliefs representing the constructive beliefs through a logic of belief. This view, however, limits the ability of a theory to treat incomplete or inconsistent sets of beliefs in useful ways. We argue that a more illuminating view is that belief is the result of rational representation. In this theory, the agent obtains its constructive beliefs by using its manifest beliefs and preferences to rationally (in the sense of decision theory) choose the most useful conclusions indicated by the manifest beliefs.
1-hop neighbor's text information: Rationality and its Roles in Reasoning (extended version), : The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems.
Target text information: Impediments to Universal Preference-Based Default Theories: Research on nonmonotonic and default reasoning has identified several important criteria for preferring alternative default inferences. The theories of reasoning based on each of these criteria may uniformly be viewed as theories of rational inference, in which the reasoner selects maximally preferred states of belief. Though researchers have noted some cases of apparent conflict between the preferences supported by different theories, it has been hoped that these special theories of reasoning may be combined into a universal logic of nonmonotonic reasoning. We show that the different categories of preferences conflict more than has been realized, and adapt formal results from social choice theory to prove that every universal theory of default reasoning will violate at least one reasonable principle of rational reasoning. Our results can be interpreted as demonstrating that, within the preferential framework, we cannot expect much improvement on the rigid lexicographic priority mechanisms that have been proposed for conflict resolution.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 6 | category: Probabilistic Methods | dataset: cora | node_id: 70 | split: test |
1-hop neighbor's text information: Vytopil. Design Issues Towards PREENS, a Parallel Research Execution Environment for Neural Systems. : PREENS, a Parallel Research Execution Environment for Neural Systems, is a distributed neurosimulator targeted at networks of workstations and transputer systems. As current applications of neural networks often contain large amounts of data, and as the neural networks involved in tasks such as vision are very large, high requirements on memory and computational resources are imposed on the target execution platforms. PREENS can be executed in a distributed environment, i.e. tools and neural network simulation programs can be running on any machine connectable via TCP/IP. Using this approach, larger tasks and more data can be examined using efficient coarse-grained parallelism. Furthermore, the design of PREENS allows for neural networks to be running on any high-performance MIMD machine, such as a transputer system. In this paper, the different features and design concepts of PREENS are discussed. These can also be used for other applications, like image processing.
1-hop neighbor's text information: Vuurpijl and Th.E. Schouten. A Scalable Performance Prediction Model for Parallel Neural Network Simulations. : A performance prediction method is presented for indicating the performance range of MIMD parallel processor systems for neural network simulations. The total execution time of a parallel application is modeled as the sum of its calculation and communication times. The method is scalable because based on the times measured on one processor and one communication link, the performance, speedup, and efficiency can be predicted for a larger processor system. It is validated quantitatively by applying it to two popular neural networks, backpropagation and the Kohonen self-organizing feature map, decomposed on a GCel-512, a 512 transputer system. Agreement of the model with the measurements is within 9%.
1-hop neighbor's text information: Rochester Connectionist Simulator. : Specifying, constructing and simulating structured connectionist networks requires significant programming effort. System tools can greatly reduce the effort required, and by providing a conceptual structure within which to work, make large and complex network simulations possible. The Rochester Connectionist Simulator is a system tool designed to aid specification, construction and simulation of connectionist networks. This report describes this tool in detail: the facilities provided and how to use them, as well as details of the implementation. Through this we hope not only to make designing and verifying connectionist networks easier, but also to encourage the development and refinement of connectionist research tools themselves.
Target text information: CONVIS: Action Oriented Control and Visualization of Neural Networks Introduction and Technical Description:
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 1 | category: Neural Networks | dataset: cora | node_id: 239 | split: test |
1-hop neighbor's text information: (1994) "Evaluation of Pattern Classifiers for Fingerprint and OCR Applications," : In this paper we evaluate the classification accuracy of four statistical and three neural network classifiers for two image based pattern classification problems. These are fingerprint classification and optical character recognition (OCR) for isolated handprinted digits. The evaluation results reported here should be useful for designers of practical systems for these two important commercial applications. For the OCR problem, the Karhunen-Loeve (K-L) transform of the images is used to generate the input feature set. Similarly for the fingerprint problem, the K-L transform of the ridge directions is used to generate the input feature set. The statistical classifiers used were Euclidean minimum distance, quadratic minimum distance, normal, and k-nearest neighbor. The neural network classifiers used were multilayer perceptron, radial basis function, and probabilistic. The OCR data consisted of 7,480 digit images for training and 23,140 digit images for testing. The fingerprint data consisted of 2,000 training and 2,000 testing images. In addition to evaluation for accuracy, the multilayer perceptron and radial basis function networks were evaluated for size and generalization capability. For the evaluated datasets the best accuracy obtained for either problem was provided by the probabilistic neural network, where the minimum classification error was 2.5% for OCR and 7.2% for fingerprints.
1-hop neighbor's text information: Human Face Detection in Visual Scenes. : We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrap algorithm for training, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to span the entire space of non-face images. Comparisons with another state-of-the-art face detection system are presented; our system has better performance in terms of detection and false-positive rates.
1-hop neighbor's text information: From data distributions to regularization in invariant learning. : Ideally pattern recognition machines provide constant output when the inputs are transformed under a group G of desired invariances. These invariances can be achieved by enhancing the training data to include examples of inputs transformed by elements of G, while leaving the corresponding targets unchanged. Alternatively the cost function for training can include a regularization term that penalizes changes in the output when the input is transformed under the group. This paper relates the two approaches, showing precisely the sense in which the regularized cost function approximates the result of adding transformed (or distorted) examples to the training data. The cost function for the enhanced training set is equivalent to the sum of the original cost function plus a regularizer. For unbiased models, the regularizer reduces to the intuitively obvious choice - a term that penalizes changes in the output when the inputs are transformed under the group. For infinitesimal transformations, the coefficient of the regularization term reduces to the variance of the distortions introduced into the training data. This correspondence provides a simple bridge between the two approaches.
Target text information: Back, "Face recognition: a convolutional neural network approach," : Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult [42]. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the self-organizing map, and a multi-layer perceptron in place of the convolutional network. The Karhunen-Loeve transform performs almost as well (5.3% error versus 3.8%). The multi-layer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach [42] on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer.
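As a concrete illustration of the quantization stage, here is a minimal 1-D self-organizing map in Python; the map size, learning rates, and the random stand-in "patches" are illustrative choices, not the paper's settings:

    import numpy as np

    def train_som(samples, n_units=16, epochs=20, lr0=0.5, radius0=4.0):
        rng = np.random.default_rng(0)
        w = rng.normal(size=(n_units, samples.shape[1]))   # codebook vectors
        for e in range(epochs):
            lr = lr0 * (1 - e / epochs)                    # decaying learning rate
            radius = max(radius0 * (1 - e / epochs), 0.5)  # shrinking neighborhood
            for x in samples:
                bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best matching unit
                d = np.abs(np.arange(n_units) - bmu)          # grid distance to BMU
                h = np.exp(-(d ** 2) / (2 * radius ** 2))     # neighborhood kernel
                w += lr * h[:, None] * (x - w)                # pull units toward x
        return w

    # Usage: quantize 200 random flattened 5x5 "patches" into 16 codebook units.
    patches = np.random.default_rng(1).random((200, 25))
    codebook = train_som(patches)
    print(codebook.shape)   # (16, 25)

Nearby units end up with similar codebook vectors, which is the topological property the paper exploits for invariance to small changes in the image sample.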
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 1 | category: Neural Networks | dataset: cora | node_id: 2,639 | split: val |
1-hop neighbor's text information: Learning when reformulation is appropriate for iterative design. : It is well known that search-space reformulation can improve the speed and reliability of numerical optimization in engineering design. We argue that the best choice of reformulation depends on the design goal, and present a technique for automatically constructing rules that map the design goal into a reformulation chosen from a space of possible reformulations. We tested our technique in the domain of racing-yacht-hull design, where each reformulation corresponds to incorporating constraints into the search space. We applied a standard inductive-learning algorithm, C4.5, to a set of training data describing which constraints are active in the optimal design for each goal encountered in a previous design session. We then used these rules to choose an appropriate reformulation for each of a set of test cases. Our experimental results show that using these reformulations improves both the speed and the reliability of design optimization, outperforming competing methods and approaching the best performance possible.
1-hop neighbor's text information: Intelligent model selection for hillclimbing search in computer-aided design. : Models of physical systems can differ according to computational cost, accuracy and precision, among other things. Depending on the problem solving task at hand, different models will be appropriate. Several investigators have recently developed methods of automatically selecting among multiple models of physical systems. Our research is novel in that we are developing model selection techniques specifically suited to computer-aided design. Our approach is based on the idea that artifact performance models for computer-aided design should be chosen in light of the design decisions they are required to support. We have developed a technique called "Gradient Magnitude Model Selection" (GMMS), which embodies this principle. GMMS operates in the context of a hillclimbing search process. It selects the simplest model that meets the needs of the hillclimbing algorithm in which it operates. We are using the domain of sailing yacht design as a testbed for this research. We have implemented GMMS and used it in hillclimbing search to decide between a computationally expensive potential-flow program and an algebraic approximation to analyze the performance of sailing yachts. Experimental tests show that GMMS makes the design process faster than it would be if the most expensive model were used for all design evaluations. GMMS achieves this performance improvement with little or no sacrifice in the quality of the resulting design.
Target text information: A Transformation System for Interactive Reformulation of Design Optimization Strategies: Numerical design optimization algorithms are highly sensitive to the particular formulation of the optimization problems they are given. The formulation of the search space, the objective function and the constraints will generally have a large impact on the duration of the optimization process as well as the quality of the resulting design. Furthermore, the best formulation will vary from one application domain to another, and from one problem to another within a given application domain. Unfortunately, a design engineer may not know the best formulation in advance of attempting to set up and run a design optimization process. In order to attack this problem, we have developed a software environment that supports interactive formulation, testing and reformulation of design optimization strategies. Our system represents optimization strategies in terms of second-order dataflow graphs. Reformulations of strategies are implemented as transformations between dataflow graphs. The system permits the user to interactively generate and search a space of design optimization strategies, and experimentally evaluate their performance on test problems, in order to find a strategy that is suitable for his application domain. The system has been implemented in a domain independent fashion, and is being tested in the domain of racing yacht design.
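A toy Python sketch of the core idea of testing alternative formulations: one hill climber, two formulations of an invented objective, where a simple variable rescaling stands in for a richer reformulation such as constraint incorporation (everything below is illustrative, not the system's actual dataflow-graph machinery):

    import random

    def hillclimb(x, evaluate, step, iters=200):
        fx, evals = evaluate(x), 1
        for _ in range(iters):
            cand = step(x)
            fc = evaluate(cand); evals += 1
            if fc < fx:                 # keep strictly improving moves
                x, fx = cand, fc
        return fx, evals

    # Invented objective with badly mismatched variable scales.
    objective = lambda p: (p[0] - 3) ** 2 + 100 * (p[1] - 0.01) ** 2

    # Formulation A: steps in the raw space; formulation B: steps in a rescaled space.
    step_a = lambda p: [v + random.gauss(0, 0.1) for v in p]
    scales = [1.0, 0.01]
    step_b = lambda p: [v + s * random.gauss(0, 0.1) for v, s in zip(p, scales)]

    print(hillclimb([0.0, 0.0], objective, step_a))   # typically worse final cost
    print(hillclimb([0.0, 0.0], objective, step_b))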
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 298 | split: test |
1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. :
1-hop neighbor's text information: Learning Classification Trees: Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. This paper outlines how a tree learning algorithm can be derived using Bayesian statistics. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule is similar to Quinlan's information gain, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4 (Quinlan et al., 1987) and Breiman et al.'s CART (Breiman et al., 1984) show the full Bayesian algorithm can produce more accurate predictions than versions of these other approaches, though pay a computational price. (Publication note: this paper is a final draft submitted for publication to the Statistics and Computing journal; a version with some minor changes appeared in Volume 2, 1992, pages 63-73.)
1-hop neighbor's text information: R.S., & Whitehead, S.D. (1993). Online learning with random representations. : We consider the requirements of online learning: learning which must be done incrementally and in real time, with the results of learning available soon after each new example is acquired. Despite the abundance of methods for learning from examples, there are few that can be used effectively for online learning, e.g., as components of reinforcement learning systems. Most of these few, including radial basis functions, CMACs, Kohonen's self-organizing maps, and those developed in this paper, share the same structure. All expand the original input representation into a higher dimensional representation in an unsupervised way, and then map that representation to the final answer using a relatively simple supervised learner, such as a perceptron or LMS rule. Such structures learn very rapidly and reliably, but have been thought either to scale poorly or to require extensive domain knowledge. To the contrary, some researchers (Rosenblatt, 1962; Gallant & Smith, 1987; Kanerva, 1988; Prager & Fallside, 1988) have argued that the expanded representation can be chosen largely at random with good results. The main contribution of this paper is to develop and test this hypothesis. We show that simple random-representation methods can perform as well as nearest-neighbor methods (while being more suited to online learning), and significantly better than backpropagation. We find that the size of the random representation does increase with the dimensionality of the problem, but not unreasonably so, and that the required size can be reduced substantially using unsupervised-learning techniques. Our results suggest that randomness has a useful role to play in online supervised learning and constructive induction.
Target text information: Incremental induction of decision trees. : Technical Report 94-07, February 7, 1994 (updated April 25, 1994). This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract: This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree.
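For flavor, here is a standard information-gain chooser for a numeric split threshold in Python; this is the kind of test an inducer installs at a node and may later revise, though the paper's incremental `slewing' machinery is not reproduced:

    import math

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n)
                    for c in (labels.count(l) for l in set(labels)) if c)

    def best_numeric_split(values, labels):
        pairs = sorted(zip(values, labels))
        base = entropy(labels)
        best = (0.0, None)          # (gain, threshold)
        for i in range(1, len(pairs)):
            if pairs[i - 1][0] == pairs[i][0]:
                continue            # no threshold between identical values
            thr = (pairs[i - 1][0] + pairs[i][0]) / 2
            left = [l for v, l in pairs[:i]]
            right = [l for v, l in pairs[i:]]
            gain = base - (len(left) * entropy(left) +
                           len(right) * entropy(right)) / len(labels)
            if gain > best[0]:
                best = (gain, thr)
        return best

    print(best_numeric_split([1.0, 2.0, 4.5, 5.0], ["a", "a", "b", "b"]))
    # -> (1.0, 3.25): a clean split with maximal gain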
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 2 | category: Case Based | dataset: cora | node_id: 56 | split: val |
1-hop neighbor's text information: Soft Computing: the Convergence of Emerging Reasoning Technologies: The term Soft Computing (SC) represents the combination of emerging problem-solving technologies such as Fuzzy Logic (FL), Probabilistic Reasoning (PR), Neural Networks (NNs), and Genetic Algorithms (GAs). Each of these technologies provide us with complementary reasoning and searching methods to solve complex, real-world problems. After a brief description of each of these technologies, we will analyze some of their most useful combinations, such as the use of FL to control GAs and NNs parameters; the application of GAs to evolve NNs (topologies or weights) or to tune FL controllers; and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms.
Target text information: Genetic algorithms for automated tuning of fuzzy controllers: A transportation application. : We describe the design and tuning of a controller for enforcing compliance with a prescribed velocity profile for a rail-based transportation system. This requires following a trajectory, rather than fixed set-points (as in automobiles). We synthesize a fuzzy controller for tracking the velocity profile, while providing a smooth ride and staying within the prescribed speed limits. We use a genetic algorithm to tune the fuzzy controller's performance by adjusting its parameters (the scaling factors and the membership functions) in a sequential order of significance. We show that this approach results in a controller that is superior to the manually designed one, and with only modest computational effort. This makes it possible to customize automated tuning to a variety of different configurations of the route, the terrain, the power configuration, and the cargo.
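A minimal sketch in Python of a GA tuning controller parameters against a simulated tracking cost; the plant dynamics, controller law, cost, and parameter bounds are invented stand-ins for the paper's rail-vehicle setting:

    import random

    def tracking_cost(gains):
        kp, kd = gains
        v, cost = 0.0, 0.0
        for t in range(200):
            target = 1.0 if t < 100 else 0.5     # step-like velocity profile
            u = kp * (target - v) - kd * v       # simple controller law
            v += 0.05 * (u - 0.1 * v)            # crude plant dynamics
            cost += (target - v) ** 2            # squared tracking error
        return cost

    def ga(fitness, bounds, pop_size=20, gens=40, sigma=0.1):
        pop = [[random.uniform(*b) for b in bounds] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)
            parents = pop[:pop_size // 2]        # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                children.append([(x + y) / 2 + random.gauss(0, sigma)  # crossover + mutation
                                 for x, y in zip(a, b)])
            pop = parents + children
        return min(pop, key=fitness)

    best = ga(tracking_cost, bounds=[(0.0, 5.0), (0.0, 2.0)])
    print(best, tracking_cost(best))

The paper tunes scaling factors and membership functions of a fuzzy controller in a sequential order of significance; the sketch collapses this to two real-valued gains to keep the loop visible.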
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 1,450 | split: val |
1-hop neighbor's text information: Learning to reason. : We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. In this framework the intelligent agent is given access to its favorite learning interface, and is also given a grace period in which it can interact with this interface and construct a representation KB of the world W. The reasoning performance is measured only after this period, when the agent is presented with queries α from some query language, relevant to the world, and has to answer whether W implies α. The approach is meant to overcome the main computational difficulties in the traditional treatment of reasoning which stem from its separation from the "world". Since the agent interacts with the world when constructing its knowledge representation it can choose a representation that is useful for the task at hand. Moreover, we can now make explicit the dependence of the reasoning performance on the environment the agent interacts with. We show how previous results from learning theory and reasoning fit into this framework and illustrate the usefulness of the Learning to Reason approach by exhibiting new results that are not possible in the traditional setting. First, we give Learning to Reason algorithms for classes of propositional languages for which there are no efficient reasoning algorithms, when represented as a traditional (formula-based) knowledge base. Second, we exhibit a Learning to Reason algorithm for a class of propositional languages that is not known to be learnable in the traditional sense. An earlier version of the paper appears in the Proceedings of the National Conference on Artificial Intelligence, AAAI-94.
1-hop neighbor's text information: Learning default concepts. : Classical concepts, based on necessary and sufficient defining conditions, cannot classify logically insufficient object descriptions. Many reasoning systems avoid this limitation by using "default concepts" to classify incompletely described objects. This paper addresses the task of learning such default concepts from observational data. We first model the underlying performance task (classifying incomplete examples) as a probabilistic process that passes random test examples through a "blocker" that can hide object attributes from the classifier. We then address the task of learning accurate default concepts from random training examples. After surveying the learning techniques that have been proposed for this task in the machine learning and knowledge representation literatures, and investigating their relative merits, we present a more data-efficient learning technique, developed from well-known statistical principles. Finally, we extend Valiant's pac-learning framework to this context and obtain a number of useful learnability results. Appears in the Proceedings of the Tenth Canadian Conference on Artificial Intelligence (CSCSI-94).
1-hop neighbor's text information: Exploiting the omission of irrelevant data. : Most learning algorithms work most effectively when their training data contain completely specified labeled samples. In many diagnostic tasks, however, the data will include the values of only some of the attributes; we model this as a blocking process that hides the values of those attributes from the learner. While blockers that remove the values of critical attributes can handicap a learner, this paper instead focuses on blockers that remove only irrelevant attribute values, i.e., values that are not needed to classify an instance, given the values of the other unblocked attributes. We first motivate and formalize this model of "superfluous-value blocking", and then demonstrate that these omissions can be useful, by proving that certain classes that seem hard to learn in the general PAC model (viz., decision trees and DNF formulae) are trivial to learn in this setting. We also show that this model can be extended to deal with (1) theory revision (i.e., modifying an existing formula); (2) blockers that occasionally include superfluous values or exclude required values; and (3) other corruptions of the training data.
Target text information: Learning active classifiers. : Many classification algorithms are "passive", in that they assign a class-label to each instance based only on the description given, even if that description is incomplete. In contrast, an active classifier can (at some cost) obtain the values of missing attributes, before deciding upon a class label. The expected utility of using an active classifier depends on both the cost required to obtain the additional attribute values and the penalty incurred if it outputs the wrong classification. This paper considers the problem of learning near-optimal active classifiers, using a variant of the probably-approximately-correct (PAC) model. After defining the framework (which is perhaps the main contribution of this paper) we describe a situation where this task can be achieved efficiently, but then show that the task is often intractable.
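The expected-utility trade-off the abstract describes can be sketched as a value-of-information test; the probabilities, cost, and penalty below are illustrative:

    # Buy an attribute value only if its expected reduction in
    # misclassification penalty exceeds its cost.

    def expected_penalty(p_class1, penalty):
        # classify optimally given the current belief; we risk the smaller side
        return penalty * min(p_class1, 1 - p_class1)

    def should_buy(p_class1, cost, penalty, p_attr_true, p1_if_true, p1_if_false):
        now = expected_penalty(p_class1, penalty)
        # expected penalty after observing the attribute, averaged over outcomes
        after = (p_attr_true * expected_penalty(p1_if_true, penalty) +
                 (1 - p_attr_true) * expected_penalty(p1_if_false, penalty))
        return after + cost < now

    # With a 0.6/0.4 belief, a high error penalty, and an informative cheap test:
    print(should_buy(p_class1=0.6, cost=1.0, penalty=20.0,
                     p_attr_true=0.5, p1_if_true=0.95, p1_if_false=0.25))  # True

Here the test drops the expected penalty from 8.0 to 3.0, so paying 1.0 for it is worthwhile; with a low penalty or an uninformative attribute the same computation returns False.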
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 4 | category: Theory | dataset: cora | node_id: 2,595 | split: test |
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Genetic programming estimates of Kolmogorov complexity. : In this paper the problem of the Kolmogorov complexity related to binary strings is faced. We propose a Genetic Programming approach which consists in evolving a population of Lisp programs looking for the optimal program that generates a given string. This evolutionary approach has permitted us to overcome the intractable space and time difficulties occurring in methods which perform an approximation of the Kolmogorov complexity function. The experimental results are quite significant and also show interesting computational strategies, thus proving the effectiveness of the implemented technique.
Target text information: Genetic programming for pedestrians. : We propose an extension to the Genetic Programming paradigm which allows users of traditional Genetic Algorithms to evolve computer programs. To this end, we have to introduce mechanisms like transcription, editing and repairing into Genetic Programming. We demonstrate the feasibility of the approach by using it to develop programs for the prediction of sequences of integer numbers.
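For flavor, a toy genetic-programming loop in Python that evolves arithmetic expressions in n to match an integer sequence; the representation and operators are simplified stand-ins (no transcription/editing/repair mechanisms, no bloat control):

    import random

    OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

    def rand_expr(depth=3):
        # an expression is "n", an integer constant, or (op, left, right)
        if depth == 0 or random.random() < 0.3:
            return random.choice(["n", random.randint(1, 3)])
        return (random.choice(list(OPS)), rand_expr(depth - 1), rand_expr(depth - 1))

    def evaluate(e, n):
        if e == "n":
            return n
        if isinstance(e, int):
            return e
        op, a, b = e
        return OPS[op](evaluate(a, n), evaluate(b, n))

    def fitness(e, seq):
        return sum((evaluate(e, n) - y) ** 2 for n, y in enumerate(seq))

    def mutate(e):
        # replace a random subtree, or descend into one argument
        if not isinstance(e, tuple) or random.random() < 0.3:
            return rand_expr()
        if random.random() < 0.5:
            return (e[0], mutate(e[1]), e[2])
        return (e[0], e[1], mutate(e[2]))

    target = [n * n + 1 for n in range(8)]      # sequence 1, 2, 5, 10, 17, ...
    pop = [rand_expr() for _ in range(200)]
    for _ in range(60):
        pop.sort(key=lambda e: fitness(e, target))
        pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
    pop.sort(key=lambda e: fitness(e, target))
    print(pop[0], fitness(pop[0], target))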
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2,032 | split: val |
1-hop neighbor's text information: Trading spaces: computation, representation and the limits of learning. : Research on this paper was partly supported by a Senior Research Leave fellowship granted by the Joint Council (SERC/MRC/ESRC) Cognitive Science Human Computer Interaction Initiative to one of the authors (Clark). Thanks to the Initiative for that support. The order of names is arbitrary.
1-hop neighbor's text information: Is Transfer Inductive?: Work is currently underway to devise learning methods which are better able to transfer knowledge from one task to another. The process of knowledge transfer is usually viewed as logically separate from the inductive procedures of ordinary learning. However, this paper argues that this `separatist' view leads to a number of conceptual difficulties. It offers a task analysis which situates the transfer process inside a generalised inductive protocol. It argues that transfer should be viewed as a subprocess within induction and not as an independent procedure for transporting knowledge between learning trials.
Target text information: Statistical biases in backpropagation learning. : The paper investigates the statistical effects which may need to be exploited in supervised learning. It notes that these effects can be classified according to their conditionality and their order and proposes that learning algorithms will typically have some form of bias towards particular classes of effect. It presents the results of an empirical study of the statistical bias of backpropagation. The study involved applying the algorithm to a wide range of learning problems using a variety of different internal architectures. The results of the study revealed that backpropagation has a very specific bias in the general direction of statistical rather than relational effects. The paper shows how the existence of this bias effectively constitutes a weakness in the algorithm's ability to discount noise.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 1 | category: Neural Networks | dataset: cora | node_id: 1,490 | split: test |
1-hop neighbor's text information: Characterizing rational versus exponential learning curves. : We consider the standard problem of learning a concept from random examples. Here a learning curve can be defined to be the expected error of a learner's hypotheses as a function of training sample size. Haussler, Littlestone and Warmuth have shown that, in the distribution free setting, the smallest expected error a learner can achieve in the worst case over a concept class C converges rationally to zero error (i.e., Θ(1/t) for training sample size t). However, recently Cohn and Tesauro have demonstrated how exponential convergence can often be observed in experimental settings (i.e., average error decreasing as e^(-Θ(t))). By addressing a simple non-uniformity in the original analysis, this paper shows how the dichotomy between rational and exponential worst case learning curves can be recovered in the distribution free theory. These results support the experimental findings of Cohn and Tesauro: for finite concept classes, any consistent learner achieves exponential convergence, even in the worst case; but for continuous concept classes, no learner can exhibit sub-rational convergence for every target concept and domain distribution. A precise boundary between rational and exponential convergence is drawn for simple concept chains. Here we show that somewhere dense chains always force rational convergence in the worst case, but exponential convergence can always be achieved for nowhere dense chains.
1-hop neighbor's text information: On genetic algorithms. : We analyze the performance of a Genetic Algorithm (GA) we call Culling and a variety of other algorithms on a problem we refer to as Additive Search Problem (ASP). ASP is closely related to several previously well studied problems, such as the game of Mastermind and additive fitness functions. We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Culling is efficient on ASP, highly noise tolerant, and the best known approach in some regimes. Noisy ASP is the first problem we are aware of where a Genetic Type Algorithm bests all known competitors. Standard GA's, by contrast, perform much more poorly on ASP than hillclimbing and other approaches even though the Schema theorem holds for ASP. We generalize ASP to k-ASP to study whether GA's will achieve `implicit parallelism' in a problem with many more schemata. GA's fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a Mean Field Theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GA's can beat competing methods.
1-hop neighbor's text information: Self bounding learning algorithms: Most of the work which attempts to give bounds on the generalization error of the hypothesis generated by a learning algorithm is based on methods from the theory of uniform convergence. These bounds are a-priori bounds that hold for any distribution of examples and are calculated before any data is observed. In this paper we propose a different approach for bounding the generalization error after the data has been observed. A self-bounding learning algorithm is an algorithm which, in addition to the hypothesis that it outputs, outputs a reliable upper bound on the generalization error of this hypothesis. We first explore the idea in the statistical query learning framework of Kearns [10]. After that we give an explicit self bounding algorithm for learning algorithms that are based on local search.
Target text information: Rigorous learning curve bounds from statistical mechanics. : In this paper we introduce and investigate a mathematically rigorous theory of learning curves that is based on ideas from statistical mechanics. The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior (functional form) of learning curves. This behavior can often exhibit dramatic properties such as phase transitions, as well as power law asymptotics not explained by the VC theory. The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes. We illustrate our results with many concrete examples of learning curve bounds derived from our theory.
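The dichotomy both this abstract and the first neighbor describe can be written compactly in LaTeX (notation added here, with \bar{\epsilon}(t) the expected error at training sample size t):

    \bar{\epsilon}(t) = \Theta\!\left(\tfrac{1}{t}\right)
    \quad \text{(rational decay: worst case over continuous classes)}
    \qquad \text{vs.} \qquad
    \bar{\epsilon}(t) = e^{-\Theta(t)}
    \quad \text{(exponential decay: finite classes, consistent learners)}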
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 4 | category: Theory | dataset: cora | node_id: 2,619 | split: test |
1-hop neighbor's text information: Extended selection mechanisms in genetic algorithms. :
1-hop neighbor's text information: (1992) Genetic self-learning. : Evolutionary Algorithms are direct random search algorithms which imitate the principles of natural evolution as a method to solve adaptation (learning) tasks in general. As such they have several features in common which can be observed on the genetic and phenotypic level of living species. In this paper the algorithms' capability of adaptation or learning in a wider sense is demonstrated, focusing on Genetic Algorithms to illustrate the learning process on the population level (first-level learning), and on Evolution Strategies to demonstrate the learning process on the meta-level of strategy parameters (second-level learning).
1-hop neighbor's text information: Self-Adaption in Genetic Algorithms. : In this paper a new approach is presented, which transfers a basic idea from Evolution Strategies (ESs) to GAs. Mutation rates are changed into endogenous items which are adapting during the search process. First experimental results are presented, which indicate that environment-dependent self-adaptation of appropriate settings for the mutation rate is possible even for GAs.
Target text information: (1991) Global optimization by means of distributed evolution Genetic Algorithms in Engineering and Computer Science Editor J. : Genetic Algorithms (GAs) are powerful heuristic search strategies based upon a simple model of organic evolution. The basic working scheme of GAs as developed by Holland [Hol75] is described within this paper in a formal way, and extensions based upon the second-level learning principle for strategy parameters as introduced in Evolution Strategies (ESs) are proposed. First experimental results concerning this extension of GAs are also reported.
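A minimal Python sketch of the second-level learning idea: each individual carries its own mutation rate, which is inherited and perturbed along with the solution, so good rates spread with good solutions. The lognormal rate update and all constants are illustrative ES-style choices, not the paper's exact scheme:

    import math, random

    def self_adaptive_ga(fitness, n_bits=30, pop_size=30, gens=100):
        # an individual = (bit vector, its own mutation rate)
        pop = [([random.randint(0, 1) for _ in range(n_bits)], 1.0 / n_bits)
               for _ in range(pop_size)]
        for _ in range(gens):
            offspring = []
            for bits, rate in pop:
                # mutate the strategy parameter first (lognormal step)...
                new_rate = min(0.5, max(1e-3,
                               rate * math.exp(0.2 * random.gauss(0, 1))))
                # ...then mutate the object variables under the new rate
                new_bits = [1 - b if random.random() < new_rate else b
                            for b in bits]
                offspring.append((new_bits, new_rate))
            # (mu + lambda)-style survival of the fittest
            pop = sorted(pop + offspring, key=lambda ind: -fitness(ind[0]))[:pop_size]
        return pop[0]

    best_bits, best_rate = self_adaptive_ga(lambda bits: sum(bits))  # OneMax
    print(sum(best_bits), round(best_rate, 4))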
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2,030 | split: train |
1-hop neighbor's text information: Meter as Mechanism: A Neural Network that Learns Metrical Patterns: One kind of prosodic structure that apparently underlies both music and some examples of speech production is meter. Yet detailed measurements of the timing of both music and speech show that the nested periodicities that define metrical structure can be quite noisy in time. What kind of system could produce or perceive such variable metrical timing patterns? And what would it take to be able to store and reproduce particular metrical patterns from long-term memory? We have developed a network of coupled oscillators that both produces and perceives patterns of pulses that conform to particular meters. In addition, beginning with an initial state with no biases, it can learn to prefer the particular meter that it has been previously exposed to. Meter is an abstract structure in time based on the periodic recurrence of pulses, that is, on equal time intervals between distinct phase zeros. From this point of view, the simplest meter is a regular metronome pulse. But often there appear meters with two or three (or rarely even more) nested periodicities with integral frequency ratios. A hierarchy of such metrical structures is implied in standard Western musical notation, where different levels of the metrical hierarchy are indicated by kinds of notes (quarter notes, half notes, etc.) and by the bars separating measures with an equal number of beats. For example, in a basic waltz-time meter, there are individual beats, all with the same spacing, grouped into sets of three, with every third one receiving a stronger accent at its onset. In this meter there is a hierarchy consisting of both a faster periodic cycle (at the beat level) and a slower one (at the measure level) that is 1/3 as fast, with its onset (or zero phase angle) coinciding with the zero phase angle of every third beat. This essentially temporal view of meter contrasts with the traditional symbol-string theories (such as Hayes, 1981 for speech and Lerdahl and Jackendoff, 1983 for music). Metrical systems, however they are defined, seem to underlie most of what we call music. Indeed, an expanded version of European musical notation is found to be practical for transcribing most music from around the world. That is, most forms of music employ nested periodic temporal patterns (Titon, Fujie, & Locke, 1996). Musical notation has
Target text information: Synchronization and desynchronization in a network of locally coupled Wilson-Cowan oscillators, : A network of Wilson-Cowan oscillators is constructed, and its emergent properties of synchronization and desynchronization are investigated by both computer simulation and formal analysis. The network is a two-dimensional matrix, where each oscillator is coupled only to its neighbors. We show analytically that a chain of locally coupled oscillators (the piece-wise linear approximation to the Wilson-Cowan oscillator) synchronizes, and present a technique to rapidly entrain finite numbers of oscillators. The coupling strengths change on a fast time scale based on a Hebbian rule. A global separator is introduced which receives input from and sends feedback to each oscillator in the matrix. The global separator is used to desynchronize different oscillator groups. Unlike many other models, the properties of this network emerge from local connections, that preserve spatial relationships among components, and are critical for encoding Gestalt principles of feature grouping. The ability to synchronize and desynchronize oscillator groups within this network offers a promising approach for pattern segmentation and figure/ground segregation based on oscillatory correlation.
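A rough Python sketch of Euler-integrating two coupled excitatory/inhibitory pairs; the sigmoid gains, offsets, and coupling strength are illustrative values chosen to be plausibly oscillatory, not the paper's parameters:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def simulate(steps=3000, dt=0.05, coupling=0.4):
        x = np.array([0.1, 0.0])   # excitatory activities of the two oscillators
        y = np.array([0.0, 0.1])   # inhibitory activities
        trace = []
        for _ in range(steps):
            drive = coupling * x[::-1]                       # neighbor excitation
            dx = -x + sigmoid(16 * x - 12 * y - 3 + drive)   # excitatory dynamics
            dy = -y + sigmoid(15 * x - 3 * y - 7)            # inhibitory dynamics
            x, y = x + dt * dx, y + dt * dy
            trace.append(x.copy())
        return np.array(trace)

    t = simulate()
    # report the mean gap between the two excitatory traces at the end of the run;
    # a small value indicates the pair has entrained
    print(np.abs(t[-200:, 0] - t[-200:, 1]).mean())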
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| label: 1 | category: Neural Networks | dataset: cora | node_id: 1,117 | split: test |
1-hop neighbor's text information: Coordination and Control Structures and Processes: Possibilities for Connectionist Networks. : The absence of powerful control structures and processes that synchronize, coordinate, switch between, choose among, regulate, direct, modulate interactions between, and combine distinct yet interdependent modules of large connectionist networks (CN) is probably one of the most important reasons why such networks have not yet succeeded at handling difficult tasks (e.g. complex object recognition and description, complex problem-solving, planning). In this paper we examine how CN built from large numbers of relatively simple neuron-like units can be given the ability to handle problems that in typical multi-computer networks and artificial intelligence programs along with all other types of programs are always handled using extremely elaborate and precisely worked out central control (coordination, synchronization, switching, etc.). We point out the several mechanisms for central control of this un-brain-like sort that CN already have built into them albeit in hidden, often overlooked, ways. We examine the kinds of control mechanisms found in computers, programs, fetal development, cellular function and the immune system, evolution, social organizations, and especially brains, that might be of use in CN. Particularly intriguing suggestions are found in the pacemakers, oscillators, and other local sources of the brain's complex partial synchronies; the diffuse, global effects of slow electrical waves and neurohormones; the developmental program that guides fetal development; communication and coordination within and among living cells; the working of the immune system; the evolutionary processes that operate on large populations of organisms; and the great variety of partially competing partially cooperating controls found in small groups, organizations, and larger societies. All these systems are rich in control but typically control that emerges from complex interactions of many local and diffuse sources. We explore how several different kinds of plausible control mechanisms might be incorporated into CN, and assess their potential benefits with respect to their cost.
1-hop neighbor's text information: A simple randomized quantization algorithm for neural network pattern classifiers. : This paper explores some algorithms for automatic quantization of real-valued datasets using thermometer codes for pattern classification applications. Experimental results indicate that a relatively simple randomized thermometer code generation technique can result in quantized datasets that when used to train simple perceptrons, can yield generalization on test data that is substantially better than that obtained with their unquantized counterparts.
1-hop neighbor's text information: Generative Learning Structures for Generalized Connectionist Networks. : Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative, for they offer a set of mechanisms for constructive and adaptive determination of the network architecture (the number of processing elements and the connectivity among them) as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of some approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology (e.g., rather slow learning and the need for an a priori choice of a network architecture). Several alternative designs as well as a range of control structures and processes which can be used to regulate the form and content of internal representations learned by such networks are examined. Empirical results from the study of some generative learning algorithms are briefly summarized and several extensions and refinements of such algorithms, and directions for future research are outlined.
Target text information: Analysis of decision boundaries generated by constructive neural network learning algorithms. : Constructive learning algorithms offer an approach to incremental construction of near-minimal artificial neural networks for pattern classification. Examples of such algorithms include Tower, Pyramid, Upstart, and Tiling algorithms which construct multilayer networks of threshold logic units (or, multilayer perceptrons). These algorithms differ in terms of the topology of the networks that they construct, which in turn biases the search for a decision boundary that correctly classifies the training set. This paper presents an analysis of such algorithms from a geometrical perspective. This analysis helps in a better characterization of the search bias employed by the different algorithms in relation to the geometrical distribution of examples in the training set. Simple experiments with non-linearly separable training sets support the results of the mathematical analysis of such algorithms. This suggests the possibility of designing more efficient constructive algorithms that dynamically choose among different biases to build near-minimal networks for pattern classification.
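The shared building block of such constructive algorithms, a single threshold logic unit trained with a pocket perceptron rule, can be sketched in Python; stacking these units (each seeing the inputs plus earlier units' outputs) is what Tower-style algorithms add and is not shown here:

    import numpy as np

    def train_tlu(X, y, epochs=100):
        Xb = np.hstack([X, np.ones((len(X), 1))])      # absorb bias into weights
        w = np.zeros(Xb.shape[1])
        pocket, pocket_acc = w.copy(), 0.0
        for _ in range(epochs):
            for xi, yi in zip(Xb, y):
                pred = 1 if xi @ w > 0 else 0
                w += (yi - pred) * xi                  # perceptron update
            acc = np.mean((Xb @ w > 0).astype(int) == y)
            if acc > pocket_acc:                       # keep the best weights seen
                pocket, pocket_acc = w.copy(), acc
        return pocket, pocket_acc

    # OR is linearly separable, so the unit reaches accuracy 1.0.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    print(train_tlu(X, np.array([0, 1, 1, 1])))

The geometry the paper analyzes comes from how each algorithm wires such units together: each added unit contributes one hyperplane, and the topology dictates how those hyperplanes combine into the final decision boundary.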
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 907 | test |
1-hop neighbor's text information: A neural model of the cortical representation of egocentric distance. :
1-hop neighbor's text information: P (1997). Neural models for part-whole hierarchies. : We present a connectionist method for representing images that explicitly addresses their hierarchical nature. It blends data from neuroscience about whole-object viewpoint-sensitive cells in inferotemporal cortex [8] and attentional basis-field modulation in V4 [3] with ideas about hierarchical descriptions based on microfeatures [5, 11]. The resulting model makes critical use of bottom-up and top-down pathways for analysis and synthesis [6]. We illustrate the model with a simple example of representing information about faces.
Target text information: TJ (1993). Egocentric spatial representation in early vision. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,376 | test |
1-hop neighbor's text information: "Using a genetic algorithm to learn behaviors for autonomous vehicles," : Truly autonomous vehicles will require both projec - tive planning and reactive components in order to perform robustly. Projective components are needed for long-term planning and replanning where explicit reasoning about future states is required. Reactive components allow the system to always have some action available in real-time, and themselves can exhibit robust behavior, but lack the ability to expli - citly reason about future states over a long time period. This work addresses the problem of creating reactive components for autonomous vehicles. Creating reactive behaviors (stimulus-response rules) is generally difficult, requiring the acquisition of much knowledge from domain experts, a problem referred to as the knowledge acquisition bottleneck. SAMUEL is a system that learns reactive behaviors for autonomous agents. SAMUEL learns these behaviors under simulation, automating the process of creating stimulus-response rules and therefore reducing the bottleneck. The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. Current work is investigating how well behaviors learned under simulation environments work in real world environments. In this paper, we describe SAMUEL, and describe behaviors that have been learned for simulated autonomous aircraft, autonomous underwater vehicles, and robots. These behaviors include dog fighting, missile evasion, track - ing, navigation, and obstacle avoidance.
1-hop neighbor's text information: Using a genetic algorithm to learn strategies for collision avoidance and local navigation. : Navigation through obstacles such as mine fields is an important capability for autonomous underwater vehicles. One way to produce robust behavior is to perform projective planning. However, real-time performance is a critical requirement in navigation. What is needed for a truly autonomous vehicle are robust reactive rules that perform well in a wide variety of situations, and that also achieve real-time performance. In this work, SAMUEL, a learning system based on genetic algorithms, is used to learn high-performance reactive strategies for navigation and collision avoidance.
1-hop neighbor's text information: "Learning sequential decision rules using simulation models and competition," : The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical decision rules from a simple flight simulator. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Several experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested.
Target text information: ADAPTIVE TESTING OF CONTROLLERS FOR AUTONOMOUS VEHICLES: Autonomous vehicles are likely to require sophisticated software controllers to maintain vehicle performance in the presence of vehicle faults. The test and evaluation of complex software controllers is expected to be a challenging task. The goal of this effort is to apply machine learning techniques from the field of artificial intelligence to the general problem of evaluating an intelligent controller for an autonomous vehicle. The approach involves subjecting a controller to an adaptively chosen set of fault scenarios within a vehicle simulator, and searching for combinations of faults that produce noteworthy performance by the vehicle controller. The search employs a genetic algorithm. We illustrate the approach by evaluating the performance of a subsumption-based controller for an autonomous vehicle. The preliminary evidence suggests that this approach is an effective alternative to manual testing of sophisticated software controllers.
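For readers unfamiliar with the search component, the following is a minimal sketch of a generational genetic algorithm over fault vectors, not the paper's actual system; `evaluate`, the bit-vector fault encoding, and all parameters are hypothetical stand-ins for the simulator interface:

```python
import random

def ga_fault_search(evaluate, n_faults=10, pop=30, gens=50, p_mut=0.05):
    """Search for fault combinations that degrade the controller most.
    `evaluate` is assumed to map a 0/1 fault vector to a performance
    penalty (higher = worse controller behavior), to be maximized."""
    popn = [[random.randint(0, 1) for _ in range(n_faults)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=evaluate, reverse=True)
        parents = scored[: pop // 2]              # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_faults)   # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with probability p_mut per gene
            children.append([g ^ (random.random() < p_mut) for g in child])
        popn = parents + children
    return max(popn, key=evaluate)
```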
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 124 | test |
1-hop neighbor's text information: Kanazawa Adaptive probabilistic networks. : Belief networks (or probabilistic networks) and neural networks are two forms of network representations that have been used in the development of intelligent systems in the field of artificial intelligence. Belief networks provide a concise representation of general probability distributions over a set of random variables, and facilitate exact calculation of the impact of evidence on propositions of interest. Neural networks, which represent parameterized algebraic combinations of nonlinear activation functions, have found widespread use as models of real neural systems and as function approximators because of their amenability to simple training algorithms. Furthermore, the simple, local nature of most neural network training algorithms provides a certain biological plausibility and allows for a massively parallel implementation. In this paper, we show that similar local learning algorithms can be derived for belief networks, and that these learning algorithms can operate using only information that is directly available from the normal, inferential processes of the networks. This removes the main obstacle preventing belief networks from competing with neural networks on the above-mentioned tasks. The precise, local, probabilistic interpretation of belief networks also allows them to be partially or wholly constructed by humans; allows the results of learning to be easily understood; and allows them to contribute to rational decision-making in a well-defined way.
1-hop neighbor's text information: Stochastic simulation algorithms for dynamic probabilistic networks. : Stochastic simulation algorithms such as likelihood weighting often give fast, accurate approximations to posterior probabilities in probabilistic networks, and are the methods of choice for very large networks. Unfortunately, the special characteristics of dynamic probabilistic networks (DPNs), which are used to represent stochastic temporal processes, mean that standard simulation algorithms perform very poorly. In essence, the simulation trials diverge further and further from reality as the process is observed over time. In this paper, we present simulation algorithms that use the evidence observed at each time step to push the set of trials back towards reality. The first algorithm, "evidence reversal" (ER) restructures each time slice of the DPN so that the evidence nodes for the slice become ancestors of the state variables. The second algorithm, called "survival of the fittest" sampling (SOF), "repopulates" the set of trials at each time step using a stochastic reproduction rate weighted by the likelihood of the evidence according to each trial. We compare the performance of each algorithm with likelihood weighting on the original network, and also investigate the benefits of combining the ER and SOF methods. The ER/SOF combination appears to maintain bounded error independent of the number of time steps in the simulation.
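The "survival of the fittest" idea is essentially likelihood-weighted resampling of trials at each time step. A minimal sketch under assumed interfaces (`transition` and `likelihood` stand in for the DPN's transition model and evidence likelihood; both are hypothetical names):

```python
import numpy as np

def sof_step(particles, transition, likelihood, rng):
    """One 'survival of the fittest' step: propagate each trial through
    the transition model, then resample trials in proportion to the
    likelihood of the new evidence, pulling the population back toward
    states consistent with the observations."""
    particles = [transition(p, rng) for p in particles]       # simulate
    w = np.array([likelihood(p) for p in particles], float)   # weight
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx]
```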
1-hop neighbor's text information: Approximating optimal policies for partially observable stochastic domains. : The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case where state information is only partially observable, Partially Observable Markov Decision Processes (POMDPs), have received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases.
Target text information: : MOU 130: Feasibility study of fully autonomous vehicles using decision-theoretic control Final Report
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 788 | test |
1-hop neighbor's text information: Learning logical definitions from relations. :
1-hop neighbor's text information: Extraction of meta-knowledge to restrict the hypothesis space for ILP systems. : Many ILP systems, such as GOLEM, FOIL, and MIS, take advantage of user supplied meta-knowledge to restrict the hypothesis space. This meta-knowledge can be in the form of type information about arguments in the predicate being learned, or it can be information about whether a certain argument in the predicate is functionally dependent on the other arguments (supplied as mode information). This meta knowledge is explicitly supplied to an ILP system in addition to the data. The present paper argues that in many cases the meta knowledge can be extracted directly from the raw data. Three algorithms are presented that learn type, mode, and symmetric meta-knowledge from data. These algorithms can be incorporated in existing ILP systems in the form of a preprocessor that obviates the need for a user to explicitly provide this information. In many cases, the algorithms can extract meta- knowledge that the user is either unaware of, but which information can be used by the ILP system to restrict the hypothesis space.
1-hop neighbor's text information: Learning from positive data. : Gold showed in 1967 that not even regular grammars can be exactly identified from positive examples alone. Since it is known that children learn natural grammars almost exclusively from positives examples, Gold's result has been used as a theoretical support for Chomsky's theory of innate human linguistic abilities. In this paper new results are presented which show that within a Bayesian framework not only grammars, but also logic programs are learnable with arbitrarily low expected error from positive examples only. In addition, we show that the upper bound for expected error of a learner which maximises the Bayes' posterior probability when learning from positive examples is within a small additive term of one which does the same from a mixture of positive and negative examples. An Inductive Logic Programming implementation is described which avoids the pitfalls of greedy search by global optimisation of this function during the local construction of individual clauses of the hypothesis. Results of testing this implementation on artificially-generated data-sets are reported. These results are in agreement with the theoretical predictions.
Target text information: ILP with Noise and Fixed Example Size: A Bayesian Approach: Current inductive logic programming systems are limited in their handling of noise, as they employ a greedy covering approach to constructing the hypothesis one clause at a time. This approach also causes difficulty in learning recursive predicates. Additionally, many current systems have an implicit expectation that the cardinality of the positive and negative examples reflects the "proportion" of the concept to the instance space. A framework for learning from noisy data and fixed example size is presented. A Bayesian heuristic for finding the most probable hypothesis in this general framework is derived. This approach evaluates a hypothesis as a whole rather than one clause at a time. The heuristic, which has nice theoretical properties, is incorporated in an ILP system, Lime. Experimental results show that Lime handles noise better than FOIL and PROGOL. It is able to learn recursive definitions from noisy data on which other systems do not perform well. Lime is also capable of learning from only positive data and also from only negative data.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 0 | Rule Learning | cora | 175 | test |
1-hop neighbor's text information: Approximation with neural networks: Between local and global approximation. : We investigate neural network based approximation methods. These methods depend on the locality of the basis functions. After discussing local and global basis functions, we propose a multi-resolution hierarchical method. The various resolutions are stored at various levels in a tree. At the root of the tree, a global approximation is kept; the leaves store the learning samples themselves. Intermediate nodes store intermediate representations. In order to find an optimal partitioning of the input space, self-organising maps (SOMs) are used. The proposed method has implementational problems reminiscent of those encountered in many-particle simulations. We will investigate the parallel implementation of this method, using parallel hierarchical methods for many-particle simulations as a starting point.
1-hop neighbor's text information: The optimal number of learning samples and hidden units in function approximation with a feedforward network. : This paper presents a methodology to estimate the optimal number of learning samples and the number of hidden units needed to obtain a desired accuracy of a function approximation by a feedforward network. The representation error and the generalization error, components of the total approximation error, are analyzed, and the approximation accuracy of a feedforward network is investigated as a function of the number of hidden units and the number of learning samples. Based on the asymptotical behavior of the approximation error, an asymptotical model of the error function (AMEF) is introduced whose parameters can be determined experimentally. An alternative model of the error function, which includes theoretical results about general bounds of approximation, is also analyzed. In combination with knowledge about the computational complexity of the learning rule, an optimal learning set size and number of hidden units can be found, resulting in a minimum computation time for a given desired precision of the approximation. This approach was applied to optimize the learning of the camera-robot mapping of a visually guided robot arm and a complex logarithm function approximation.
1-hop neighbor's text information: The locally linear nested network for robot manipulation. : We present a method for accurate representation of high-dimensional unknown functions from random samples drawn from its input space. The method builds representations of the function by recursively splitting the input space in smaller subspaces, while in each of these subspaces a linear approximation is computed. The representations of the function at all levels (i.e., depths in the tree) are retained during the learning process, such that a good generalisation is available as well as more accurate representations in some subareas. Therefore, fast and accurate learning are combined in this method.
Target text information: "Nested networks for robot control," in Neural Network Applications, :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,206 | val |
1-hop neighbor's text information: Task selection for a Multiscalar processor. : The Multiscalar architecture advocates a distributed processor organization and task-level speculation to exploit high degrees of instruction level parallelism (ILP) in sequential programs without impeding improvements in clock speeds. The main goal of this paper is to understand the key implications of the architectural features of distributed processor organization and task-level speculation for compiler task selection from the point of view of performance. We identify the fundamental performance issues to be: control flow speculation, data communication, data dependence speculation, load imbalance, and task overhead. We show that these issues are intimately related to a few key characteristics of tasks: task size, inter-task control flow, and inter-task data dependence. We describe compiler heuristics to select tasks with favorable characteristics. We report experimental results to show that the heuristics are successful in boosting overall performance by establishing larger ILP windows.
1-hop neighbor's text information: Control Flow Prediction for Dynamic ILP Processors. : We introduce a technique to enhance the ability of dynamic ILP processors to exploit (speculatively executed) parallelism. Existing branch prediction mechanisms used to establish a dynamic window from which ILP can be extracted are limited in their abilities to: (i) create a large, accurate dynamic window, (ii) initiate a large number of instructions into this window in every cycle, and (iii) traverse multiple branches of the control flow graph per prediction. We introduce control flow prediction which uses information in the control flow graph of a program to overcome these limitations. We discuss how information present in the control flow graph can be represented using multiblocks, and conveyed to the hardware using Control Flow Tables and Control Flow Prediction Buffers. We evaluate the potential of control flow prediction on an abstract machine and on a dynamic ILP processing model. Our results indicate that control flow prediction is a powerful and effective assist to the hardware in making more informed run time decisions about program control flow.
1-hop neighbor's text information: The Expandable Split Window Paradigm for Exploiting Fine-Grain Parallelism, : We propose a new processing paradigm, called the Expandable Split Window (ESW) paradigm, for exploiting fine-grain parallelism. This paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. The basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. This processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von Neumann architecture. We also present an implementation of the Expandable Split Window execution model, and preliminary performance results.
Target text information: A hardware mechanism for dynamic reordering of memory references. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 0 | Rule Learning | cora | 978 | test |
1-hop neighbor's text information: Fast Online Q(λ): Q(λ)-learning uses TD(λ)-methods to accelerate Q-learning. The update complexity of previous online Q(λ) implementations based on lookup-tables is bounded by the size of the state/action space. Our faster algorithm's update complexity is bounded by the number of actions. The method is based on the observation that Q-value updates may be postponed until they are needed.
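As context for the complexity claim, here is the standard tabular Watkins' Q(λ) step whose naive per-update cost grows with the whole state/action space; the paper's contribution (not shown here) is a lazy-update scheme that avoids touching every trace. A sketch, with all names ours:

```python
import numpy as np

def q_lambda_step(Q, E, s, a, r, s_next, alpha=0.1, gamma=0.95, lam=0.9):
    """One step of Watkins' Q(lambda) with accumulating eligibility traces.
    Note the cost: every state-action trace is updated each step, which
    is exactly the O(|S||A|) overhead the paper's method avoids."""
    a_star = int(np.argmax(Q[s_next]))          # greedy action in s'
    delta = r + gamma * Q[s_next, a_star] - Q[s, a]
    E[s, a] += 1.0                              # accumulate trace for (s, a)
    Q += alpha * delta * E                      # credit all traced pairs
    E *= gamma * lam                            # decay all traces
    # Watkins' variant: the caller should zero E after an exploratory
    # (non-greedy) action, since the trace is then no longer valid.
    return Q, E
```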
1-hop neighbor's text information: Applying online-search to reinforcement learning. : In reinforcement learning it is frequently necessary to resort to an approximation to the true optimal value function. Here we investigate the benefits of online search in such cases. We examine "local" searches, where the agent performs a finite-depth lookahead search, and "global" searches, where the agent performs a search for a trajectory all the way from the current state to a goal state. The key to the success of these methods lies in taking a value function, which gives a rough solution to the hard problem of finding good trajectories from every single state, and combining that with online search, which then gives an accurate solution to the easier problem of finding a good trajectory specifically from the current state.
1-hop neighbor's text information: Decision Tree Function Approximation in Reinforcement Learning: We present a decision tree based approach to function approximation in reinforcement learning. We compare our approach with table lookup and a neural network function approximator on three problems: the well known mountain car and pole balance problems as well as a simulated automobile race car. We find that the decision tree can provide better learning performance than the neural network function approximation and can solve large problems that are infeasible using table lookup.
Target text information: Generalization in reinforcement learning: Successful examples using sparse coarse coding. : On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(λ) algorithm when λ = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
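A minimal sketch of CMAC-style tile coding, under assumed parameters (8 tilings of an 8x8 grid over a unit square); this illustrates the general technique, not the paper's exact task encodings:

```python
import numpy as np

N_TILINGS, N_TILES = 8, 8
w = np.zeros(N_TILINGS * N_TILES * N_TILES)     # one weight per tile

def active_tiles(x, y):
    """One active tile per offset tiling for a state (x, y) in [0, 1)^2.
    Nearby states share many active tiles: sparse, coarse features."""
    idxs = []
    for t in range(N_TILINGS):
        off = t / N_TILINGS                     # diagonal offset per tiling
        ix = int(x * N_TILES + off) % N_TILES
        iy = int(y * N_TILES + off) % N_TILES
        idxs.append((t * N_TILES + ix) * N_TILES + iy)
    return idxs

def value(x, y):
    return w[active_tiles(x, y)].sum()

def td_update(x, y, target, alpha=0.5):
    """Move the estimate toward a TD target; alpha is split across
    the tilings so a full-strength update moves the value all the way."""
    idxs = active_tiles(x, y)
    w[idxs] += (alpha / N_TILINGS) * (target - w[idxs].sum())
```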
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 2,181 | train |
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Improving elevator performance using reinforcement learning. : This paper describes the application of reinforcement learning (RL) to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable and they are nonstationary due to changing passenger arrival rates. In addition, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic optimization problem of practical utility.
1-hop neighbor's text information: Learning to Act using Real-Time Dynamic Programming. : * The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526).
Target text information: Reinforcement learning methods for continuous time Markov decision problems. : Semi-Markov Decision Problems are continuous time generalizations of discrete time Markov Decision Problems. A number of reinforcement learning algorithms have been developed recently for the solution of Markov Decision Problems, based on the ideas of asynchronous dynamic programming and stochastic approximation. Among these are TD(λ), Q-learning, and Real-time Dynamic Programming. After reviewing semi-Markov Decision Problems and Bellman's optimality equation in that context, we propose algorithms similar to those named above, adapted to the solution of semi-Markov Decision Problems. We demonstrate these algorithms by applying them to the problem of determining the optimal control for a simple queueing system. We conclude with a discussion of circumstances under which these algorithms may be usefully applied.
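The distinctive ingredient relative to the discrete-time case is that the backup must discount over the random sojourn time in each state. A hedged sketch of the continuous-time update (assumed form, in the spirit of SMDP Q-learning; the paper's exact formulation may differ):

```latex
% tau: observed sojourn time; beta: continuous-time discount rate;
% r: reward rate over the transition; alpha: learning rate.
Q(s,a) \leftarrow Q(s,a) + \alpha\left[\frac{1 - e^{-\beta\tau}}{\beta}\, r
    + e^{-\beta\tau}\max_{a'} Q(s',a') - Q(s,a)\right]
```

When the sojourn time is a constant unit step, the factor e^{-beta tau} reduces to an ordinary discount factor gamma and the update collapses to standard Q-learning.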
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 1,508 | test |
1-hop neighbor's text information: A neural network model of visual tilt aftereffect. : RF-LISSOM, a self-organizing model of laterally connected orientation maps in the primary visual cortex, was used to study the psychological phenomenon known as the tilt aftereffect. The same self-organizing processes that are responsible for the long-term development of the map and its lateral connections are shown to result in tilt aftereffects over short time scales in the adult. The model allows observing large numbers of neurons and connections simultaneously, making it possible to relate higher-level phenomena to low-level events, which is difficult to do experimentally. The results give computational support for the idea that direct tilt aftereffects arise from adaptive lateral interactions between feature detectors, as has long been surmised. They also suggest that indirect effects could result from the conservation of synaptic resources during this process. The model thus provides a unified computational explanation of self-organization and both direct and indirect tilt aftereffects in the primary visual cortex.
Target text information: Modeling cortical plasticity based on adapting lateral interaction. : A neural network model called LISSOM for the cooperative self-organization of afferent and lateral connections in cortical maps is applied to modeling cortical plasticity. After self-organization, the LISSOM maps are in a dynamic equilibrium with the input, and reorganize like the cortex in response to simulated cortical lesions and intracortical microstimulation. The model predicts that adapting lateral interactions are fundamental to cortical reorganization, and suggests techniques to hasten recovery following sensory cortical surgery.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,460 | val |
1-hop neighbor's text information: Synthesize, optimize, analyze, repeat (SOAR): Application of neural network tools to ECG patient monitoring. : Results are reported from the application of tools for synthesizing, optimizing and analyzing neural networks to an ECG Patient Monitoring task. A neural network was synthesized from a rule-based classifier and optimized over a set of normal and abnormal heartbeats. The classification error rate on a separate and larger test set was reduced by a factor of 2. Sensitivity analysis of the synthesized and optimized networks revealed informative differences. Analysis of the weights and unit activations of the optimized network enabled a reduction in size of the network by a factor of 40% without loss of accuracy.
1-hop neighbor's text information: Recognition and exploitation of contextual clues via incremental meta-learning. : Daily experience shows that in the real world, the meaning of many concepts heavily depends on some implicit context, and changes in that context can cause more or less radical changes in the concepts. Incremental concept learning in such domains requires the ability to recognize and adapt to such changes. This paper presents a solution for incremental learning tasks where the domain provides explicit clues as to the current context (e.g., attributes with characteristic values). We present a general two-level learning model, and its realization in a system named MetaL(B), that can learn to detect certain types of contextual clues, and can react accordingly when a context change is suspected. The model consists of a base level learner that performs the regular on-line learning and classification task, and a meta-learner that identifies potential contextual clues. Context learning and detection occur during regular on-line learning, without separate training phases for context recognition. Experiments with synthetic domains as well as a `real-world' problem show that MetaL(B) is robust in a variety of dimensions and produces substantial improvement over simple object-level learning in situations with changing contexts.
1-hop neighbor's text information: The Management of Context-Sensitive Features: A Review of Strategies: In this paper, we review five heuristic strategies for handling context-sensitive features in supervised machine learning from examples. We discuss two methods for recovering lost (implicit) contextual information. We mention some evidence that hybrid strategies can have a synergetic effect. We then show how the work of several machine learning researchers fits into this framework. While we do not claim that these strategies exhaust the possibilities, it appears that the framework includes all of the techniques that can be found in the published literature on context-sensitive learning.
Target text information: "A patient-adaptive neural network ECG patient monitoring algorithm", : The patient-adaptive classifier was compared with a well-established baseline algorithm on six major databases, consisting of over 3 million heartbeats. When trained on an initial 77 records and tested on an additional 382 records, the patient-adaptive algorithm was found to reduce the number of Vn errors on one channel by a factor of 5, and the number of Nv errors by a factor of 10. We conclude that patient adaptation provides a significant advance in classifying normal vs. ventricular beats for ECG Patient Monitoring.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,078 | val |
1-hop neighbor's text information: Pattern recognition via linear programming: Theory and application to medical diagnosis. : A decision problem associated with a fundamental nonconvex model for linearly inseparable pattern sets is shown to be NP-complete. Another nonconvex model, which employs a 1-norm instead of the 2-norm, can be solved in polynomial time by solving 2n linear programs, where n is the (usually small) dimensionality of the pattern space. An effective LP-based finite algorithm is proposed for solving the latter model. The algorithm is employed to obtain a nonconvex piecewise-linear function for separating points representing measurements made on fine needle aspirates taken from benign and malignant human breasts. A computer program trained on 369 samples has correctly diagnosed each of 45 new samples encountered and is currently in use at the University of Wisconsin Hospitals.
1-hop neighbor's text information: Street. Cancer diagnosis and prognosis via linear-programming-based machine learning. :
1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models
Target text information: Mathematical programming in neural networks. : This paper highlights the role of mathematical programming, particularly linear programming, in training neural networks. A neural network description is given in terms of separating planes in the input space that suggests the use of linear programming for determining these planes. A more standard description in terms of a mean square error in the output space is also given, which leads to the use of unconstrained minimization techniques for training a neural network. The linear programming approach is demonstrated by a brief description of a system for breast cancer diagnosis that has been in use for the last four years at a major medical facility.
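As an illustration of the LP-based separation idea, here is a sketch of a robust linear separation program in the Bennett/Mangasarian style, minimizing the average violations of the separating-plane constraints for two point sets; the formulation details are assumptions for illustration, not the diagnosis system's actual code:

```python
import numpy as np
from scipy.optimize import linprog

def lp_separate(A, B):
    """Find a plane w.x = gamma minimizing the average violations of
    A w >= gamma + 1 (class 1 points) and B w <= gamma - 1 (class 2).
    Variables: [w (n), gamma (1), y (m), z (k)], slacks y, z >= 0."""
    m, n = A.shape
    k, _ = B.shape
    c = np.concatenate([np.zeros(n + 1),
                        np.full(m, 1.0 / m), np.full(k, 1.0 / k)])
    # Inequalities in linprog's A_ub x <= b_ub form:
    #   -A w + gamma - y <= -1   and   B w - gamma - z <= -1
    ub1 = np.hstack([-A, np.ones((m, 1)), -np.eye(m), np.zeros((m, k))])
    ub2 = np.hstack([B, -np.ones((k, 1)), np.zeros((k, m)), -np.eye(k)])
    bounds = [(None, None)] * (n + 1) + [(0, None)] * (m + k)
    res = linprog(c, A_ub=np.vstack([ub1, ub2]), b_ub=-np.ones(m + k),
                  bounds=bounds)
    return res.x[:n], res.x[n]          # plane normal w and threshold gamma
```

If the sets are linearly separable, the optimum drives all slacks to zero; otherwise the plane minimizes the average misclassification distance, which is the robustness property the abstract alludes to.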
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,476 | test |
1-hop neighbor's text information: MML and Bayesianism: similarities and differences. : Tech Report 207 Department of Computer Science, Monash University, Clayton, Vic. 3168, Australia Abstract: This paper continues the introduction to minimum encoding inductive inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We describe the message length estimates used in Wallace's Minimum Message Length (MML) inference and Rissanen's Minimum Description Length (MDL) inference. The differences in the message length estimates of the two approaches are explained. The implications of these differences for applications are discussed.
1-hop neighbor's text information: Decision graphs: an extension of decision trees. : Technical Report No: 92/173 (C) Jonathan Oliver 1992. A shortened version appeared in AI and Statistics 1993 [14]. Abstract: In this paper, we examine Decision Graphs, a generalization of decision trees. We present an inference scheme to construct decision graphs using the Minimum Message Length Principle. Empirical tests demonstrate that this scheme compares favourably with other decision tree inference schemes. This work provides a metric for comparing the relative merit of the decision tree and decision graph formalisms for a particular domain.
Target text information: : Tech Report 4-94 Department of Statistics, Open University, Walton Hall, MK7 6AA, UK Tech Report 205 Department of Computer Science, Monash University, Clayton, Vic. 3168, Australia Abstract: This paper examines the minimum encoding approaches to inference, Minimum Message Length (MML) and Minimum Description Length (MDL). This paper was written with the objective of providing an introduction to this area for statisticians. We describe coding techniques for data, and examine how these techniques can be applied to perform inference and model selection.
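A toy worked example of the two-part message length L(H) + L(D|H) that both MML and MDL build on; the fixed-precision parameter coding here is a deliberate simplification of the actual MML construction, and all names are ours:

```python
import math

def two_part_length(k, n, precision_bits=10):
    """Toy two-part code for n Bernoulli trials with k successes:
    L(H): state the parameter theta to a fixed precision, plus
    L(D|H): the data's code length under that theta, in bits."""
    theta = max(min(k / n, 1 - 1e-9), 1e-9)   # avoid log(0)
    model_cost = precision_bits
    data_cost = -(k * math.log2(theta) + (n - k) * math.log2(1 - theta))
    return model_cost + data_cost

# Model selection: compare total lengths, e.g. a biased-coin hypothesis
# versus theta = 1/2 (which needs no parameter statement).
print(two_part_length(70, 100), 100.0)   # 100 bits for the fair-coin code
```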
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 829 | test |
1-hop neighbor's text information: Learning k-term DNF formulas with an incomplete membership oracle. : We consider the problem of learning k-term DNF formulas using equivalence queries and incomplete membership queries as defined by Angluin and Slonim. We demonstrate that this model can be applied to non-monotone classes. Namely, we describe a polynomial-time algorithm that exactly identifies a k-term DNF formula with a k-term DNF hypothesis using incomplete membership queries and equivalence queries from the class of DNF formulas.
1-hop neighbor's text information: DNF if you can't learn 'em, teach 'em: An interactive model of teaching. : Previous teaching models in the learning theory community have been batch models. That is, in these models the teacher has generated a single set of helpful examples to present to the learner. In this paper we present an interactive model in which the learner has the ability to ask queries as in the query learning model of Angluin [1]. We show that this model is at least as powerful as previous teaching models. We also show that anything learnable with queries, even by a randomized learner, is teachable in our model. In all previous teaching models, all classes shown to be teachable are known to be efficiently learnable. An important concept class that is not known to be learnable is DNF formulas. We demonstrate the power of our approach by providing a deterministic teacher and learner for the class of DNF formulas. The learner makes only equivalence queries and all hypotheses are also DNF formulas.
1-hop neighbor's text information: Auer and P.M Long. Simulating access to hidden information while learning. : We introduce a new technique which enables a learner without access to hidden information to learn nearly as well as a learner with access to hidden information. We apply our technique to solve an open problem of Maass and Turan [18], showing that for any concept class F , the least number of queries sufficient for learning F by an algorithm which has access only to arbitrary equivalence queries is at most a factor of 1= log 2 (4=3) more than the least number of queries sufficient for learning F by an algorithm which has access to both arbitrary equivalence queries and membership queries. Previously known results imply that the 1= log 2 (4=3) in our bound is best possible. We describe analogous results for two generalizations of this model to function learning, and apply those results to bound the difficulty of learning in the harder of these models in terms of the difficulty of learning in the easier model. We bound the difficulty of learning unions of k concepts from a class F in terms of the difficulty of learning F . We bound the difficulty of learning in a noisy environment for deterministic algorithms in terms of the difficulty of learning in a noise-free environment. We apply a variant of our technique to develop an algorithm transformation that allows probabilistic learning algorithms to nearly optimally cope with noise. A second variant enables us to improve a general lower bound of Turan [19] for the PAC-learning model (with queries). Finally, we show that logarithmically many membership queries never help to obtain computationally efficient learning algorithms. fl Supported by Air Force Office of Scientific Research grant F49620-92-J-0515. Most of this work was done while this author was at TU Graz supported by a Lise Meitner Fellowship from the Fonds zur Forderung der wissenschaftlichen Forschung (Austria).
Target text information: Learning with queries but incomplete information. : We investigate learning with membership and equivalence queries assuming that the information provided to the learner is incomplete. By incomplete we mean that some of the membership queries may be answered by I don't know. This model is a worst-case version of the incomplete membership query model of Angluin and Slonim. It attempts to model practical learning situations, including an experiment of Lang and Baum that we describe, where the teacher may be unable to answer reliably some queries that are critical for the learning algorithm. We present algorithms to learn monotone k-term DNF with membership queries only, and to learn monotone DNF with membership and equivalence queries. Compared to the complete information case, the query complexity increases by an additive term linear in the number of I don't know answers received. We also observe that the blowup in the number of queries can in general be exponential for both our new model and the incomplete membership model.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 930 | test |
1-hop neighbor's text information: A Weighted Nearest Neighbor Algorithm for Learning with Symbolic Features. : In the past, nearest neighbor algorithms for learning from examples have worked best in domains in which all features had numeric values. In such domains, the examples can be treated as points and distance metrics can use standard definitions. In symbolic domains, a more sophisticated treatment of the feature space is required. We introduce a nearest neighbor algorithm for learning in domains with symbolic features. Our algorithm calculates distance tables that allow it to produce real-valued distances between instances, and attaches weights to the instances to further modify the structure of feature space. We show that this technique produces excellent classification accuracy on three problems that have been studied by machine learning researchers: predicting protein secondary structure, identifying DNA promoter sequences, and pronouncing English text. Direct experimental comparisons with the other learning algorithms show that our nearest neighbor algorithm is comparable or superior in all three domains. In addition, our algorithm has advantages in training speed, simplicity, and perspicuity. We conclude that experimental evidence favors the use and continued development of nearest neighbor algorithms for domains such as the ones studied here.
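A sketch of the core idea of distance tables for symbolic features, in the spirit of the value difference metric: two symbolic values are close when they induce similar class distributions. Names and the exponent default are ours, and the paper's instance weighting is not shown:

```python
import numpy as np
from collections import Counter, defaultdict

def vdm_tables(X, y, q=2):
    """X: (n, d) array of symbolic feature values; y: class labels.
    Returns, per feature, a dict mapping (v1, v2) to a real distance
    based on how differently the two values distribute over classes."""
    classes = sorted(set(y))
    tables = []
    for f in range(X.shape[1]):
        counts = defaultdict(Counter)            # value -> class counts
        for xi, yi in zip(X[:, f], y):
            counts[xi][yi] += 1
        d = {}
        for v1 in counts:
            n1 = sum(counts[v1].values())
            for v2 in counts:
                n2 = sum(counts[v2].values())
                d[v1, v2] = sum(abs(counts[v1][c] / n1
                                    - counts[v2][c] / n2) ** q
                                for c in classes)
        tables.append(d)
    return tables
```

An instance distance is then a sum of per-feature table lookups, giving real-valued distances even though no feature is numeric.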
1-hop neighbor's text information: Generalizing from case studies: A case study. : Most empirical evaluations of machine learning algorithms are case studies evaluations of multiple algorithms on multiple databases. Authors of case studies implicitly or explicitly hypothesize that the pattern of their results, which often suggests that one algorithm performs significantly better than others, is not limited to the small number of databases investigated, but instead holds for some general class of learning problems. However, these hypotheses are rarely supported with additional evidence, which leaves them suspect. This paper describes an empirical method for generalizing results from case studies and an example application. This method yields rules describing when some algorithms significantly outperform others on some dependent measures. Advantages for generalizing from case studies and limitations of this particular approach are also described.
1-hop neighbor's text information: Addressing the Selective Superiority Problem: Automatic Algorithm/Model Class Selection. : COINS Technical Report 92-30 February 1992 Abstract The problem of how to learn from examples has been studied throughout the history of machine learning, and many successful learning algorithms have been developed. A problem that has received less attention is how to select which algorithm to use for a given learning task. The ability of a chosen algorithm to induce a good generalization depends on how appropriate the model class underlying the algorithm is for the given task. We define an algorithm's model class to be the representation language it uses to express a generalization of the examples. Supervised learning algorithms differ in their underlying model class and in how they search for a good generalization. Given this characterization, it is not surprising that some algorithms find better generalizations for some, but not all tasks. Therefore, in order to find the best generalization for each task, an automated learning system must search for the appropriate model class in addition to searching for the best generalization within the chosen class. This thesis proposal investigates the issues involved in automating the selection of the appropriate model class. The presented approach has two facets. Firstly, the approach combines different model classes in the form of a model combination decision tree, which allows the best representation to be found for each subconcept of the learning task. Secondly, which model class is the most appropriate is determined dynamically using a set of heuristic rules. Explicit in each rule are the conditions in which a particular model class is appropriate and if it is not, what should be done next. In addition to describing the approach, this proposal describes how the approach will be evaluated in order to demonstrate that it is both an efficient and effective method for automatic model selection.
Target text information: Dynamical selection of learning algorithms. : Determining the conditions for which a given learning algorithm is appropriate is an open problem in machine learning. Methods for selecting a learning algorithm for a given domain have met with limited success. This paper proposes a new approach to predicting a given example's class by locating it in the "example space" and then choosing the best learner(s) in that region of the example space to make predictions. The regions of the example space are defined by the prediction patterns of the learners being used. The learner(s) chosen for prediction are selected according to their past performance in that region. This dynamic approach to learning algorithm selection is compared to other methods for selecting from multiple learning algorithms. The approach is then extended to weight rather than select the algorithms according to their past performance in a given region. Both approaches are further evaluated on a set of ten domains and compared to several other meta-learning strategies. Determining the conditions for which a given learning algorithm is appropriate is an open problem in machine learning. Methods for selecting a learning algorithm for a given domain (e.g. [Aha92, Breiman84]) or for a portion of the domain ([Brodley93, Brodley94]) have met with limited success. This paper proposes a new approach that dynamically selects a learning algorithm for each example by locating it in the "example space" and then choosing the best learner(s) for prediction in that part of the example space. The regions of the example space are formed by the observed prediction patterns of the learners being used. The learner(s) chosen for prediction are selected according to their past performance in that region, which is defined by the "cross-validation history." This paper introduces DS, a method for the dynamic selection of a learning algorithm(s). We call it "dynamic" because the learning algorithm(s) used to classify a novel example depends on that example. Preliminary experimentation motivated DW, an extension to DS that dynamically weights the learners' predictions according to their regional accuracy. Further experimentation compares DS and DW to a collection of other meta-learning strategies such as cross-validation ([Breiman84]) and various forms of stacking ([Wolpert92]). In this phase of the experimentation, the meta-learners have six constituent learners which are heterogeneous in their search and representation methods (e.g. a rule learner, CN2 [Clark89]; a decision tree learner, C4.5 [Quinlan93]; an oblique decision tree learner, OC1 [Murthy93]; an instance-based learner, PEBLS [Cost93]; a k-nearest neighbor learner).
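A simplified sketch of DS-style dynamic selection under one concrete reading of "region" (the joint prediction pattern of the learners); the paper's exact region definition and cross-validation bookkeeping may differ, and integer-coded labels are assumed:

```python
import numpy as np
from collections import defaultdict

def dynamic_select(cv_preds, y_cv, new_preds):
    """cv_preds: (n_cv, n_learners) learner predictions on held-out data,
    y_cv: (n_cv,) true labels, new_preds: (n_learners,) predictions for
    a new example. Returns the prediction of the learner with the best
    held-out accuracy in the new example's region."""
    correct = defaultdict(lambda: np.zeros(cv_preds.shape[1]))
    total = defaultdict(int)
    for row, yi in zip(cv_preds, y_cv):
        key = tuple(row)                 # region = joint prediction pattern
        correct[key] += (row == yi)
        total[key] += 1
    key = tuple(new_preds)
    if total[key] == 0:                  # unseen region: plurality vote
        return int(np.bincount(new_preds).argmax())
    best = int(np.argmax(correct[key] / total[key]))
    return int(new_preds[best])
```

The DW variant would weight each learner's vote by its regional accuracy instead of committing to a single regional winner.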
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 2,417 | train |
1-hop neighbor's text information: Using qualitative relationships for bounding probability distributions. : We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well.
Target text information: Localized partial evaluation of belief networks. : Exact algorithms for belief network inference compute the marginal probability of every node in the network. Often, however, an application will not need information about every node in the network nor will it need exact probabilities. We present the localized partial evaluation (LPE) propagation algorithm, which computes interval bounds on the marginal probability of a specified query node by examining a subset of the nodes in the entire network. Conceptually, LPE ignores parts of the network that are "too far away" from the queried node to have much impact on its value. LPE has the "anytime" property of being able to produce better solutions (tighter intervals) given more time to consider more of the network.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 1,231 | test |
1-hop neighbor's text information: Bayesian forecasting and dynamic models. : We discuss the development of dynamic factor models for multivariate financial time series, and the incorporation of stochastic volatility components for latent factor processes. Bayesian inference and computation is developed and explored in a study of the dynamic factor structure of daily spot exchange rates for a selection of international currencies. The models are direct generalisations of univariate stochastic volatility models, and represent specific varieties of models recently discussed in the growing multivariate stochastic volatility literature. We also discuss connections and comparisons with the much simpler method of dynamic variance discounting that, for over a decade, has been a standard approach in applied financial econometrics in the Bayesian forecasting world. We review empirical findings in applying these models to the exchange rate series, including aspects of model performance in dynamic portfolio allocation. We conclude with comments on the potential practical utility of structured factor models and future potential developments and model extensions. The authors acknowledge useful discussions with Jose M Quintana, Neil Shephard and Hong Chang, and partial support from NSF grants DMS-9704432 and DMS-9707914. This manuscript represents a preliminary draft report subject to revision. Before quoting or referencing, please check the authors' web site for a possible updated version. The latest version will be found as ISDS Discussion Paper 98-03 on the Duke web site, http://www.stat.duke.edu/papers/
1-hop neighbor's text information: Analysis of hospital quality monitors using hierarchical time series models. : The VA management services department invests considerably in the collection and assessment of data to inform on hospital and care-area specific levels of quality of care. Resulting time series of quality monitors provide information relevant to evaluating patterns of variability in hospital-specific quality of care over time and across care areas, and to compare and assess differences across hospitals. In collaboration with the VA management services group we have developed various models for evaluating such patterns of dependencies and combining data across the VA hospital system. This paper provides a brief overview of resulting models, some summary examples on three monitor time series, and discussion of data, modelling and inference issues. This work introduces new models for multivariate non-Gaussian time series. The framework combines cross-sectional, hierarchical models of the population of hospitals with time series structure to allow and measure time-variations in the associated hierarchical model parameters. In the VA study, the within-year components of the models describe patterns of heterogeneity across the population of hospitals and relationships among several such monitors, while the time series components describe patterns of variability through time in hospital-specific effects and their relationships across quality monitors. Additional model components isolate unpredictable aspects of variability in quality monitor outcomes, by hospital and care areas. We discuss model assessment, residual analysis and MCMC algorithms developed to fit these models, which will be of interest in related applications in other socio-economic areas.
Target text information: (1997) Studies of quality monitor time series: The V.A. hospital system, : This report describes statistical research and development work on hospital quality monitor data sets from the nationwide VA hospital system. The project covers statistical analysis, exploration and modelling of data from several quality monitors, with the primary goals of: (a) understanding patterns of variability over time in hospital-level and monitor area specific quality monitor measures, and (b) understanding patterns of dependencies between sets of monitors. We present discussion of basic perspectives on data structure and preliminary data exploration for three monitors, followed by developments of several classes of formal models. We identify classes of hierarchical random effects time series models to be of relevance in modelling single or multiple monitor time series. We summarise basic model features and results of analyses of the three monitor data sets, in both single and multiple monitor frameworks, and present a variety of summary inferences in graphical displays. Our discussion includes summary conclusions related to the two key goals, discussions of questions of comparisons across hospitals, and some recommendations about further potential substantive and statistical investigations.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 2,009
|
test
|
1-hop neighbor's text information: Selective eager execution on the polypath architecture. : Control-flow misprediction penalties are a major impediment to high performance in wide-issue superscalar processors. In this paper we present Selective Eager Execution (SEE), an execution model to overcome mis-speculation penalties by executing both paths after diffident branches. We present the micro-architecture of the PolyPath processor, which is an extension of an aggressive superscalar, out-of-order architecture. The PolyPath architecture uses a novel instruction tagging and register renaming mechanism to execute instructions from multiple paths simultaneously in the same processor pipeline, while retaining maximum resource availability for single-path code sequences. Results of our execution-driven, pipeline-level simulations show that SEE can improve performance by as much as 36% for the go benchmark, and an average of 14% on SPECint95, when compared to a normal superscalar, out-of-order, speculative execution, monopath processor. Moreover, our architectural model is both elegant and practical to implement, using a small amount of additional state and control logic.
1-hop neighbor's text information: Exploiting Choice: Instruction Fetch and Issue on an implementable Simultaneous Multithreading Processor. : Simultaneous multithreading is a technique that permits multiple independent threads to issue multiple instructions each cycle. In previous work we demonstrated the performance potential of simultaneous multithreading, based on a somewhat idealized model. In this paper we show that the throughput gains from simultaneous multithreading can be achieved without extensive changes to a conventional wide-issue superscalar, either in hardware structures or sizes. We present an architecture for simultaneous multithreading that achieves three goals: (1) it minimizes the architectural impact on the conventional superscalar design, (2) it has minimal performance impact on a single thread executing alone, and (3) it achieves significant throughput gains when running multiple threads. Our simultaneous multithreading architecture achieves a throughput of 5.4 instructions per cycle, a 2.5-fold improvement over an unmodified superscalar with similar hardware resources. This speedup is enhanced by an advantage of multithreading previously unexploited in other architectures: the ability to favor for fetch and issue those threads most efficiently using the processor each cycle, thereby providing the best instructions to the processor.
1-hop neighbor's text information: A Comparison of Full and Partial Predicated Execution Support for ILP Processors. : One can effectively utilize predicated execution to improve branch handling in instruction-level parallel processors. Although the potential benefits of predicated execution are high, the tradeoffs involved in the design of an instruction set to support predicated execution can be difficult. On one end of the design spectrum, architectural support for full predicated execution requires increasing the number of source operands for all instructions. Full predicate support provides for the most flexibility and the largest potential performance improvements. On the other end, partial predicated execution support, such as conditional moves, requires very little change to existing architectures. This paper presents a preliminary study to qualitatively and quantitatively address the benefit of full and partial predicated execution support. With our current compiler technology, we show that the compiler can use both partial and full predication to achieve speedup in large control-intensive programs. Some details of the code generation techniques are shown to provide insight into the benefit of going from partial to full predication. Preliminary experimental results are very encouraging: partial predication provides an average of 33% performance improvement for an 8-issue processor with no predicate support while full predication provides an additional 30% improvement.
Target text information: Dynamic Hammock Predication for Non-predicated Instruction Set Architectures: Conventional speculative architectures use branch prediction to evaluate the most likely execution path during program execution. However, certain branches are difficult to predict. One solution to this problem is to evaluate both paths following such a conditional branch. Predicated execution can be used to implement this form of multi-path execution. Predicated architectures fetch and issue instructions that have associated predicates. These predicates indicate if the instruction should commit its result. Predicating a branch reduces the number of branches executed, eliminating the chance of branch misprediction at the cost of executing additional instructions. In this paper, we propose a restricted form of multi-path execution called Dynamic Predication for architectures with little or no support for predicated instructions in their instruction set. Dynamic predication dynamically predicates instruction sequences in the form of a branch hammock, concurrently executing both paths of the branch. A branch hammock is a short forward branch that spans a few instructions in the form of an if-then or if-then-else construct. We mark these and other constructs in the executable. When the decode stage detects such a sequence, it passes a predicated instruction sequence to a dynamically scheduled execution core. Our results show that dynamic predication can accrue speedups of up to 13%.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
0
|
Rule Learning
|
cora
| 650
|
test
|
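Dynamic hammock predication is a microarchitectural mechanism, so it cannot be demonstrated directly in high-level code; the snippet below is only a software analogy of if-conversion, evaluating both arms of an if-then-else hammock eagerly and committing one result per element through a predicate (numpy's where plays the role of a conditional move). The data are made up.

```python
import numpy as np

x = np.array([-3.0, 1.5, -0.2, 4.0])
pred = x > 0                        # branch condition, kept as a predicate
then_val = np.sqrt(np.abs(x))       # both paths execute eagerly...
else_val = -x
result = np.where(pred, then_val, else_val)  # ...predicated commit, no branch
print(result)
```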
1-hop neighbor's text information: Machine Learning: A Multistrategy Approach, : Machine learning techniques are perceived to have a great potential as means for the acquisition of knowledge; nevertheless, their use in complex engineering domains is still rare. Most machine learning techniques have been studied in the context of knowledge acquisition for well defined tasks, such as classification. Learning for these tasks can be handled by relatively simple algorithms. Complex domains present difficulties that can be approached by combining the strengths of several complementing learning techniques, and overcoming their weaknesses by providing alternative learning strategies. This study presents two perspectives, the macro and the micro, for viewing the issue of multistrategy learning. The macro perspective deals with the decomposition of an overall complex learning task into relatively well-defined learning tasks, and the micro perspective deals with designing multistrategy learning techniques for supporting the acquisition of knowledge for each task. The two perspectives are discussed in the context of
1-hop neighbor's text information: Irrelevant features and the subset selection problem. : We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.
1-hop neighbor's text information: `Classification by pairwise coupling', : We discuss a strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. We study the nature of the class probability estimates that arise, and examine the performance of the procedure in real and simulated datasets. Classifiers used include linear discriminants, nearest neighbors, and the support vector machine.
Target text information: Using the n²-classifier in constructive induction: In this paper, we propose a multi-classification approach for constructive induction. The idea of an improvement of classification accuracy is based on iterative modification of the input data space. This process is independently repeated for each pair of the n classes. Finally, it gives (n² - n)/2 input data subspaces of attributes dedicated to optimal discrimination of the appropriate pairs of classes. We use genetic algorithms as a constructive induction engine. A final classification is obtained by a weighted majority voting rule, according to the n²-classifier approach. The computational experiment was performed on a medical data set. The obtained results point out the advantage of using a multi-classification model (the n²-classifier) in constructive induction in relation to the analogous single-classifier approach.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
4
|
Theory
|
cora
| 463
|
test
|
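The n²-classifier scheme in the record above is concrete enough to sketch: one binary classifier per pair of classes, with the final label chosen by majority vote. In the sketch below a nearest-centroid rule stands in for the paper's GA-driven, constructively induced pairwise classifiers, and the voting is unweighted; data and names are made up.

```python
import itertools
import numpy as np

def fit_pairwise(X, y):
    # One (centroid-pair) model per unordered pair of classes.
    models = {}
    for a, b in itertools.combinations(np.unique(y), 2):
        models[(a, b)] = (X[y == a].mean(axis=0), X[y == b].mean(axis=0))
    return models

def predict(models, x, classes):
    votes = {c: 0 for c in classes}
    for (a, b), (ca, cb) in models.items():
        winner = a if np.linalg.norm(x - ca) <= np.linalg.norm(x - cb) else b
        votes[winner] += 1
    return max(votes, key=votes.get)   # majority vote over the n(n-1)/2 duels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(20, 2)) for m in (0, 2, 4)])
y = np.repeat([0, 1, 2], 20)
print(predict(fit_pairwise(X, y), np.array([3.9, 4.1]), np.unique(y)))  # -> 2
```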
1-hop neighbor's text information: The GP-Music System: Interactive Genetic Programming for Music Composition, : Technical Report CSRP-98-13 Abstract In this paper we present the GP-Music System, an interactive system which allows users to evolve short musical sequences using interactive genetic programming, and its extensions aimed at making the system fully automated. The basic GP-system works by using a genetic programming algorithm, a small set of functions for creating musical sequences, and a user interface which allows the user to rate individual sequences. With this user interactive technique it was possible to generate pleasant tunes over runs of 20 individuals over 10 generations. As the user is the bottleneck in interactive systems, the system takes rating data from a user's run and uses it to train a neural network based automatic rater, or auto rater, which can replace the user in bigger runs. Using this auto rater we were able to make runs of up to 50 generations with 500 individuals per generation. The best-of-run pieces generated by the auto raters were pleasant but were not, in general, as nice as those generated in user interactive runs.
1-hop neighbor's text information: Entailment for specification refinement. : Specification refinement is part of formal program derivation, a method by which software is directly constructed from a provably correct specification. Because program derivation is an intensive manual exercise used for critical software systems, an automated approach would allow it to be viable for many other types of software systems. The goal of this research is to determine if genetic programming (GP) can be used to automate the specification refinement process. The initial steps toward this goal are to show that a well-known proof logic for program derivation can be encoded such that a GP-based system can infer sentences in the logic for proof of a particular sentence. The results are promising and indicate that GP can be useful in aiding program derivation.
1-hop neighbor's text information: "Evolving Control Structures with Automatically Defined Macros," :
Target text information: Induction and recapitulation of deep musical structure. : We describe recent extensions to our framework for the automatic generation of music-making programs. We have previously used genetic programming techniques to produce music-making programs that satisfy user-provided critical criteria. In this paper we describe new work on the use of connectionist techniques to automatically induce musical structure from a corpus. We show how the resulting neural networks can be used as critics that drive our genetic programming system. We argue that this framework can potentially support the induction and recapitulation of deep structural features of music. We present some initial results produced using neural and hybrid symbolic/neural critics, and we discuss directions for future work.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
3
|
Genetic Algorithms
|
cora
| 2,063
|
test
|
1-hop neighbor's text information: Prototype and feature selection by sampling and random mutation hill climbing algorithms. : With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term prototypes refers to the reference instances used in a nearest neighbor computation the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes.
1-hop neighbor's text information: A genetic prototype learner. : Supervised classification problems have received considerable attention from the machine learning community. We propose a novel genetic algorithm based prototype learning system, PLEASE, for this class of problems. Given a set of prototypes for each of the possible classes, the class of an input instance is determined by the prototype nearest to this instance. We assume ordinal attributes and prototypes are represented as sets of feature-value pairs. A genetic algorithm is used to evolve the number of prototypes per class and their positions on the input space as determined by corresponding feature-value pairs. Comparisons with C4.5 on a set of artificial problems of controlled complexity demonstrate the effectiveness of the proposed system.
1-hop neighbor's text information: "Induction of Decision Trees," :
Target text information: PLEASE: A prototype learning system using genetic algorithms. : Prototypes have been proposed as representation of concepts that are used effectively by humans. Developing computational schemes for generating prototypes from examples, however, has proved to be a difficult problem. We present a novel genetic algorithm based prototype learning system, PLEASE, for constructing appropriate prototypes from classified training instances. After constructing a set of prototypes for each of the possible classes, the class of a new input instance is determined by the nearest prototype to this instance. Attributes are assumed to be ordinal in nature and prototypes are represented as sets of feature-value pairs. A genetic algorithm is used to evolve the number of prototypes per class and their positions on the input space. We present experimental results on a series of artificial problems of varying complexity. PLEASE performs competitively with several nearest neighbor classification algorithms on the problem set. An analysis of the strengths and weaknesses of the initial version of our system motivates the need for additional operators. The inclusion of these operators substantially improves the performance of the system on particularly difficult problems.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
3
|
Genetic Algorithms
|
cora
| 1,845
|
val
|
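PLEASE itself evolves feature-value prototypes with a GA, but the first neighbour above studies the simpler random-mutation-hill-climbing route: search over which training instances to keep as 1-NN prototypes. That variant is short enough to sketch; everything below is a hedged illustration, and scoring on the training set (resubstitution) stands in for the held-out evaluation a real system would use.

```python
import numpy as np

def nn_accuracy(protos_X, protos_y, X, y):
    # 1-NN accuracy of (X, y) against the current prototype set.
    d = np.linalg.norm(X[:, None, :] - protos_X[None, :, :], axis=2)
    return float(np.mean(protos_y[d.argmin(axis=1)] == y))

def rmhc_select(X, y, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(len(X)) < 0.1          # start with a few prototypes
    best = nn_accuracy(X[mask], y[mask], X, y) if mask.any() else 0.0
    for _ in range(iters):
        i = rng.integers(len(X))
        mask[i] = ~mask[i]                   # flip one membership bit
        acc = nn_accuracy(X[mask], y[mask], X, y) if mask.any() else 0.0
        if acc >= best:
            best = acc                       # keep non-worsening flips
        else:
            mask[i] = ~mask[i]               # undo harmful flips
    return mask, best

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.4, size=(30, 2)) for m in (0, 3)])
y = np.repeat([0, 1], 30)
mask, acc = rmhc_select(X, y)
print(int(mask.sum()), "prototypes, accuracy", acc)
```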
1-hop neighbor's text information: "Adapting control strategies for situated autonomous agents." : This paper studies how to balance evolutionary design and human expertise in order to best design situated autonomous agents which can learn specific tasks. A genetic algorithm designs control circuits to learn simple behaviors, and given control strategies for simple behaviors, the genetic algorithm designs a combinational circuit that switches between these simple behaviors to perform a navigation task. Keywords: Genetic Algorithms, Computational Design, Autonomous Agents, Robotics.
1-hop neighbor's text information: Harvey (1993) Evolving Visually Guided Robots. : A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Target text information: Design strategies for evolutionary robotics. : This paper deals with the question of how to balance evolutionary design and human expertise in order to best design robots which can learn specific tasks. We study two behavioral tasks, approach and avoidance, and provide some preliminary results.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
3
|
Genetic Algorithms
|
cora
| 1,748
|
test
|
1-hop neighbor's text information: Semilinear predictability minimization produces well-known feature detectors. : Predictability minimization (PM; Schmidhuber, 1992) exhibits various intuitive and theoretical advantages over many other methods for unsupervised redundancy reduction. So far, however, there were only toy applications of PM. In this paper, we apply semilinear PM to static real world images and find: without a teacher and without any significant preprocessing, the system automatically learns to generate distributed representations based on well-known feature detectors, such as orientation sensitive edge detectors and off-center-on-surround-like structures, thus extracting simple features related to those considered useful for image pre-processing and compression.
1-hop neighbor's text information: The Role of Constraints in Hebbian Learning: Models of unsupervised correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamical effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a "graded" receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is "sharpened" to a subset of maximally-correlated inputs. If two equivalent input populations (e.g. two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated; whereas subtractive enforcement allows segregation under these circumstances. These results may be used to understand constraints both over output cells and over input cells. A variety of rules that can implement constrained dynamics are discussed.
1-hop neighbor's text information: Objective functions for neural map formation. : Institute for Neural Computation Technical Report Series, No. INC-9701, January 1997. University of California, San Diego. La Jolla, CA 92093. Abstract Computational models of neural map formation can be considered on at least three different levels of abstraction: detailed models including neural activity dynamics, weight dynamics which abstract from the the neural activity dynamics by an adiabatic approximation, and objective functions from which weight dynamics may be derived as gradient flows. In this paper we present an example of how an objective function can be derived from detailed non-linear neural dynamics. A systematic investigation reveals how different weight dynamics introduced previously can be derived from objective functions generated from a few prototypical terms. This includes dynamic link matching as a special case of neural map formation. We focus in particular on the role of coordinate transformations to derive different weight dynamics from the same objective function. Coordinate transformations are also important in deriving normalization rules from constraints. Several examples illustrate how objective functions can help in understanding, generating, and comparing different models of neural map formation. The techniques used in this analysis may also be useful in investigating other types of neural dynamics.
Target text information: "Analysis of Linsker\'s simulations of Hebbian rules," : Linsker has reported the development of structured receptive fields in simulations using a Hebb-type synaptic plasticity rule in a feed-forward linear network. The synapses develop under dynamics determined by a matrix that is closely related to the covariance matrix of input cell activities. We analyse the dynamics of the learning rule in terms of the eigenvectors of this matrix. These eigenvectors represent independently evolving weight structures. Some general theorems are presented regarding the properties of these eigenvectors and their eigenvalues. For a general covariance matrix four principal parameter regimes are predicted. We concentrate on the gaussian covariances at layer B ! C of Linsker's network. Analytic and numerical solutions for the eigenvectors at this layer are presented. Three eigenvectors dominate the dynamics: a DC eigenvector, in which all synapses have the same sign; a bi-lobed, oriented eigenvector; and a circularly symmetric, centre-surround eigenvector. Analysis of the circumstances in which each of these vectors dominates yields an explanation of the emergence of centre-surround structures and symmetry-breaking bi-lobed structures. Criteria are developed estimating the boundary of the parameter regime in which centre-surround structures emerge. The application of our analysis to Linsker's higher layers, at which the covariance functions were oscillatory, is briefly discussed.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 1,582
|
test
|
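The analysis in the record above reduces Hebbian development to the eigenstructure of a covariance-like matrix, and that much is easy to reproduce numerically. The sketch below eigen-decomposes a Gaussian covariance over a 1-D input array (the paper works with 2-D receptive fields) and counts sign changes in the leading modes; all parameters are arbitrary.

```python
import numpy as np

n, sigma = 41, 4.0
x = np.arange(n) - n // 2
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))  # Gaussian covariance

evals, evecs = np.linalg.eigh(C)
for k in np.argsort(evals)[::-1][:3]:       # three dominant modes
    v = evecs[:, k]
    changes = int(np.sum(np.diff(np.sign(v)) != 0))
    print(f"eigenvalue {evals[k]:8.3f}, sign changes: {changes}")
# Expected shapes: 0 sign changes (the all-same-sign "DC" mode), then 1
# (bi-lobed / oriented), then 2 (centre-surround), mirroring the abstract.
```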
1-hop neighbor's text information: Efficient reinforcement learning through symbiotic evolution. : This article presents a new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and discourages convergence to suboptimal solutions. In the inverted pendulum problem, SANE formed effective networks 9 to 16 times faster than the Adaptive Heuristic Critic and 2 times faster than Q-learning and the GENITOR neuro-evolution approach without loss of generalization. Such efficient learning, combined with few domain assumptions, make SANE a promising approach to a broad range of reinforcement learning problems, including many real-world applications.
1-hop neighbor's text information: 2-D Pole Balancing with Recurrent Evolutionary Networks: The success of evolutionary methods on standard control learning tasks has created a need for new benchmarks. The classic pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems. In this paper we present a more difficult version of the classic problem, in which the cart and pole can move in a plane. We demonstrate a neuroevolution system (Enforced Sub-Populations, or ESP) that can solve this difficult problem without velocity information.
1-hop neighbor's text information: Cliff (1993). "Issues in evolutionary robotics," From Animals to Animats 2 (Ed. : A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993.
Target text information: Evolving obstacle avoidance behavior in a robot arm. : Existing approaches for learning to control a robot arm rely on supervised methods where correct behavior is explicitly given. It is difficult to learn to avoid obstacles using such methods, however, because examples of obstacle avoidance behavior are hard to generate. This paper presents an alternative approach that evolves neural network controllers through genetic algorithms. No input/output examples are necessary, since neuro-evolution learns from a single performance measurement over the entire task of grasping an object. The approach is tested in a simulation of the OSCAR-6 robot arm which receives both visual and sensory input. Neural networks evolved to effectively avoid obstacles at various locations to reach random target locations.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
5
|
Reinforcement Learning
|
cora
| 1,272
|
test
|
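The record above evolves neural-network controllers from a single scalar episode score. A generic neuroevolution loop of that shape is sketched below: a GA over the weight vector of a tiny fixed-topology network, with a stand-in fitness function. Nothing here corresponds to SANE or the OSCAR-6 setup; a real task would simulate arm kinematics, sensors, and obstacles.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 4

def forward(w, x):
    # 2 inputs -> HIDDEN tanh units -> 1 tanh output, weights from a flat vector.
    W1 = w[:2 * HIDDEN].reshape(2, HIDDEN)
    W2 = w[2 * HIDDEN:].reshape(HIDDEN, 1)
    return np.tanh(np.tanh(x @ W1) @ W2)

def fitness(w):
    # Hypothetical episode score; noisy because inputs are resampled per call.
    xs = rng.normal(size=(64, 2))
    target = np.tanh(xs[:, :1] - xs[:, 1:])     # made-up desired control signal
    return -float(np.mean((forward(w, xs) - target) ** 2))

dim = 2 * HIDDEN + HIDDEN
pop = rng.normal(size=(30, dim))
for gen in range(50):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]       # keep the 10 best genomes
    pop = elite[rng.integers(10, size=30)] + 0.1 * rng.normal(size=(30, dim))
print("best fitness:", scores.max())
```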
1-hop neighbor's text information: Non-axiomatic reasoning system (version 2.2). : NARS uses a new form of term logic, or an extended syllogism, in which several types of uncertainties can be represented and processed, and in which deduction, induction, abduction, and revision are carried out in a unified format. The system works in an asynchronously parallel way. The memory of the system is dynamically organized, and can also be interpreted as a network.
1-hop neighbor's text information: From inheritance relation to non-axiomatic logic. : At the beginning of the paper, three binary term logics are defined. The first is based only on an inheritance relation. The second and the third suggest a novel way to process extension and intension, and they also have interesting relations with Aristotle's syllogistic logic. Based on the three simple systems, a Non-Axiomatic Logic is defined. It has a term-oriented language and an experience-grounded semantics. It can uniformly represent and process randomness, fuzziness, and ignorance. It can also uniformly carry out deduction, abduction, induction, and revision.
1-hop neighbor's text information: A unified treatment of uncertainties. : "Uncertainty in artificial intelligence" is an active research field, where several approaches have been suggested and studied for dealing with various types of uncertainty. However, it's hard to rank the approaches in general, because each of them is usually aimed at a special application environment. This paper begins by defining such an environment, then shows why some existing approaches cannot be used in such a situation. Then a new approach, Non-Axiomatic Reasoning System, is introduced to work in the environment. The system is designed under the assumption that the system's knowledge and resources are usually insufficient to handle the tasks imposed by its environment. The system can consistently represent several types of uncertainty, and can carry out multiple operations on these uncertainties. Finally, the new approach is compared with the previous approaches in terms of uncertainty representation and interpretation.
Target text information: A defect in Dempster-Shafer theory. : By analyzing the relationships among chance, weight of evidence and degree of belief, it is shown that the assertion "chances are special cases of belief functions" and the assertion "Dempster's rule can be used to combine belief functions based on distinct bodies of evidence" together lead to an inconsistency in Dempster-Shafer theory. To solve this problem, some fundamental postulates of the theory must be rejected. A new approach for uncertainty management is introduced, which shares many intuitive ideas with D-S theory, while avoiding this problem.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 2,462
|
val
|
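For reference when reading the record above, Dempster's rule of combination, the operation whose use across distinct bodies of evidence the target paper critiques, is the standard formula m(A) proportional to the sum over B, C with B intersect C = A of m1(B)*m2(C), renormalised by the conflict mass. A direct sketch with made-up mass assignments:

```python
from itertools import product

def dempster_combine(m1, m2):
    # Mass functions as dicts: frozenset(focal element) -> mass.
    combined, conflict = {}, 0.0
    for (B, w1), (C, w2) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {A: w / (1.0 - conflict) for A, w in combined.items()}

m1 = {frozenset("a"): 0.6, frozenset("abc"): 0.4}   # frame {a, b, c}
m2 = {frozenset("ab"): 0.7, frozenset("c"): 0.3}
print(dempster_combine(m1, m2))
```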
1-hop neighbor's text information: Systematic Evaluation of Design Decisions in CBR Systems: Two important goals in the evaluation of an AI theory or model are to assess the merit of the design decisions in the performance of an implemented computer system and to analyze the impact in the performance when the system faces problem domains with different characteristics. This is particularly difficult in case-based reasoning systems because such systems are typically very complex, as are the tasks and domains in which they operate. We present a methodology for the evaluation of case-based reasoning systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. This methodology enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave in response to changing domain and problem characteristics. A case study of a multistrategy case-based and reinforcement learning system which performs autonomous robotic navigation is presented as an example.
1-hop neighbor's text information: "Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation," : This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system's environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line learning, resulting in an improved library of cases that capture environmental regularities necessary to perform on-line adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.
1-hop neighbor's text information: Knowledge Compilation and Speedup Learning in Continuous Task Domains: Many techniques for speedup learning and knowledge compilation focus on the learning and optimization of macro-operators or control rules in task domains that can be characterized using a problem-space search paradigm. However, such a characterization does not fit well the class of task domains in which the problem solver is required to perform in a continuous manner. For example, in many robotic domains, the problem solver is required to monitor real-valued perceptual inputs and vary its motor control parameters in a continuous, on-line manner to successfully accomplish its task. In such domains, discrete symbolic states and operators are difficult to define. To improve its performance in continuous problem domains, a problem solver must learn, modify, and use continuous operators that continuously map input sensory information to appropriate control outputs. Additionally, the problem solver must learn the contexts in which those continuous operators are applicable. We propose a learning method that can compile sensorimotor experiences into continuous operators, which can then be used to improve performance of the problem solver. The method speeds up the task performance as well as results in improvements in the quality of the resulting solutions. The method is implemented in a robotic navigation system, which is evaluated through extensive experimentation.
Target text information: Continuous case-based reasoning. : Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as continuous sensori-motor interaction with the environment, and continuous adaptation and learning during the performance task. We introduce a new method for continuous case-based reasoning, and discuss how it can be applied to the dynamic selection, modification, and acquisition of robot behaviors in autonomous navigation systems. We conclude with a general discussion of case-based reasoning issues addressed by this work.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
2
|
Case Based
|
cora
| 1,360
|
test
|
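The continuous CBR record above hinges on cases whose problem side is a continuously-valued sensed situation and whose solution side is a set of behaviour parameters. The retrieval step alone is sketched below with made-up features and parameters; the adaptation and case-learning parts of the approach are omitted.

```python
import numpy as np

cases = [
    # (situation: [obstacle_density, goal_distance], behaviour parameters)
    (np.array([0.9, 0.2]), {"speed": 0.2, "turn_gain": 1.5}),
    (np.array([0.1, 0.8]), {"speed": 0.9, "turn_gain": 0.3}),
    (np.array([0.5, 0.5]), {"speed": 0.5, "turn_gain": 0.8}),
]

def retrieve(situation):
    # Nearest case in the continuous situation space wins.
    dists = [np.linalg.norm(situation - feat) for feat, _ in cases]
    return cases[int(np.argmin(dists))][1]

print(retrieve(np.array([0.8, 0.3])))   # cluttered scene -> slow, turn hard
```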
1-hop neighbor's text information: Analysis of the Numerical Effects of Parallelism on a Parallel Genetic Algorithm: This paper examines the effects of relaxed synchronization on both the numerical and parallel efficiency of parallel genetic algorithms (GAs). We describe a coarse-grain geographically structured parallel genetic algorithm. Our experiments provide preliminary evidence that asynchronous versions of these algorithms have a lower run time than synchronous GAs. Our analysis shows that this improvement is due to (1) decreased synchronization costs and (2) high numerical efficiency (e.g. fewer function evaluations) for the asynchronous GAs. This analysis includes a critique of the utility of traditional parallel performance measures for parallel GAs.
1-hop neighbor's text information: Genetic Algorithms as Multi-Coordinators in Large-Scale Optimization: We present high-level, decomposition-based algorithms for large-scale block-angular optimization problems containing integer variables, and demonstrate their effectiveness in the solution of large-scale graph partitioning problems. These algorithms combine the subproblem-coordination paradigm (and lower bounds) of price-directive decomposition methods with knapsack and genetic approaches to the utilization of "building blocks" of partial solutions. Even for graph partitioning problems requiring billions of variables in a standard 0-1 formulation, this approach produces high-quality solutions (as measured by deviations from an easily computed lower bound), and substantially outperforms widely-used graph partitioning techniques based on heuristics and spectral methods.
1-hop neighbor's text information: "The Role of Development in Genetic Algorithms." : Technical Report Number CS94-394 Computer Science and Engineering, U.C.S.D. Abstract The developmental mechanisms transforming genotypic to phenotypic forms are typically omitted in formulations of genetic algorithms (GAs) in which these two representational spaces are identical. We argue that a careful analysis of developmental mechanisms is useful when understanding the success of several standard GA techniques, and can clarify the relationships between more recently proposed enhancements. We provide a framework which distinguishes between two developmental mechanisms | learning and maturation | while also showing several common effects on GA search. This framework is used to analyze how maturation and local search can change the dynamics of the GA. We observe that in some contexts, maturation and local search can be incorporated into the fitness evaluation, but illustrate reasons for considering them seperately. Further, we identify contexts in which maturation and local search can be distinguished from the fitness evaluation.
Target text information: Adaptive global optimization with local search. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
3
|
Genetic Algorithms
|
cora
| 1,174
|
test
|
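The record above pairs a GA with local search. A minimal memetic-style sketch of that hybrid on a toy continuous objective is below; a quadratic stands in for the graph-partitioning and design objectives of the cited papers, and all settings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=-1)          # toy objective to minimise

def local_search(x, step=0.05, iters=20):
    # Cheap stochastic hill climber: the "local" half of the hybrid.
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)
        if f(cand) < f(x):
            x = cand
    return x

pop = rng.uniform(-5, 5, size=(20, 3))
for gen in range(30):
    pop = np.array([local_search(x) for x in pop])       # refine offspring
    parents = pop[np.argsort(f(pop))[:10]]               # truncation selection
    i, j = rng.integers(10, size=(2, 20))
    alpha = rng.random((20, 1))
    pop = alpha * parents[i] + (1 - alpha) * parents[j]  # blend crossover
    pop += 0.1 * rng.normal(size=pop.shape)              # mutation
print("best objective:", float(f(pop).min()))
```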
1-hop neighbor's text information: Regression with gaussian processes. : The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems.
1-hop neighbor's text information: Gaussian Regression and Optimal Finite Dimensional Linear Models:
Target text information: Rohwer (1996). Bayesian regression filters and the issue of priors. : We propose a Bayesian framework for regression problems, which covers areas which are usually dealt with by function approximation. An online learning algorithm is derived which solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting. In the infinite dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some of the commonly held beliefs are misleading. The practical implementation is summarised. Simulations using 13 popular publicly available data sets are used to demonstrate the method and highlight important issues concerning the choice of priors.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 1,290
|
val
|
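The record above solves online regression with a Kalman filter. With the weight vector as a static state, the filter reduces to the standard conjugate-Gaussian update of Bayesian linear regression; the sketch below shows that generic recursion, not the paper's specific filter or prior choices, and the basis and noise settings are made up.

```python
import numpy as np

def kalman_regression(Phi, y, noise_var=0.01, prior_var=10.0):
    d = Phi.shape[1]
    w = np.zeros(d)               # prior mean of the weights
    P = prior_var * np.eye(d)     # prior covariance
    for phi, t in zip(Phi, y):
        s = phi @ P @ phi + noise_var     # predictive variance of t
        k = P @ phi / s                   # Kalman gain
        w = w + k * (t - phi @ w)         # posterior mean update
        P = P - np.outer(k, phi) @ P      # posterior covariance update
    return w, P

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
Phi = np.hstack([np.ones_like(X), X, X ** 2])    # fixed polynomial basis
y = (1.0 - 2.0 * X + 0.5 * X ** 2).ravel() + 0.1 * rng.normal(size=200)
w, _ = kalman_regression(Phi, y)
print(w)   # approaches [1.0, -2.0, 0.5]
```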
1-hop neighbor's text information: Using the n²-classifier in constructive induction: In this paper, we propose a multi-classification approach for constructive induction. The idea of an improvement of classification accuracy is based on iterative modification of the input data space. This process is independently repeated for each pair of the n classes. Finally, it gives (n² - n)/2 input data subspaces of attributes dedicated to optimal discrimination of the appropriate pairs of classes. We use genetic algorithms as a constructive induction engine. A final classification is obtained by a weighted majority voting rule, according to the n²-classifier approach. The computational experiment was performed on a medical data set. The obtained results point out the advantage of using a multi-classification model (the n²-classifier) in constructive induction in relation to the analogous single-classifier approach.
Target text information: `Classification by pairwise coupling', : We discuss a strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. We study the nature of the class probability estimates that arise, and examine the performance of the procedure in real and simulated datasets. Classifiers used include linear discriminants, nearest neighbors, and the support vector machine.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
4
|
Theory
|
cora
| 1,395
|
test
|
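The coupling step in the record above admits a compact sketch. Given pairwise estimates r_ij approximating P(class i | class i or j), class probabilities fitting the Bradley-Terry-style model can be found by iterative scaling. The version below updates all classes simultaneously and assumes equal weight for every pair, a simplification of the scheme the abstract describes.

```python
import numpy as np

def couple(R, iters=200, tol=1e-10):
    # Find p with mu[i, j] = p[i] / (p[i] + p[j]) matching R as closely as
    # possible; R[i, j] estimates P(class i | i or j).
    K = R.shape[0]
    p = np.full(K, 1.0 / K)
    for _ in range(iters):
        mu = p[:, None] / (p[:, None] + p[None, :] + 1e-300)
        num = R.sum(axis=1) - np.diag(R)      # sum over j != i of r_ij
        den = mu.sum(axis=1) - np.diag(mu)    # sum over j != i of mu_ij
        p_new = p * num / den                 # iterative scaling step
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Pairwise table consistent with p = (0.6, 0.3, 0.1):
R = np.array([[0.5,   0.667, 0.857],
              [0.333, 0.5,   0.75 ],
              [0.143, 0.25,  0.5  ]])
print(couple(R))   # recovers approximately [0.6, 0.3, 0.1]
```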
1-hop neighbor's text information: Nonlinear wavelet shrinkage with Bayes rules and Bayes factors. : Wavelet shrinkage, the method proposed by the seminal work of Donoho and Johnstone, is a disarmingly simple and efficient way of de-noising data. Shrinking wavelet coefficients has been proposed under several optimality criteria. The most notable are the asymptotic minimax and cross-validation criteria. In this paper a wavelet shrinkage method obtained by imposing natural properties of Bayesian models on the data is proposed. The performance of the methods is tested on standard Donoho-Johnstone test functions. Key Words and Phrases: Wavelets, Discrete Wavelet Transform, Thresholding, Bayes Model. 1991 AMS Subject Classification: 42A06, 62G07.
1-hop neighbor's text information: Minimax Bayes, asymptotic minimax and sparse wavelet priors. In Statistical Decision Theory and Related Topics, V, : Pinsker (1980) gave a precise asymptotic evaluation of the minimax mean squared error of estimation of a signal in Gaussian noise when the signal is known a priori to lie in a compact ellipsoid in Hilbert space. This `Minimax Bayes' method can be applied to a variety of global non-parametric estimation settings with parameter spaces far from ellipsoidal. For example it leads to a theory of exact asymptotic minimax estimation over norm balls in Besov and Triebel spaces using simple co-ordinatewise estimators and wavelet bases. This paper outlines some features of the method common to several applications. In particular, we derive new results on the exact asymptotic minimax risk over weak ℓ_p balls in R^n as n → ∞, and also for a class of `local' estimators on the Triebel scale. By its very nature, the method reveals the structure of asymptotically least favorable distributions. Thus we may simulate `least favorable' sample paths. We illustrate this for estimation of a signal in Gaussian white noise over norm balls in certain Besov spaces. In wavelet bases, when p < 2, the least favorable priors are sparse, and the resulting sample paths strikingly different from those observed in Pinsker's ellipsoidal setting (p = 2). Acknowledgements. I am grateful for many conversations with David Donoho and Carl Taswell, and to a referee for helpful comments. This work was supported in part by NSF grants DMS 84-51750, 9209130, and NIH PHS grant GM21215-12.
1-hop neighbor's text information: I.M.: Adapting to unknown smoothness via wavelet shrinkage. : We attempt to recover a function of unknown smoothness from noisy, sampled data. We introduce a procedure, SureShrink, which suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein Unbiased Estimate of Risk (Sure) for threshold estimates. The computational effort of the overall procedure is order N log(N) as a function of the sample size N. SureShrink is smoothness-adaptive: if the unknown function contains jumps, the reconstruction (essentially) does also; if the unknown function has a smooth piece, the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness-adaptive: it is near-minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoothing methods (kernels, splines, and orthogonal series estimates), even with optimal choices of the smoothing parameter, would be unable to perform in a near-minimax way over many spaces in the Besov scale. Acknowledgements. The first author was supported at U.C. Berkeley by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from the AT&T Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12, and by a grant from the AT&T Foundation.
Target text information: Wavelet Thresholding via a Bayesian Approach. : We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in non-parametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a relation between the hyperparameters of the prior model and the parameters of those Besov spaces within which realizations from the prior will fall. Such a relation gives insight into the meaning of the Besov space parameters. Moreover, the established relation makes it possible in principle to incorporate prior knowledge about the function's regularity properties into the prior model for its wavelet coefficients. However, prior knowledge about a function's regularity properties might be hard to elicit; with this in mind, we propose a standard choice of prior hyperparameters that works well in our examples. Several simulated examples are used to illustrate our method, and comparisons are made with other thresholding methods. We also present an application to a data set collected in an anaesthesiological study.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 1,792
|
val
|
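The target paper above derives its threshold as a posterior median under a sparsity-inducing prior; reproducing that rule would need the full mixture-prior machinery, so the sketch below shows only the generic wavelet-thresholding pipeline it plugs into, with the classical universal threshold standing in for the Bayes rule. It relies on the third-party PyWavelets package (pywt); the signal and settings are made up.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(0)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * t) + (t > 0.5)      # smooth piece plus a jump
noisy = signal + 0.2 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise scale via MAD
thr = sigma * np.sqrt(2 * np.log(n))            # universal threshold
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, "db4")[:n]
print("RMSE:", float(np.sqrt(np.mean((denoised - signal) ** 2))))
```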
1-hop neighbor's text information: Bayes factors and model uncertainty. : Technical Report no. 255 Department of Statistics, University of Washington August 1993; Revised March 1994
1-hop neighbor's text information: Inference in model-based cluster analysis. : Technical Report no. 285 Department of Statistics University of Washington. March 10, 1995
Target text information: "Estimating Bayes factors via posterior simulation with the Laplace-Metropolis estimator," : The key quantity needed for Bayesian hypothesis testing and model selection is the marginal likelihood for a model, also known as the integrated likelihood, or the marginal probability of the data. In this paper we describe a way to use posterior simulation output to estimate marginal likelihoods. We describe the basic Laplace-Metropolis estimator for models without random effects. For models with random effects the compound Laplace-Metropolis estimator is introduced. This estimator is applied to data from the World Fertility Survey and shown to give accurate results. Batching of simulation output is used to assess the uncertainty involved in using the compound Laplace-Metropolis estimator. The method allows us to test for the effects of independent variables in a random effects model, and also to test for the presence of the random effects.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 63
|
test
|
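The estimator in the record above has a one-line core: plug a location and scale estimated from the posterior draws into the Laplace approximation of the marginal likelihood. A hedged sketch is below; log_post is an assumed user-supplied unnormalised log posterior (likelihood times prior, hypothetical here), and the approximation presumes a roughly Gaussian, unimodal posterior.

```python
import numpy as np

def laplace_metropolis(samples, log_post):
    """Basic Laplace-Metropolis estimate of log m(y) from MCMC output.

    samples  : (n, d) array of posterior draws.
    log_post : function theta -> log{ p(y | theta) p(theta) }.

    log m(y) ~= (d/2) log(2*pi) + (1/2) log|S| + max_theta log_post(theta),
    with S the sample covariance of the draws and the maximum taken over
    the sampled points.
    """
    n, d = samples.shape
    S = np.atleast_2d(np.cov(samples, rowvar=False))
    _, logdet = np.linalg.slogdet(S)
    best = max(log_post(theta) for theta in samples)
    return 0.5 * d * np.log(2 * np.pi) + 0.5 * logdet + best
```

Differences in this quantity between two models give the log Bayes factor the paper uses for testing, under the stated Gaussian-posterior assumption.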
1-hop neighbor's text information: Oblivious decision trees and abstract cases. : In this paper, we address the problem of case-based learning in the presence of irrelevant features. We review previous work on attribute selection and present a new algorithm, Oblivion, that carries out greedy pruning of oblivious decision trees, which effectively store a set of abstract cases in memory. We hypothesize that this approach will efficiently identify relevant features even when they interact, as in parity concepts. We report experimental results on artificial domains that support this hypothesis, and experiments with natural domains that show improvement in some cases but not others. In closing, we discuss the implications of our experiments, consider additional work on irrelevant features, and outline some directions for future research.
1-hop neighbor's text information: Towards a better understanding of memory-based and bayesian classifiers. : We quantify both experimentally and analytically the performance of memory-based reasoning (MBR) algorithms. To start gaining insight into the capabilities of MBR algorithms, we compare an MBR algorithm using a value difference metric to a popular Bayesian classifier. These two approaches are similar in that they both make certain independence assumptions about the data. However, whereas MBR uses specific cases to perform classification, Bayesian methods summarize the data probabilistically. We demonstrate that a particular MBR system called Pebls works comparatively well on a wide range of domains using both real and artificial data. With respect to the artificial data, we consider distributions where the concept classes are separated by functional discriminants, as well as time-series data generated by Markov models of varying complexity. Finally, we show formally that Pebls can learn (in the limit) natural concept classes that the Bayesian classifier cannot learn, and that it will attain perfect accuracy whenever
1-hop neighbor's text information: Inductive bias in case-based reasoning systems. : In order to learn more about the behaviour of case-based reasoners as learning systems, we formalise a simple case-based learner as a PAC learning algorithm, using the case-based representation ⟨CB, σ⟩. We first consider a `naive' case-based learning algorithm CB1(σ_H) which learns by collecting all available cases into the case-base and which calculates similarity by counting the number of features on which two problem descriptions agree. We present results concerning the consistency of this learning algorithm and give some partial results regarding its sample complexity. We are able to characterise CB1(σ_H) as a `weak but general' learning algorithm. We then consider how the sample complexity of case-based learning can be reduced for specific classes of target concept by the application of inductive bias, or prior knowledge of the class of target concepts. Following recent work demonstrating how case-based learning can be improved by choosing a similarity measure appropriate to the concept being learnt, we define a second case-based learning `algorithm' CB2 which learns using the best possible similarity measure that might be inferred for the chosen target concept. While CB2 is not an executable learning strategy (since the chosen similarity measure is defined in terms of a priori knowledge of the actual target concept) it allows us to assess in the limit the maximum possible contribution of this approach to case-based learning. Also, in addition to illustrating the role of inductive bias, the definition of CB2 simplifies the general problem of establishing which functions might be represented in the form ⟨CB, σ⟩. Reasoning about the case-based representation in this special case has therefore been a little more straightforward than in the general case of CB1(σ_H), allowing more substantial results regarding representable functions and sample complexity to be presented for CB2. In assessing these results, we are forced to conclude that case-based learning is not the best approach to learning the chosen concept space (the space of monomial functions). We discuss, however, how our study has demonstrated, in the context of case-based learning, the operation of concepts well known in machine learning such as inductive bias and the trade-off between computational complexity and sample complexity.
Target text information: Average-case analysis of a nearest neighbour algorithm. : In this paper we present an average-case analysis of the nearest neighbor algorithm, a simple induction method that has been studied by many researchers. Our analysis assumes a conjunctive target concept, noise-free Boolean attributes, and a uniform distribution over the instance space. We calculate the probability that the algorithm will encounter a test instance that is distance d from the prototype of the concept, along with the probability that the nearest stored training case is distance e from this test instance. From this we compute the probability of correct classification as a function of the number of observed training cases, the number of relevant attributes, and the number of irrelevant attributes. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning.
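A small simulation in the spirit of this analysis (conjunctive target concept, noise-free Boolean attributes, uniform instance distribution) might look as follows; the counts of relevant and irrelevant attributes and the sample sizes are arbitrary illustrative choices:

    import numpy as np

    def nearest_neighbour_predict(train_x, train_y, x):
        # Classify by the label of the training case at minimum Hamming distance.
        dists = (train_x != x).sum(axis=1)
        return train_y[np.argmin(dists)]

    rng = np.random.default_rng(0)
    r, i, n_train, n_test = 3, 5, 50, 1000  # relevant bits, irrelevant bits, sizes
    X = rng.integers(0, 2, size=(n_train + n_test, r + i))
    y = X[:, :r].all(axis=1).astype(int)    # conjunction of the first r attributes
    acc = np.mean([nearest_neighbour_predict(X[:n_train], y[:n_train], x) == t
                   for x, t in zip(X[n_train:], y[n_train:])])
    print(f"empirical 1-NN accuracy: {acc:.3f}")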
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
2
|
Case Based
|
cora
| 1,762
|
test
|
1-hop neighbor's text information: (1993) A NN Algorithm for Hard Satisfiability problems, : Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a neural network algorithm (NNSAT) with GSAT [4], a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are difficult for traditional satisfiability algorithms. Results suggest that NNSAT scales better as the number of variables increase, solving at least as many hard SAT problems.
1-hop neighbor's text information: Optimal mutation rates in genetic search. : The optimization of a single bit string by means of iterated mutation and selection of the best (a (1+1)-Genetic Algorithm) is discussed with respect to three simple fitness functions: The counting ones problem, a standard binary encoded integer, and a Gray coded integer optimization problem. A mutation rate schedule that is optimal with respect to the success probability of mutation is presented for each of the objective functions, and it turns out that the standard binary code can hamper the search process even in case of unimodal objective functions. While normally a mutation rate of 1/l (where l denotes the bit string length) is recommendable, our results indicate that a variation of the mutation rate is useful in cases where the fitness function is a multimodal pseudo-boolean function, where multimodality may be caused by the objective function as well as the encoding mechanism.
1-hop neighbor's text information: "Using DNA to solve NP-Complete Problems", : A strategy for using Genetic Algorithms (GAs) to solve NP-complete problems is presented. The key aspect of the approach taken is to exploit the observation that, although all NP-complete problems are equally difficult in a general computational sense, some have much better GA representations than others, leading to much more successful use of GAs on some NP-complete problems than on others. Since any NP-complete problem can be mapped into any other one in polynomial time, the strategy described here consists of identifying a canonical NP-complete problem on which GAs work well, and solving other NP-complete problems indirectly by mapping them onto the canonical problem. Initial empirical results are presented which support the claim that the Boolean Satisfiability Problem (SAT) is a GA-effective canonical problem, and that other NP-complete problems with poor GA representations can be solved efficiently by mapping them first onto SAT problems.
Target text information: Simulated annealing for hard satisfiability problems. In Workshop Notes from the 1993 DIMACS Challenge. : Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a simulated annealing algorithm (SASAT) with GSAT (Selman et al., 1992), a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are extremely difficult for traditional satisfiability algorithms. Results suggest that SASAT scales up better as the number of variables increases, solving at least as many hard SAT problems with less effort. The paper then presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. Next, an improvement to the basic SASAT algorithm is examined, based on a random walk implemented in GSAT (Selman et al., 1993). Finally, we examine the performance of SASAT on a test suite of satisfiability problems produced by the 1993 DIMACS challenge.
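A generic simulated-annealing loop for SAT in this spirit is sketched below; the acceptance rule, the geometric cooling schedule, and all parameter values are illustrative assumptions rather than the paper's exact SASAT algorithm:

    import math
    import random

    def sat_count(clauses, assign):
        # Clauses are tuples of nonzero ints; literal v > 0 means variable v is
        # required true, v < 0 means negated.
        return sum(any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses)

    def anneal_sat(clauses, n_vars, T0=2.0, cooling=0.999, steps=50000, seed=0):
        random.seed(seed)
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        cur, T = sat_count(clauses, assign), T0
        for _ in range(steps):
            v = random.randint(1, n_vars)
            assign[v] = not assign[v]              # propose flipping one variable
            new = sat_count(clauses, assign)
            if new >= cur or random.random() < math.exp((new - cur) / T):
                cur = new                          # accept the flip
            else:
                assign[v] = not assign[v]          # reject: undo the flip
            T *= cooling                           # geometric cooling
            if cur == len(clauses):
                return assign                      # all clauses satisfied
        return None

    # e.g. anneal_sat([(1, -2), (-1, 2), (2, 3)], n_vars=3)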
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
3
|
Genetic Algorithms
|
cora
| 1,769
|
test
|
1-hop neighbor's text information: Learning conjunctions of Horn clauses. :
1-hop neighbor's text information: Weakly Learning DNF and Characterizing Statistical Query Learning Using Fourier Analysis, : We present new results, both positive and negative, on the well-studied problem of learning disjunctive normal form (DNF) expressions. We first prove that an algorithm due to Kushilevitz and Mansour [16] can be used to weakly learn DNF using membership queries in polynomial time, with respect to the uniform distribution on the inputs. This is the first positive result for learning unrestricted DNF expressions in polynomial time in any nontrivial formal model of learning. It provides a sharp contrast with the results of Kharitonov [15], who proved that AC^0 is not efficiently learnable in the same model (given certain plausible cryptographic assumptions). We also present efficient learning algorithms in various models for the read-k and SAT-k subclasses of DNF. For our negative results, we turn our attention to the recently introduced statistical query model of learning [11]. This model is a restricted version of the popular Probably Approximately Correct (PAC) model [23], and practically every class known to be efficiently learnable in the PAC model is in fact learnable in the statistical query model [11]. Here we give a general characterization of the complexity of statistical query learning in terms of the number of uncorrelated functions in the concept class. This is a distribution-dependent quantity yielding upper and lower bounds on the number of statistical queries required for learning on any input distribution. As a corollary, we obtain that DNF expressions and decision trees are not even weakly learnable with respect to the uniform input distribution in polynomial time in the statistical query model. This result is information-theoretic and therefore does not rely on any unproven assumptions. It demonstrates that no simple modification of the existing algorithms in the computational learning theory literature for learning various restricted forms of DNF and decision trees from passive random examples (and also several algorithms proposed in the experimental machine learning communities, such as the ID3 algorithm for decision trees [22] and its variants) will solve the general problem. The unifying tool for all of our results is the Fourier analysis of a finite class of boolean functions on the hypercube. (* This research is sponsored in part by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. Support also is sponsored by the National Science Foundation under Grant No. CC-9119319. Blum also supported in part by NSF National Young Investigator grant CCR-9357793. Views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of Wright Laboratory or the United States Government, or NSF.)
1-hop neighbor's text information: On learning visual concepts and DNF formulae. : We consider the problem of learning DNF formulae in the mistake-bound and the PAC models. We develop a new approach, which is called polynomial explainability, that is shown to be useful for learning some new subclasses of DNF (and CNF) formulae that were not known to be learnable before. Unlike previous learnability results for DNF (and CNF) formulae, these subclasses are not limited in the number of terms or in the number of variables per term; yet, they contain the subclasses of k-DNF and k-term-DNF (and the corresponding classes of CNF) as special cases. We apply our DNF results to the problem of learning visual concepts and obtain learning algorithms for several natural subclasses of visual concepts that appear to have no natural boolean counterpart. On the other hand, we show that learning some other natural subclasses of visual concepts is as hard as learning the class of all DNF formulae. We also consider the robustness of these results under various types of noise.
Target text information: On Learning Read-k-Satisfy-j DNF: We study the learnability of Read-k-Satisfy-j (RkSj) DNF formulas. These are boolean formulas in disjunctive normal form (DNF), in which the maximum number of occurrences of a variable is bounded by k, and the number of terms satisfied by any assignment is at most j. After motivating the investigation of this class of DNF formulas, we present an algorithm that for any unknown RkSj DNF formula to be learned, with high probability finds a logically equivalent DNF formula using the well-studied protocol of equivalence and membership queries. The algorithm runs in polynomial time for k · j = O(log n / log log n), where n is the number of input variables.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
4
|
Theory
|
cora
| 196
|
test
|
1-hop neighbor's text information: Genetic Algorithms and Very Fast Reannealing: A Comparison, : We compare Genetic Algorithms (GA) with a functional search method, Very Fast Simulated Reannealing (VFSR), that not only is efficient in its search strategy, but also is statistically guaranteed to find the function optima. GA previously has been demonstrated to be competitive with other standard Boltzmann-type simulated annealing techniques. Presenting a suite of six standard test functions to GA and VFSR codes from previous studies, without any additional fine tuning, strongly suggests that VFSR can be expected to be orders of magnitude more efficient than GA.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: On the Usage of Differential Evolution for Function Optimization, : assumed unless otherwise stated. Basically, DE generates new parameter vectors by adding the weighted difference between two population vectors to a third vector. If the resulting vector yields a lower objective function value than a predetermined population member, the newly generated vector replaces the vector, with which it was compared, in the next generation; otherwise, the old vector is retained. This basic principle, however, is extended when it comes to the practical variants of DE. For example an existing vector can be perturbed by adding more than one weighted difference vector to it. In most cases, it is also worthwhile to mix the parameters of the old vector with those of the perturbed one before comparing the objective function values. Several variants of DE which have proven to be useful will be described in the
Target text information: Differential Evolution - a Simple and Efficient Heuristic for Global Optimization over Continuous Spaces, Journal of Global Optimization, : A new heuristic approach for minimizing possibly nonlinear and non differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation.
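For reference, the classic DE/rand/1/bin variant usually associated with this method can be sketched as below; the population size, F, CR, and the clipping of trial vectors back into the bounds are conventional choices assumed here, not details taken from the paper:

    import numpy as np

    def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9,
                               gens=200, seed=0):
        # f: objective to minimise; bounds: sequence of (low, high) per dimension.
        rng = np.random.default_rng(seed)
        bounds = np.asarray(bounds, dtype=float)
        d = len(bounds)
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, d))
        fit = np.array([f(x) for x in pop])
        for _ in range(gens):
            for i in range(pop_size):
                r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                        size=3, replace=False)
                # Mutation: weighted difference of two vectors added to a third.
                v = pop[r1] + F * (pop[r2] - pop[r3])
                # Binomial crossover, forcing at least one mutant component.
                mask = rng.random(d) < CR
                mask[rng.integers(d)] = True
                trial = np.clip(np.where(mask, v, pop[i]),
                                bounds[:, 0], bounds[:, 1])
                f_trial = f(trial)
                if f_trial <= fit[i]:          # greedy one-to-one replacement
                    pop[i], fit[i] = trial, f_trial
        return pop[np.argmin(fit)], fit.min()

    # e.g. differential_evolution(lambda x: float((x ** 2).sum()), [(-5, 5)] * 3)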
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
3
|
Genetic Algorithms
|
cora
| 2,471
|
test
|
1-hop neighbor's text information: A Genetic Programming Approach to Strategy Optimization in the Extended Two-Dimensional Pursuer/Evader Problem, :
Target text information: A NEW METHODOLOGY FOR REDUCING BRITTLENESS IN GENETIC PROGRAMMING: optimized maneuvers for an extended two-dimensional pursuer/evader problem: programs were independently evolved using fixed and randomly-generated fitness cases. These programs were subsequently tested against a large, representative fixed population of pursuers to determine their relative effectiveness. This paper describes the implementation of both the original and modified systems, and summarizes the results of these tests.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
3
|
Genetic Algorithms
|
cora
| 543
|
test
|
1-hop neighbor's text information: "A Survey of Evolutionary Strategies," :
1-hop neighbor's text information: The coevolution of mutation rates. : In order to better understand life, it is helpful to look beyond the envelop of life as we know it. A simple model of coevolution was implemented with the addition of genes for longevity and mutation rate in the individuals. This made it possible for a lineage to evolve to be immortal. It also allowed the evolution of no mutation or extremely high mutation rates. The model shows that when the individuals interact in a sort of zero-sum game, the lineages maintain relatively high mutation rates. However, when individuals engage in interactions that have greater consequences for one individual in the interaction than the other, lineages tend to evolve relatively low mutation rates. This model suggests that different genes may have evolved different mutation rates as adaptations to the varying pressures of interactions with other genes.
1-hop neighbor's text information: Simulated annealing for hard satisfiability problems. In Workshop Notes from the 1993 DIMACS Challenge. : Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a simulated annealing algorithm (SASAT) with GSAT (Selman et al., 1992), a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are extremely difficult for traditional satisfiability algorithms. Results suggest that SASAT scales up better as the number of variables increases, solving at least as many hard SAT problems with less effort. The paper then presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. Next, an improvement to the basic SASAT algorithm is examined, based on a random walk implemented in GSAT (Selman et al., 1993). Finally, we examine the performance of SASAT on a test suite of satisfiability problems produced by the 1993 DIMACS challenge.
Target text information: Optimal mutation rates in genetic search. : The optimization of a single bit string by means of iterated mutation and selection of the best (a (1+1)-Genetic Algorithm) is discussed with respect to three simple fitness functions: The counting ones problem, a standard binary encoded integer, and a Gray coded integer optimization problem. A mutation rate schedule that is optimal with respect to the success probability of mutation is presented for each of the objective functions, and it turns out that the standard binary code can hamper the search process even in case of unimodal objective functions. While normally a mutation rate of 1/l (where l denotes the bit string length) is recommendable, our results indicate that a variation of the mutation rate is useful in cases where the fitness function is a multimodal pseudo-boolean function, where multimodality may be caused by the objective function as well as the encoding mechanism.
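A minimal (1+1)-GA of the kind analysed here, with the commonly recommended default mutation rate of 1/l, is sketched below; the counting-ones fitness in the usage line comes from the abstract, while the loop length and other details are assumed:

    import random

    def one_plus_one_ga(fitness, l, p_m=None, generations=10000, seed=0):
        # Mutate every bit independently with rate p_m (default 1/l) and keep
        # the offspring only if it is at least as fit as the parent.
        random.seed(seed)
        p_m = (1.0 / l) if p_m is None else p_m
        parent = [random.randint(0, 1) for _ in range(l)]
        f_parent = fitness(parent)
        for _ in range(generations):
            child = [b ^ (random.random() < p_m) for b in parent]
            f_child = fitness(child)
            if f_child >= f_parent:
                parent, f_parent = child, f_child
        return parent, f_parent

    best, f_best = one_plus_one_ga(sum, l=50)   # counting-ones problem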
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
3
|
Genetic Algorithms
|
cora
| 1,578
|
test
|
1-hop neighbor's text information: Packet Routing and Reinforcement Learning: Estimating Shortest Paths in Dynamic Graphs:
1-hop neighbor's text information: Parameterized Heuristics for Intelligent Adaptive Network Routing in Large Communication Networks: Parameterized heuristics offers an elegant and powerful theoretical framework for design and analysis of autonomous adaptive communication networks. Routing of messages in such networks presents a real-time instance of a multi-criterion optimization problem in a dynamic and uncertain environment. This paper describes a framework for heuristic routing in large networks. The effectiveness of the heuristic routing mechanism upon which Quo Vadis is based is described as part of a simulation study within a network with grid topology. A formal analysis of the underlying principles is presented through the incremental design of a set of heuristic decision functions that can be used to guide messages along a near-optimal (e.g., minimum delay) path in a large network. This paper carefully derives the properties of such heuristics under a set of simplifying assumptions about the network topology and load dynamics and identifies the conditions under which they are guaranteed to route messages along an optimal path. The paper concludes with a discussion of the relevance of the theoretical results presented in the paper to the design of intelligent autonomous adaptive communication networks and an outline of some directions of future research.
1-hop neighbor's text information: Predictive Q-routing: A memory-based reinforcement learning approach to adaptive traffic control. : In this paper, we propose a memory-based Q-learning algorithm called predictive Q-routing (PQ-routing) for adaptive traffic control. We attempt to address two problems encountered in Q-routing (Boyan & Littman, 1994), namely, the inability to fine-tune routing policies under low network load and the inability to learn new optimal policies under decreasing load conditions. Unlike other memory-based reinforcement learning algorithms in which memory is used to keep past experiences to increase learning speed, PQ-routing keeps the best experiences learned and reuses them by predicting the traffic trend. The effectiveness of PQ-routing has been verified under various network topologies and traffic conditions. Simulation results show that PQ-routing is superior to
Target text information: A distributed reinforcement learning scheme for network routing. : In this paper we describe a self-adjusting algorithm for packet routing in which a reinforcement learning method is embedded into each node of a network. Only local information is used at each node to keep accurate statistics on which routing policies lead to minimal routing times. In simple experiments involving a 36-node irregularly-connected network, this learning approach proves superior to routing based on precomputed shortest paths.
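The flavour of such a per-node update, in the style of the Q-routing scheme the neighbouring papers build on, is sketched below; the nested-dictionary layout of the table and the learning-rate name eta are illustrative assumptions:

    def q_routing_update(Q, x, y, d, queue_delay, trans_delay, neighbors, eta=0.5):
        # One update at node x after forwarding a packet bound for destination d
        # to neighbour y. Q[node][dest][next_hop] is the locally stored estimate
        # of remaining delivery time.
        best_from_y = 0.0 if y == d else min(Q[y][d][z] for z in neighbors[y])
        target = queue_delay + trans_delay + best_from_y
        Q[x][d][y] += eta * (target - Q[x][d][y])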
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
5
|
Reinforcement Learning
|
cora
| 1,521
|
test
|
1-hop neighbor's text information: Backfitting in smoothing spline ANOVA with application to historical global temperature data (thesis). : A computational scheme for fitting smoothing spline ANOVA models to large data sets with a (near) tensor product design is proposed. Such data sets are common in spatial-temporal analyses. The proposed scheme uses the backfitting algorithm to take advantage of the tensor product design to save both computational memory and time. Several ways to further speed up the backfitting algorithm, such as collapsing component functions and successive over-relaxation, are discussed. An iterative imputation procedure is used to handle the cases of near tensor product designs. An application to a global historical surface air temperature data set, which motivated this work, is used to illustrate the scheme proposed.
1-hop neighbor's text information: Adaptive tuning of numerical weather prediction models: Randomized GCV in three and four dimensional data assimilation. :
Target text information: Spatial-temporal analysis of temperature using smoothing spline ANOVA. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 2,338
|
test
|
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedfor-ward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response.
1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models
Target text information: Figure 1: The architecture of a Kohonen network. Each input neuron is fully connected with:
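Since only the architecture is named in this fragment, a minimal sketch of the standard Kohonen (self-organizing map) training step may help; the grid shape, the exponential decay schedules, and all parameter names are illustrative assumptions, not details from the cited text:

    import numpy as np

    def best_matching_unit(weights, x):
        # weights: (rows, cols, d) grid of codebook vectors; x: input of length d.
        d2 = ((weights - x) ** 2).sum(axis=-1)
        return np.unravel_index(np.argmin(d2), d2.shape)

    def som_step(weights, x, t, eta0=0.5, sigma0=2.0, tau=1000.0):
        # Every unit moves toward x, scaled by a Gaussian neighbourhood
        # centred on the best-matching unit (BMU).
        bmu = best_matching_unit(weights, x)
        eta = eta0 * np.exp(-t / tau)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / tau)    # shrinking neighbourhood radius
        rows, cols, _ = weights.shape
        gy, gx = np.mgrid[0:rows, 0:cols]
        d2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        weights += eta * h[..., None] * (x - weights)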
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 655
|
test
|
1-hop neighbor's text information: A Theory of Learning Classification Rules. :
1-hop neighbor's text information: Incremental induction of decision trees. : Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree.
1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990.
Target text information: Learning Classification Trees: Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. This paper outlines how a tree learning algorithm can be derived using Bayesian statistics. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule is similar to Quinlan's information gain, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4 (Quinlan et al., 1987) and Breiman et al.'s CART (Breiman et al., 1984) show the full Bayesian algorithm can produce more accurate predictions than versions of these other approaches, though pay a computational price. (Publication: This paper is a final draft submitted for publication to the Statistics and Computing journal; a version with some minor changes appeared in Volume 2, 1992, pages 63-73.)
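Because the splitting rule is described as similar to Quinlan's information gain, a short sketch of that quantity may be useful; the discrete-attribute data layout below is an illustrative assumption:

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, labels, attr):
        # Gain of splitting the examples on the value of attribute `attr`;
        # rows are dicts (or indexable records) of discrete attribute values.
        by_value = {}
        for row, y in zip(rows, labels):
            by_value.setdefault(row[attr], []).append(y)
        remainder = sum(len(ys) / len(labels) * entropy(ys)
                        for ys in by_value.values())
        return entropy(labels) - remainder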
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 819
|
val
|
1-hop neighbor's text information: Bayesian Case-Based Reasoning with Neural Networks. Pp. : Given a problem, a case-based reasoning (CBR) system will search its case memory and use the stored cases to find the solution, possibly modifying retrieved cases to adapt to the required input specifications. In this paper we introduce a neural network architecture for efficient case-based reasoning. We show how a rigorous Bayesian probability propagation algorithm can be implemented as a feedforward neural network and adapted for CBR. In our approach the efficient indexing problem of CBR is naturally implemented by the parallel architecture, and heuristic matching is replaced by a probability metric. This allows our CBR to perform theoretically sound Bayesian reasoning. We also show how the probability propagation actually offers a solution to the adaptation problem in a very natural way.
1-hop neighbor's text information: Learning Bayesian Prototype Trees by Simulated Annealing: Given a set of samples of an unknown probability distribution, we study the problem of constructing a good approximative Bayesian network model of the probability distribution in question. This task can be viewed as a search problem, where the goal is to find a maximal probability network model, given the data. In this work, we do not make an attempt to learn arbitrarily complex multi-connected Bayesian network structures, since such resulting models can be unsuitable for practical purposes due to the exponential amount of time required for the reasoning task. Instead, we restrict ourselves to a special class of simple tree-structured Bayesian networks called Bayesian prototype trees, for which a polynomial time algorithm for Bayesian reasoning exists. We show how the probability of a given Bayesian prototype tree model can be evaluated, given the data, and how this evaluation criterion can be used in a stochastic simulated annealing algorithm for searching the model space. The simulated annealing algorithm provably finds the maximal probability model, provided that a sufficient amount of time is used.
1-hop neighbor's text information: On estimation of a probability density function and mode. : To apply the algorithm for classification we assign each class a separate set of codebook Gaussians. Each set is only trained with patterns from a single class. After having trained the codebook Gaussians, each set provides an estimate of the probability function of one class; just as with Parzen window estimation, we take as the estimate of the pattern distribution the average of all Gaussians in the set. Classification of a pattern may now be done by calculating the probability of each class at the respective sample point, and assigning to the pattern the class with the highest probability. Hence the whole codebook plays a role in the classification of patterns. This is not the case with regular classification schemes using codebooks. We have tested the classification scheme on several classification tasks including the two spiral problem. We compared our algorithm to various other classification algorithms and it came out second; the best algorithm for the applications is the Parzen window estimation. However, the computing time and memory for Parzen window estimation are excessive when compared to our algorithm, and hence, in practical situations, our algorithm is to be preferred. We have developed a fast algorithm which combines attractive properties of both Parzen window estimation and vector quantization. The scale parameter is tuned adaptively and, therefore, is not set in an ad hoc manner. It allows a classification strategy in which all the codebook vectors are taken into account. This yields better results than the standard vector quantization techniques. An interesting topic for further research is to use radially non-symmetric Gaussians.
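The classification scheme this abstract describes (per-class density estimates formed by averaging stored Gaussians, with the highest-density class winning) can be illustrated as follows; the fixed isotropic kernel and bandwidth h are simplifying assumptions standing in for the adaptively tuned scale parameter of the paper:

    import numpy as np

    def class_density(x, class_samples, h=0.5):
        # Average of isotropic Gaussians centred on one class's stored vectors.
        d = class_samples.shape[1]
        sq = ((class_samples - x) ** 2).sum(axis=1)
        norm = (2.0 * np.pi * h * h) ** (d / 2.0)
        return np.mean(np.exp(-sq / (2.0 * h * h))) / norm

    def classify(x, samples_by_class, h=0.5):
        # Assign the class whose estimated density at x is highest.
        return max(samples_by_class,
                   key=lambda c: class_density(x, samples_by_class[c], h))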
Target text information: Learning in neural networks with Bayesian prototypes. : Given a set of samples of a probability distribution on a set of discrete random variables, we study the problem of constructing a good approximative neural network model of the underlying probability distribution. Our approach is based on an unsupervised learning scheme where the samples are first divided into separate clusters, and each cluster is then coded as a single vector. These Bayesian prototype vectors consist of conditional probabilities representing the attribute-value distribution inside the corresponding cluster. Using these prototype vectors, it is possible to model the underlying joint probability distribution as a simple Bayesian network (a tree), which can be realized as a feedforward neural network capable of probabilistic reasoning. In this framework, learning means choosing the size of the prototype set, partitioning the samples into the corresponding clusters, and constructing the cluster prototypes. We describe how the prototypes can be determined, given a partition of the samples, and present a method for evaluating the likelihood of the corresponding Bayesian tree. We also present a greedy heuristic for searching through the space of different partition schemes with different numbers of clusters, aiming at an optimal approximation of the probability distribution.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 1,559
|
test
|
1-hop neighbor's text information: Generalization in reinforcement learning: Successful examples using sparse coarse coding. : On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(λ) algorithm when λ = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
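The core temporal-difference idea, crediting the difference between temporally successive predictions rather than waiting for the final outcome, reduces to a one-line update; the tabular value function and parameter names below are assumed for illustration:

    def td0_update(V, s, r, s_next, alpha=0.1, gamma=1.0):
        # TD(0): move V(s) toward the successive prediction r + gamma * V(s').
        V[s] += alpha * (r + gamma * V[s_next] - V[s])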
Target text information: Modeling the Student with Reinforcement Learning: We describe a methodology for enabling an intelligent teaching system to make high level strategy decisions on the basis of low level student modeling information. This framework is less costly to construct, and superior to hand coding teaching strategies as it is more responsive to the learner's needs. In order to accomplish this, reinforcement learning is used to learn to associate superior teaching actions with certain states of the student's knowledge. Reinforcement learning (RL) has been shown to be flexible in handling noisy data, and does not need expert domain knowledge. A drawback of RL is that it often needs a significant number of trials for learning. We propose an off-line learning methodology using sample data, simulated students, and small amounts of expert knowledge to bypass this problem.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
5
|
Reinforcement Learning
|
cora
| 450
|
test
|
1-hop neighbor's text information: Sequential PAC Learning: We consider the use of "on-line" stopping rules to reduce the number of training examples needed to pac-learn. Rather than collect a large training sample that can be proved sufficient to eliminate all bad hypotheses a priori, the idea is instead to observe training examples one-at-a-time and decide "on-line" whether to stop and return a hypothesis, or continue training. The primary benefit of this approach is that we can detect when a hypothesizer has actually "converged," and halt training before the standard fixed-sample-size bounds. This paper presents a series of such sequential learning procedures for: distribution-free pac-learning, "mistake-bounded to pac" conversion, and distribution-specific pac-learning, respectively. We analyze the worst case expected training sample size of these procedures, and show that this is often smaller than existing fixed sample size bounds, while providing the exact same worst case pac-guarantees. We also provide lower bounds that show these reductions can at best involve constant (and possibly log) factors. However, empirical studies show that these sequential learning procedures actually use many times fewer training examples in practice.
1-hop neighbor's text information: An Incremental Interactive Algorithm for Regular Grammar Inference: We present interactive algorithms for learning regular grammars from positive examples and membership queries. A structurally complete set of strings from a language L(G) corresponding to an unknown regular grammar G implicitly specifies a lattice (or version space) which represents a space of candidate grammars containing the unknown grammar G. This lattice can be searched efficiently using membership queries to identify the unknown grammar G using an implicit representation of the version space in the form of two sets S and G that correspond (respectively) to the set of most specific and most general grammars consistent with the set of positive examples provided and the queries answered by the teacher at any given time. We present a provably correct incremental version of the algorithm in which a structurally complete set of positive samples is not necessarily available to the learner at the beginning of learning. The learner constructs a lattice of grammars based on the strings provided at the start and performs candidate elimination by posing safe membership queries. When additional examples become available, the learner incrementally updates the lattice and continues with candidate elimination. Eventually, when the set of positive samples provided by the teacher encompasses a structurally complete set for the unknown grammar, the algorithm terminates by identifying the unknown grammar G.
1-hop neighbor's text information: Boosting a Weak Learning Algorithm by Majority. : We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper "The strength of weak learnability", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances.
Target text information: The Design and Analysis of Efficient Learning Algorithms. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
5
|
Reinforcement Learning
|
cora
| 944
|
test
|
1-hop neighbor's text information: Information Filtering: Selection Mechanisms in Learning Systems. :
1-hop neighbor's text information: Case-based Acquisition of User Preferences for Solution Improvement in Ill-Structured Domains, : We have developed an approach to acquire complicated user optimization criteria and use them to guide
Target text information: Modeling Ill-Structured Optimization Tasks through Cases: CABINS is a framework of modeling an optimization task in ill-structured domains. In such domains, neither systems nor human experts possess the exact model for guiding optimization. And the user's model of optimality is subjective and situation-dependent. CABINS optimizes a solution through iterative revision using case-based reasoning. In CABINS, task structure analysis was adopted for creating an initial model of the optimization task. Generic vocabularies found in the analysis were specialized into case feature descriptions for application problems. Extensive experimentation on job shop scheduling problems has shown that CABINS can operationalize and improve the model through the accumulation of cases.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
2
|
Case Based
|
cora
| 282
|
test
|
1-hop neighbor's text information: "Adaptive source separation without prewhitening," : Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Close form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach.
1-hop neighbor's text information: Maximum likelihood source separation for discrete sources: This communication deals with the source separation problem which consists in the separation of a noisy mixture of independent sources without a priori knowledge of the mixture coefficients. In this paper, we consider the maximum likelihood (ML) approach for discrete source signals with known probability distributions. An important feature of the ML approach in Gaussian noise is that the covariance matrix of the additive noise can be treated as a parameter. Hence, it is not necessary to know or to model the spatial structure of the noise. Another striking feature offered in the case of discrete sources is that, under mild assumptions, it is possible to separate more sources than sensors. In this paper, we consider maximization of the likelihood via the Expectation-Maximization (EM) algorithm.
Target text information: Cardoso, "On the performance of source separation algorithms" In Proc. : Source separation consists in recovering a set of n independent signals from m ≥ n observed instantaneous mixtures of these signals, possibly corrupted by additive noise. Many source separation algorithms use second order information in a whitening operation which reduces the non trivial part of the separation to determining a unitary matrix. Most of them further show a kind of invariance property which can be exploited to predict some general results about their performance. Our first contribution is to exhibit a lower bound to the performance in terms of accuracy of the separation. This bound is independent of the algorithm and, in the i.i.d. case, of the distribution of the source signals. Second, we show that the performance of invariant algorithms depends on the mixing matrix and on the noise level in a specific way. A consequence is that at low noise levels, the performance does not depend on the mixture but only on the distribution of the sources, via a function which is characteristic of the given source separation algorithm.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 2,423
|
test
|
1-hop neighbor's text information: "The Predictability of Data Values", : Copyright 1997 IEEE. Published in the Proceedings of Micro-30, December 1-3, 1997 in Research Triangle Park, North Carolina. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions IEEE Service Center 445 Hoes Lane P.O. Box 1331 Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966.
Target text information: Data Value Prediction Methods and Performance:
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
0
|
Rule Learning
|
cora
| 700
|
test
|
1-hop neighbor's text information: Truth-from-Trash Learning and the Mobot: As natural resources become less abundant, we naturally become more interested in, and more adept at utilisation of waste materials. In doing this we are bringing to bear a ploy which is of key importance in learning | or so I argue in this paper. In the `Truth from Trash' model, learning is viewed as a process which uses environmental feedback to assemble fortuitous sensory predispositions (sensory `trash') into useful, information vehicles, i.e., `truthful' indicators of salient phenomena. The main aim will be to show how a computer implementation of the model has been used to enhance (through learning) the strategic abilities of a simulated, football playing mobot.
1-hop neighbor's text information: There is No Free Lunch but the Starter is Cheap: Generalisation from First Principles: According to Wolpert's no-free-lunch (NFL) theorems [1, 2], generalisation in the absence of domain knowledge is necessarily a zero-sum enterprise. Good generalisation performance in one situation is always offset by bad performance in another. Wolpert notes that the theorems do not demonstrate that effective generalisation is a logical impossibility but merely that a learner's bias (or assumption set) is of key importance
1-hop neighbor's text information: Statistical biases in backpropagation learning. : The paper investigates the statistical effects which may need to be exploited in supervised learning. It notes that these effects can be classified according to their conditionality and their order and proposes that learning algorithms will typically have some form of bias towards particular classes of effect. It presents the results of an empirical study of the statistical bias of backpropagation. The study involved applying the algorithm to a wide range of learning problems using a variety of different internal architectures. The results of the study revealed that backpropagation has a very specific bias in the general direction of statistical rather than relational effects. The paper shows how the existence of this bias effectively constitutes a weakness in the algorithm's ability to discount noise.
Target text information: Trading spaces: computation, representation and the limits of learning. : * Research on this paper was partly supported by a Senior Research Leave fellowship granted by the Joint Council (SERC/MRC/ESRC) Cognitive Science Human Computer Interaction Initiative to one of the authors (Clark). Thanks to the Initiative for that support. † The order of names is arbitrary.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 29
|
val
|
1-hop neighbor's text information: Dietterich (1991). Learning with Many Irrelevant Features. : In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Ω((1/ε)[2^p + p ln n]) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. The paper also presents a quasi-polynomial time algorithm, FOCUS, which implements MIN-FEATURES. Experimental studies are presented that compare FOCUS to the ID3 and FRINGE algorithms. These experiments show that, contrary to expectations, these algorithms do not implement good approximations of MIN-FEATURES. The coverage, sample complexity, and generalization performance of FOCUS is substantially better than either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. This suggests that, in practical applications, training data should be preprocessed to remove irrelevant features before being
1-hop neighbor's text information: "Induction of Decision Trees," :
Target text information: On learning more concepts, : The coverage of a learning algorithm is the number of concepts that can be learned by that algorithm from samples of a given size. This paper asks whether good learning algorithms can be designed by maximizing their coverage. The paper extends a previous upper bound on the coverage of any Boolean concept learning algorithm and describes two algorithms|Multi-Balls and Large-Ball|whose coverage approaches this upper bound. Experimental measurement of the coverage of the ID3 and FRINGE algorithms shows that their coverage is far below this bound. Further analysis of Large-Ball shows that although it learns many concepts, these do not seem to be very interesting concepts. Hence, coverage maximization alone does not appear to yield practically-useful learning algorithms. The paper concludes with a definition of coverage within a bias, which suggests a way that coverage maximization could be applied to strengthen weak preference biases.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 1989 | test |
1-hop neighbor's text information: Forecasting glucose concentration in diabetic patients using ignorant belief networks. :
1-hop neighbor's text information: Anytime Influence Diagrams:
1-hop neighbor's text information: Belief maintenance with probabilistic logic. :
Target text information: Belief maintenance in bayesian networks. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 2196 | test |
1-hop neighbor's text information: Brain-Structured Networks That Perceive and Learn. : This paper specifies the main features of Brain-like, Neuronal, and Connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of such structures. The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g., houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation-discovery (feedback-guided growth of new links and nodes, subject to brain-like constraints, e.g., local receptive fields and global convergence-divergence). The information processing transforms discovered through generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g., letters of the alphabet, cups, apples, bananas) through feedback-guided generation and reweighting. These show large improvements over networks that either lack brain-like structure and/or learn by reweighting of links alone.
1-hop neighbor's text information: Coordination and Control Structures and Processes: Possibilities for Connectionist Networks. : The absence of powerful control structures and processes that synchronize, coordinate, switch between, choose among, regulate, direct, modulate interactions between, and combine distinct yet interdependent modules of large connectionist networks (CN) is probably one of the most important reasons why such networks have not yet succeeded at handling difficult tasks (e.g. complex object recognition and description, complex problem-solving, planning). In this paper we examine how CN built from large numbers of relatively simple neuron-like units can be given the ability to handle problems that in typical multi-computer networks and artificial intelligence programs along with all other types of programs are always handled using extremely elaborate and precisely worked out central control (coordination, synchronization, switching, etc.). We point out the several mechanisms for central control of this un-brain-like sort that CN already have built into them albeit in hidden, often overlooked, ways. We examine the kinds of control mechanisms found in computers, programs, fetal development, cellular function and the immune system, evolution, social organizations, and especially brains, that might be of use in CN. Particularly intriguing suggestions are found in the pacemakers, oscillators, and other local sources of the brain's complex partial synchronies; the diffuse, global effects of slow electrical waves and neurohormones; the developmental program that guides fetal development; communication and coordination within and among living cells; the working of the immune system; the evolutionary processes that operate on large populations of organisms; and the great variety of partially competing partially cooperating controls found in small groups, organizations, and larger societies. All these systems are rich in control but typically control that emerges from complex interactions of many local and diffuse sources. We explore how several different kinds of plausible control mechanisms might be incorporated into CN, and assess their potential benefits with respect to their cost.
1-hop neighbor's text information: Faster Learning in Multi-Layer Networks by Handling Output Layer Flat-Spots. : Generalized delta rule, popularly known as back-propagation (BP) [9, 5] is probably one of the most widely used procedures for training multi-layer feed-forward networks of sigmoid units. Despite reports of success on a number of interesting problems, BP can be excruciatingly slow in converging on a set of weights that meet the desired error criterion. Several modifications for improving the learning speed have been proposed in the literature [2, 4, 8, 1, 6]. BP is known to suffer from the phenomenon of flat spots [2]. The slowness of BP is a direct consequence of these flat-spots together with the formulation of the BP Learning rule. This paper proposes a new approach to minimizing the error that is suggested by the mathematical properties of the conventional error function and that effectively handles flat-spots occurring in the output layer. The robustness of the proposed technique is demonstrated on a number of data-sets widely studied in the machine learning community.
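The flat-spot phenomenon discussed above is easy to see numerically: the sigmoid derivative y(1-y) vanishes when the unit saturates, killing the error signal. The sketch below illustrates the effect and the widely cited constant-offset remedy (Fahlman's sigmoid-prime offset); the paper's own reformulation of the error function is not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(y, offset=0.0):
    """Derivative of the sigmoid w.r.t. its input, given output y.
    For y near 0 or 1 the true derivative y*(1-y) vanishes (a flat spot);
    a small constant offset keeps the error signal alive there."""
    return y * (1.0 - y) + offset

y = sigmoid(np.array([-8.0, 0.0, 8.0]))
print(sigmoid_prime(y))        # ~[0.0003, 0.25, 0.0003]: flat spots at the extremes
print(sigmoid_prime(y, 0.1))   # the offset variant keeps gradients non-zero
```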
Target text information: Experiments with the cascade-correlation algorithm. Microcomputer Applications, : Technical Report #91-16, July 1991; revised August 1991.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2567 | val |
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: "Evolution in Time and Space: The Parallel Genetic Algorithm." In Foundations of Genetic Algorithms, : The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed. Individuals live in a 2-D world. Selection of a mate is done by each individual independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime by e.g. local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We will investigate the PGA with deceptive problems and the traveling salesman problem. We outline why and when the PGA is succesful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see, that a PGA tries to jump from two local minima to a third, still better local minima, by using the crossover operator. This jump is (probabilistically) successful, if the fitness landscape has a certain correlation. We show the correlation for the traveling salesman problem by a configuration space analysis. The PGA explores implicitly the above correlation.
1-hop neighbor's text information: An Analysis of Genetic Programming, : In this paper we carefully formulate a Schema Theorem for Genetic Programming (GP) using a schema definition that accounts for the variable length and the non-homologous nature of GP's representation. In a manner similar to early GA research, we use interpretations of our GP Schema Theorem to obtain a GP Building Block definition and to state a "classical" Building Block Hypothesis (BBH): that GP searches by hierarchically combining building blocks. We report that this approach is not convincing for several reasons: it is difficult to find support for the promotion and combination of building blocks solely by rigorous interpretation of a GP Schema Theorem; even if there were such support for a BBH, it is empirically questionable whether building blocks always exist, because partial solutions of consistently above-average fitness and resilience to disruption are not assured; also, a BBH constitutes a narrow and imprecise account of GP search behavior.
Target text information: "The Schema Theorem and Price\'s Theorem," : Holland's Schema Theorem is widely taken to be the foundation for explanations of the power of genetic algorithms (GAs). Yet some dissent has been expressed as to its implications. Here, dissenting arguments are reviewed and elaborated upon, explaining why the Schema Theorem has no implications for how well a GA is performing. Interpretations of the Schema Theorem have implicitly assumed that a correlation exists between parent and offspring fitnesses, and this assumption is made explicit in results based on Price's Covariance and Selection Theorem. Schemata do not play a part in the performance theorems derived for representations and operators in general. However, schemata re-emerge when recombination operators are used. Using Geiringer's recombination distribution representation of recombination operators, a "missing" schema theorem is derived which makes explicit the intuition for when a GA should perform well. Finally, the method of "adaptive landscape" analysis is examined and counterexamples offered to the commonly used correlation statistic. Instead, an alternative statistic | the transmission function in the fitness domain | is proposed as the optimal statistic for estimating GA performance from limited samples.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 2340 | test |
1-hop neighbor's text information: (1996) "Bayesian analysis of mixtures of mixtures," : Discrete mixtures of normal distributions are widely used in modeling amplitude fluctuations of electrical potentials at synapses of human, and other animal, nervous systems. The usual framework has independent data values $y_j$ arising as $y_j = \mu_j + x_{n_0+j}$, where the means $\mu_j$ come from some discrete prior $G(\cdot)$ and the unknown $x_{n_0+j}$'s and observed $x_j$, $j = 1, \ldots, n_0$, are Gaussian noise terms. A practically important development of the associated statistical methods is the issue of non-normality of the noise terms, often the norm rather than the exception in the neurological context. We have recently developed models, based on convolutions of Dirichlet process mixtures, for such problems. Explicitly, we model the noise data values $x_j$ as arising from a Dirichlet process mixture of normals, in addition to modeling the location prior $G(\cdot)$ as a Dirichlet process itself. This induces a Dirichlet mixture of mixtures of normals, whose analysis may be developed using Gibbs sampling techniques. We discuss these models and their analysis, and illustrate in the context of neurological response analysis.
1-hop neighbor's text information: (1992) Computing Bayesian nonparametric hierarchical models. ISDS Discussion Paper #92-A20, : Bayesian models involving Dirichlet process mixtures are at the heart of the modern nonparametric Bayesian movement. Much of the rapid development of these models in the last decade has been a direct result of advances in simulation-based computational methods. Some of the very early work in this area, circa 1988-1991, focused on the use of such nonparametric ideas and models in applications of otherwise standard hierarchical models. This chapter provides some historical review and perspective on these developments, with a prime focus on the use and integration of such nonparametric ideas in hierarchical models. We illustrate the ease with which the strict parametric assumptions common to most standard Bayesian hierarchical models can be relaxed to incorporate uncertainties about functional forms using Dirichlet process components, partly enabled by the approach to computation using MCMC methods. The resulting methodology is illustrated with two examples taken from an unpublished 1992 report on the topic.
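As a self-contained illustration (not the paper's sampler): the partition structure a Dirichlet process prior induces on $n$ observations can be simulated with the Chinese restaurant process, where `alpha` is the DP concentration parameter.

```python
import random

def crp_assignments(n, alpha):
    """Draw cluster assignments from a Chinese restaurant process,
    the partition structure implied by a Dirichlet process prior."""
    counts = []          # customers per table
    labels = []
    for i in range(n):
        probs = counts + [alpha]        # existing tables vs. a new table
        total = i + alpha
        r = random.uniform(0, total)
        acc, table = 0.0, 0
        for table, p in enumerate(probs):
            acc += p
            if r <= acc:
                break
        if table == len(counts):
            counts.append(1)            # open a new table
        else:
            counts[table] += 1
        labels.append(table)
    return labels

print(crp_assignments(10, alpha=1.0))
```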
1-hop neighbor's text information: Mixture Models in the Exploration of Structure-Activity Relationships in Drug Design: We report on a study of mixture modeling problems arising in the assessment of chemical structure-activity relationships in drug design and discovery. Pharmaceutical research laboratories developing test compounds for screening synthesize many related candidate compounds by linking together collections of basic molecular building blocks, known as monomers. These compounds are tested for biological activity, feeding into screening for further analysis and drug design. The tests also provide data relating compound activity to chemical properties and aspects of the structure of associated monomers, and our focus here is studying such relationships as an aid to future monomer selection. The level of chemical activity of compounds is based on the geometry of chemical binding of test compounds to target binding sites on receptor compounds, but the screening tests are unable to identify binding configurations. Hence potentially critical covariate information is missing as a natural latent variable. Resulting statistical models are then mixed with respect to such missing information, so complicating data analysis and inference. This paper reports on a study of a two-monomer, two-binding site framework and associated data. We build structured mixture models that mix linear regression models, predicting chemical effectiveness, with respect to site-binding selection mechanisms. We discuss aspects of modeling and analysis, including problems and pitfalls, and describe results of analyses of a simulated and real data set. In modeling real data, we are led into critical model extensions that introduce hierarchical random effects components to adequately capture heterogeneities in both the site binding mechanisms and in the resulting levels of effectiveness of compounds once bound. Comments on current and potential future directions conclude the report.
Target text information: (1997) "Hierarchical mixture models in neurological transmission analysis," : Hierarchically structured mixture models are studied in the context of data analysis and inference on neural synaptic transmission characteristics in mammalian, and other, central nervous systems. Mixture structures arise due to uncertainties about the stochastic mechanisms governing the responses to electro-chemical stimulation of individual neuro-transmitter release sites at nerve junctions. Models attempt to capture scientific features such as the sensitivity of individual synaptic transmission sites to electro-chemical stimuli, and the extent of their electro-chemical responses when stimulated. This is done via suitably structured classes of prior distributions for parameters describing these features. Such priors may be structured to permit assessment of currently topical scientific hypotheses about fundamental neural function. Posterior analysis is implemented via stochastic simulation. Several data analyses are described to illustrate the approach, with resulting neurophysiological insights in some recently generated experimental contexts. Further developments and open questions, both neurophysiological and statistical, are noted. Research partially supported by the NSF under grants DMS-9024793, DMS-9305699 and DMS-9304250. This work represents part of a collaborative project with Dr Dennis A Turner, of Duke University Medical Center and Durham VA. Data was provided by Dr Turner and by Dr Howard V Wheal of Southampton University. A slightly revised version of this paper is published in the Journal of the American Statistical Association (vol 92, pp587-606), under the modified title Hierarchical Mixture Models in Neurological Transmission Analysis. The author is the recipient of the 1997 Mitchell Prize for "the Bayesian analysis of a substantive and concrete problem" based on the work reported in this paper.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2011 | test |
1-hop neighbor's text information: Evolving Optimal Populations with XCS Classifier Systems, :
Target text information: XCS Classifier System Reliably Evolves Accurate, Complete, and Minimal Representations for Boolean Functions: Wilson's recent XCS classifier system forms complete mappings of the payoff environment in the reinforcement learning tradition thanks to its accuracy-based fitness. According to Wilson's Generalization Hypothesis, XCS has a tendency towards generalization. With the XCS Optimality Hypothesis, I suggest that XCS systems can evolve optimal populations (representations): populations which accurately map all input/action pairs to payoff predictions using the smallest possible set of non-overlapping classifiers. The ability of XCS to evolve optimal populations for Boolean multiplexer problems is demonstrated using condensation, a technique in which evolutionary search is suspended by setting the crossover and mutation rates to zero. Condensation is automatically triggered by self-monitoring of performance statistics, and the entire learning process is terminated by autotermination. Combined, these techniques allow a classifier system to evolve optimal representations of Boolean functions without any form of supervision.
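A schematic of the condensation trigger described above, assuming a moving-window plateau test; the concrete statistics and thresholds are my placeholders, not the paper's.

```python
def maybe_condense(ga_params, perf_history, window=50, tol=1e-3):
    """Condensation as sketched above: once performance statistics have
    plateaued, suspend evolutionary search by zeroing crossover and
    mutation, so selection alone prunes the population down to a minimal
    set of accurate classifiers."""
    recent = perf_history[-window:]
    if len(recent) == window and max(recent) - min(recent) < tol:
        ga_params["crossover_rate"] = 0.0
        ga_params["mutation_rate"] = 0.0
    return ga_params
```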
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 695 | test |
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Target text information: A lower-bound result on the power of a genetic algorithm: This paper presents a lower-bound result on the computational power of a genetic algorithm in the context of combinatorial optimization. We describe a new genetic algorithm, the merged genetic algorithm, and prove that for the class of monotonic functions, the algorithm finds the optimal solution, and does so with an exponential convergence rate. The analysis pertains to the ideal behavior of the algorithm, where the main task reduces to showing convergence of probability distributions over the search space of combinatorial structures to the optimal one. We take exponential convergence to be indicative of efficient solvability for the sample-bounded algorithm, although a sampling theory is needed to better relate the limit behavior to actual behavior. The paper concludes with a discussion of some immediate problems that lie ahead.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 785 | val |
1-hop neighbor's text information: On biases in estimating multi-valued attributes. : We analyse the biases of eleven measures for estimating the quality of multi-valued attributes. The values of information gain, J-measure, gini-index, and relevance tend to increase linearly with the number of values of an attribute. The values of gain-ratio, distance measure, Relief, and the weight of evidence decrease for informative attributes and increase for irrelevant attributes. The bias of the statistical tests based on the chi-square distribution is similar, but these functions are not able to discriminate among attributes of different quality. We also introduce a new function based on the MDL principle whose value slightly decreases with the increasing number of an attribute's values.
1-hop neighbor's text information: Irrelevant features and the subset selection problem. : We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.
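The abstract above describes cross-validated subset evaluation usable with any induction algorithm; the greedy forward search below is my simplification of such a wrapper, and `induce` is a stand-in for any learner that returns a predictor.

```python
from statistics import mean

def cv_accuracy(induce, data, subset, k=5):
    """k-fold cross-validated accuracy of the learner `induce`
    when it may only look at the features in `subset`."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        train = [e for j, fold in enumerate(folds) if j != i for e in fold]
        model = induce(train, subset)                 # returns a predictor x -> label
        scores.append(mean(float(model(x) == y) for x, y in folds[i]))
    return mean(scores)

def forward_select(induce, data, n_features, k=5):
    """Greedy wrapper search: repeatedly add the single feature that most
    improves cross-validated accuracy, stopping when nothing improves."""
    subset, best = [], 0.0
    while True:
        candidates = [f for f in range(n_features) if f not in subset]
        scored = [(cv_accuracy(induce, data, subset + [f], k), f)
                  for f in candidates]
        if not scored:
            return subset, best
        score, f = max(scored)
        if score <= best:
            return subset, best
        best, subset = score, subset + [f]
```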
Target text information: Discovering Compressive Partial Determinations in Mixed Numerical and Symbolic Domains: Partial determinations are an interesting form of dependency between attributes in a relation. They generalize functional dependencies by allowing exceptions. We modify a known MDL formula for evaluating such partial determinations to allow for its use in an admissible heuristic in exhaustive search. Furthermore we describe an efficient preprocessing-based approach for handling numerical attributes. An empirical investigation tries to evaluate the viability of the presented ideas.
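To make "functional dependencies with exceptions" concrete, here is a sketch that counts the exceptions a partial determination lhs -> rhs leaves on a table; the MDL score that trades such exceptions against rule complexity is the paper's contribution and is not reproduced here.

```python
from collections import Counter, defaultdict

def exceptions(rows, lhs, rhs):
    """Number of rows violating the partial determination lhs -> rhs:
    within each lhs value group, every row outside the majority rhs
    value counts as an exception."""
    groups = defaultdict(Counter)
    for row in rows:
        groups[tuple(row[a] for a in lhs)][row[rhs]] += 1
    return sum(sum(c.values()) - max(c.values()) for c in groups.values())

rows = [{"a": 0, "y": 0}, {"a": 0, "y": 0}, {"a": 0, "y": 1}, {"a": 1, "y": 1}]
print(exceptions(rows, ("a",), "y"))  # 1 exception: the third row
```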
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 0 | Rule Learning | cora | 321 | test |
1-hop neighbor's text information: Free energy coding. : In this paper, we introduce a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed free energy approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The expectation-maximization parameter estimation algorithms minimize this effective codeword length. We illustrate the performance of free energy coding on a simple problem where a compression factor of two is gained by using the new method.
Target text information: Bits-back coding software guide: Abstract: In this document, I first review the theory behind bits-back coding (a.k.a. free energy coding) (Frey and Hinton 1996) and then describe the interface to C-language software that can be used for bits-back coding. This method is a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed bits-back approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The software which I describe in this guide is easy to use, and the source code is only a few pages long. I illustrate the bits-back coding software on a simple quantized Gaussian mixture problem.
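The core bits-back accounting, stated in my notation but consistent with the free-energy formulation summarized above: if a symbol has codewords $c$ of lengths $L(c)$ and the encoder selects among them with distribution $q$, the choice itself conveys $H(q)$ bits that the decoder can recover, so the effective length is

$$L_{\mathrm{eff}}(q) \;=\; \sum_c q(c)\,L(c) \;-\; H(q), \qquad H(q) = -\sum_c q(c)\log_2 q(c).$$

This is minimized by the Boltzmann choice $q^*(c) \propto 2^{-L(c)}$, giving $L_{\mathrm{eff}}(q^*) = -\log_2 \sum_c 2^{-L(c)}$, which is at most the length of the shortest codeword.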
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 188 | val |