| content (string, 633 to 9.91k chars) | label (7 classes) | category (7 classes) | dataset (1 value) | node_id (int64, 0 to 2.71k) | split (3 values) |
|---|---|---|---|---|---|
1-hop neighbor's text information: and T.M. Barnhill, Application of statistical mechanics methodology to term-structure bond-pricing models, :
1-hop neighbor's text information: Genetic Algorithms and Very Fast Reannealing: A Comparison, : We compare Genetic Algorithms (GA) with a functional search method, Very Fast Simulated Reannealing (VFSR), that not only is efficient in its search strategy, but also is statistically guaranteed to find the function optima. GA previously has been demonstrated to be competitive with other standard Boltzmann-type simulated annealing techniques. Presenting a suite of six standard test functions to GA and VFSR codes from previous studies, without any additional fine tuning, strongly suggests that VFSR can be expected to be orders of magnitude more efficient than GA.
1-hop neighbor's text information: Volatility of Volatility of Financial Markets: We present empirical evidence for considering volatility of Eurodollar futures as a stochastic process, requiring a generalization of the standard Black-Scholes (BS) model which treats volatility as a constant. We use a previous development of a statistical mechanics of financial markets (SMFM) to model these issues.
Target text information: Adaptive Simulated Annealing (ASA), :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 2,684 | test |
1-hop neighbor's text information: Boosting a Weak Learning Algorithm by Majority. : We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper "The strength of weak learnability", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances.
1-hop neighbor's text information: "A General Lower Bound on the Number of Examples Needed for Learning," : We prove a lower bound of ( 1 * ln 1 ffi + VCdim(C) * ) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and * and ffi are the accuracy and confidence parameters. This improves the previous best lower bound of ( 1 * ln 1 ffi + VCdim(C)), and comes close to the known general upper bound of O( 1 ffi + VCdim(C) * ln 1 * ) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor.
1-hop neighbor's text information: Query by Committee, : We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.
Target text information: Sifting informative examples from a random source.: We discuss two types of algorithms for selecting relevant examples that have been developed in the context of computational learning theory. The examples are selected out of a stream of examples that are generated independently at random. The first two algorithms are the so-called "boosting" algorithms of Schapire [Schapire, 1990] and Freund [Freund, 1990], and the Query-by-Committee algorithm of Seung [Seung et al., 1992]. We describe the algorithms and some of their proven properties, point to some of their commonalities, and suggest some possible future implications.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 190 | val |
1-hop neighbor's text information: Extraction of rules from discrete-time recurrent neural networks. Neural Networks, : Technical Report CS-TR-3465 and UMIACS-TR-95-54 University of Maryland, College Park, MD 20742 Abstract The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFA's) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract different finite-state automata that are consistent with a training set from the same network. We compare the generalization performances of these different models and the trained network and we introduce a heuristic that permits us to choose among the consistent DFA's the model which best approximates the learned regular grammar.
1-hop neighbor's text information: A unified gradient-descent/clustering algorithm architecture for finite state machine induction. : Although recurrent neural nets have been moderately successful in learning to emulate finite-state machines (FSMs), the continuous internal state dynamics of a neural net are not well matched to the discrete behavior of an FSM. We describe an architecture, called DOLCE, that allows discrete states to evolve in a net as learning progresses. dolce consists of a standard recurrent neural net trained by gradient descent and an adaptive clustering technique that quantizes the state space. dolce is based on the assumption that a finite set of discrete internal states is required for the task, and that the actual network state belongs to this set but has been corrupted by noise due to inaccuracy in the weights. dolce learns to recover the discrete state with maximum a posteriori probability from the noisy state. Simulations show that dolce leads to a significant improvement in generalization performance over earlier neural net approaches to FSM induction.
1-hop neighbor's text information: "Learning context-free grammars: Limitations of a recurrent neural network with an external stack memory," : This work describes an approach for inferring Deterministic Context-free (DCF) Grammars in a Connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying pushdown automaton from examples of grammatical and non-grammatical strings. Not only does the network learn the state transitions in the automaton, it also learns the actions required to control the stack. In order to use continuous optimization methods, we develop an analog stack which reverts to a discrete stack by quantization of all activations, after the network has learned the transition rules and stack actions. We further show an enhancement of the network's learning capabilities by providing hints. In addition, an initial comparative study of simulations with first, second and third order recurrent networks has shown that the increased degree of freedom in a higher order networks improve generalization but not necessarily learning speed.
Target text information: "Rule checking with recurrent neural networks," : Recurrent neural networks readily process, recognize and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision. Rule revision is performed by comparing the inserted rules with the rules in the finite-state automata extracted from trained networks. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that not only do the networks preserve correct rules but that they are able to correct through training inserted rules which were initially incorrect. (By incorrect, we mean that the rules were not the ones in the randomly generated grammar.)
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,139 | test |
1-hop neighbor's text information: Markov games as a framework for multi-agent reinforcement learning. : In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
1-hop neighbor's text information: Jordan (1996b). Recursive algorithms for approximating probabilities in graphical models. : MIT Computational Cognitive Science Technical Report 9604 Abstract We develop a recursive node-elimination formalism for efficiently approximating large probabilistic networks. No constraints are set on the network topologies. Yet the formalism can be straightforwardly integrated with exact methods whenever they are/become applicable. The approximations we use are controlled: they maintain consistently upper and lower bounds on the desired quantities at all times. We show that Boltzmann machines, sigmoid belief networks, or any combination (i.e., chain graphs) can be handled within the same framework. The accuracy of the methods is verified experimentally.
1-hop neighbor's text information: Computing upper and lower bounds on likelihoods in intractable networks. : We present deterministic techniques for computing upper and lower bounds on marginal probabilities in sigmoid and noisy-OR networks. These techniques become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the bounds by numerical experiments.
Target text information: Monte-carlo reinforcement learning in non-Markovian decision problems. : MIT Computational Cognitive Science Technical Report 9701 Abstract We describe variational approximation methods for efficient probabilistic reasoning, applying these methods to the problem of diagnostic inference in the QMR-DT database. The QMR-DT database is a large-scale belief network based on statistical and expert knowledge in internal medicine. The size and complexity of this network render exact probabilistic diagnosis infeasible for all but a small set of cases. This has hindered the development of the QMR-DT network as a practical diagnostic tool and has hindered researchers from exploring and critiquing the diagnostic behavior of QMR. In this paper we describe how variational approximation methods can be applied to the QMR network, resulting in fast diagnostic inference. We evaluate the accuracy of our methods on a set of standard diagnostic cases and compare to stochastic sampling methods.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 992 | test |
1-hop neighbor's text information: Lookahead and Pathology in Decision Tree Induction, : The standard approach to decision tree induction is a top-down, greedy algorithm that makes locally optimal, irrevocable decisions at each node of a tree. In this paper, we study an alternative approach, in which the algorithms use limited lookahead to decide what test to use at a node. We systematically compare, using a very large number of decision trees, the quality of decision trees induced by the greedy approach to that of trees induced using lookahead. The main results of our experiments are: (i) the greedy approach produces trees that are just as accurate as trees produced with the much more expensive lookahead step; and (ii) decision tree induction exhibits pathology, in the sense that lookahead can produce trees that are both larger and less accurate than trees produced without it.
1-hop neighbor's text information: What should be minimized in a decision tree?. : Computer Science Department University of Massachusetts at Amherst CMPSCI Technical Report 95-20 September 6, 1995
1-hop neighbor's text information: Learning decision lists using homogeneous rules. : rules (Rivest 1987). Inductive algorithms such as AQ and CN2 learn decision lists incrementally, one rule at a time. Such algorithms face the rule overlap problem | the classification accuracy of the decision list depends on the overlap between the learned rules. Thus, even though the rules are learned in isolation, they can only be evaluated in concert. Existing algorithms solve this problem by adopting a greedy, iterative structure. Once a rule is learned, the training examples that match the rule are removed from the training set. We propose a novel solution to the problem: composing decision lists from homogeneous rules, rules whose classification accuracy does not change with their position in the decision list. We prove that the problem of finding a maximally accurate decision list can be reduced to the problem of finding maximally accurate homogeneous rules. We report on the performance of our algorithm on data sets from the UCI repository and on the MONK's problems.
Target text information: Exploring the decision forest: An empirical investigation of Occam's razor in decision tree induction. : We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees, and the factors that affect the error rate of individual trees. The experiments were performed on a massively parallel Maspar 1 computer. The results of the experimentation on two artificial and two real world problems indicate that for three of the four problems investigated, the smallest consistent decision trees tend to be less accurate than the average accuracy of those slightly larger.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 699 | test |
1-hop neighbor's text information: Rates of convergence of the Hastings and Metropolis algorithms. : We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution π. In the independence case (in R^k) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of π; in the symmetric case (in R only) we show geometric convergence essentially occurs if and only if π has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective.
Target text information: EXACT BOUND FOR THE CONVERGENCE OF METROPOLIS CHAINS: In this note, we present a calculation which gives us the exact bound for the convergence of independent Metropolis chains in a finite state space. Metropolis chain, convergence rate, Markov chain Monte Carlo
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 606 | test |
1-hop neighbor's text information: Planning and acting in partially observable stochastic domains. : In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (mdps) and partially observable mdps (pomdps). We then outline a novel algorithm for solving pomdps off line and show how, in some cases, a finite-memory controller can be extracted from the solution to a pomdp. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to pomdps, and of some possibilities for finding approximate solutions. Consider the problem of a robot navigating in a large office building. The robot can move from hallway intersection to intersection and can make local observations of its world. Its actions are not completely reliable, however. Sometimes, when it intends to move, it stays where it is or goes too far; sometimes, when it intends to turn, it overshoots. It has similar problems with observation. Sometimes a corridor looks like a corner; sometimes a T-junction looks like an L-junction. How can such an error-plagued robot navigate, even given a map of the corridors? In general, the robot will have to remember something about its history of actions and observations and use this information, together with its knowledge of the underlying dynamics of the world (the map and other information), to maintain an estimate of its location. Many engineering applications follow this approach, using methods like the Kalman filter [18] to maintain a running estimate of the robot's spatial uncertainty, expressed as an ellipsoid or normal distribution in Cartesian space. This approach will not do for our robot, though. Its uncertainty may be discrete: it might be almost certain that it is in the north-east corner of either the fourth or the seventh floors, though it admits a chance that it is on the fifth floor, as well. Then, given an uncertain estimate of its location, the robot has to decide what actions to take. In some cases, it might be sufficient to ignore its uncertainty and take actions that would be appropriate for the most likely location. In other cases, it might be better for
1-hop neighbor's text information: Pack Kaelbling. On the complexity of solving Markov decision problems. : Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the struc ture of MDPs.
1-hop neighbor's text information: Reinforcement learning for planning and control. :
Target text information: Optimal Navigation in a Probabilistic World: In this paper, we define and examine two versions of the bridge problem. The first variant of the bridge problem is a deterministic model where the agent knows a superset of the transitions and a priori probabilities that those transitions are intact. In the second variant, transitions can break or be fixed with some probability at each time step. These problems are applicable to planning in uncertain domains as well as packet routing in a computer network. We show how an agent can act optimally in these models by reduction to Markov decision processes. We describe methods of solving them but note that these methods are intractable for reasonably sized problems. Finally, we suggest neuro-dynamic programming as a method of value function approximation for these types of models.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 180 | val |
1-hop neighbor's text information: The wake-sleep algorithm for unsupervised neural networks. : An unsupervised learning algorithm for a multilayer network of stochastic neurons is described. Bottom-up recognition connections convert the input into representations in successive hidden layers and top-down generative connections reconstruct the representation in one layer from the representation in the layer above. In the wake phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the sleep phase, neurons are driven by generative connections and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above. Supervised learning algorithms for multilayer neural networks face two problems: They require a teacher to specify the desired output of the network and they require some method of communicating error information to all of the connections. The wake-sleep algorithm avoids both these problems. When there is no external teaching signal to be matched, some other goal is required to force the hidden units to extract underlying structure. In the wake-sleep algorithm the goal is to learn representations that are economical to describe but allow the input to be reconstructed accurately. We can quantify this goal by imagining a communication game in which each vector of raw sensory inputs is communicated to a receiver by first sending its hidden representation and then sending the difference between the input vector and its top-down reconstruction from the hidden representation. The aim of learning is to minimize the description length which is the total number of bits that would be required to communicate the input vectors in this way [1]. No communication actually takes place, but minimizing the description length that would be required forces the network to learn economical representations that capture the underlying regularities in the data [2].
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedfor-ward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response.
1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator.
Target text information: Cortical Mechanisms of Visual Recognition and Learning: A Hierarchical Kalman Filter Model: We describe a biologically plausible model of dynamic recognition and learning in the visual cortex based on the statistical theory of Kalman filtering from optimal control theory. The model utilizes a hierarchical network whose successive levels implement Kalman filters operating over successively larger spatial and temporal scales. Each hierarchical level in the network predicts the current visual recognition state at a lower level and adapts its own recognition state using the residual error between the prediction and the actual lower-level state. Simultaneously, the network also learns an internal model of the spatiotemporal dynamics of the input stream by adapting the synaptic weights at each hierarchical level in order to minimize prediction errors. The Kalman filter model respects key neuroanatomical data such as the reciprocity of connections between visual cortical areas, and assigns specific computational roles to the inter-laminar connections known to exist between neurons in the visual cortex. Previous work elucidated the usefulness of this model in explaining neurophysiological phenomena such as endstopping and other related extra-classical receptive field effects. In this paper, in addition to providing a more detailed exposition of the model, we present a variety of experimental results demonstrating the ability of this model to perform robust spatiotemporal segmentation and recognition of objects and image sequences in the presence of varying amounts of occlusion, background clutter, and noise.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 301 | test |
1-hop neighbor's text information: A computer scientist's view of life, the universe, and everything. : Is the universe computable? If so, it may be much cheaper in terms of information requirements to compute all computable universes instead of just ours. I apply basic concepts of Kolmogorov complexity theory to the set of possible universes, and chat about perceived and true randomness, life, generalization, and learning in a given universe. Assumptions. A long time ago, the Great Programmer wrote a program that runs all possible universes on His Big Computer. "Possible" means "computable": (1) Each universe evolves on a discrete time scale. (2) Any universe's state at a given time is describable by a finite number of bits. One of the many universes is ours, despite some who evolved in it and claim it is incomputable. Computable universes. Let TM denote an arbitrary universal Turing machine with unidirectional output tape. TM's input and output symbols are "0", "1", and "," (comma). TM's possible input programs can be ordered alphabetically: "" (empty program), "0", "1", ",", "00", "01", "0,", "10", "11", "1,", ",0", ",1", ",,", "000", etc. Let A_k denote TM's k-th program in this list. Its output will be a finite or infinite string over the alphabet {"0","1",","}. This sequence of bitstrings separated by commas will be interpreted as the evolution E_k of universe U_k. If E_k includes at least one comma, then let U_k^l denote the l-th (possibly empty) bitstring before the l-th comma. U_k^l represents U_k's state at the l-th time step of E_k (k, l ∈ {1, 2, ...}). E_k is represented by the sequence U_k^1, U_k^2, ..., where U_k^1 corresponds to U_k's big bang. Different algorithms may compute the same universe. Some universes are finite (those whose programs cease producing outputs at some point), others are not. I don't know about ours. TM not important. The choice of the Turing machine is not important. This is due to the compiler theorem: for each universal Turing machine C there exists a constant prefix ∈ {"0","1",","}* such that for all possible programs p, C's output in response to this prefix followed by p is identical to TM's output in response to p. The prefix is the compiler that compiles programs for TM into equivalent programs for C.
Target text information: Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. : We study task sequences that allow for speeding up the learner's average reward intake through appropriate shifts of inductive bias (changes of the learner's policy). To evaluate long-term effects of bias shifts setting the stage for later bias shifts we use the "success-story algorithm" (SSA). SSA is occasionally called at times that may depend on the policy itself. It uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations (measured up until the current SSA call). Bias shifts that survive SSA represent a lifelong success history. Until the next SSA call, they are considered useful and build the basis for additional bias shifts. SSA allows for plugging in a wide variety of learning algorithms. We plug in (1) a novel, adaptive extension of Levin search and (2) a method for embedding the learner's policy modification strategy within the policy itself (incremental self-improvement). Our inductive transfer case studies involve complex, partially observable environments where traditional reinforcement learning fails.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 2,510 | test |
1-hop neighbor's text information: A Neural Architecture for a High-Speed Database Query System. : Artificial neural networks (ANN), due to their inherent parallelism and potential fault tolerance, offer an attractive paradigm for robust and efficient implementations of large modern database and knowledge base systems. This paper explores a neural network model for efficient implementation of a database query system. The application of the proposed model to a high-speed library query system for retrieval of multiple items is based on partial match of the specified query criteria with the stored records. The performance of the ANN realization of the database query module is analyzed and compared with other techniques commonly used in current computer systems. The results of this analysis suggest that the proposed ANN design offers an attractive approach for the realization of query modules in large database and knowledge base systems, especially for retrieval based on partial matches. * This research was partially supported by the National Science Foundation through the grant IRI-9409580 to Vasant Honavar. A preliminary version of this paper [Chen and Honavar, 1995c] appears in the Proceedings of the 1995 World Congress on Neural Networks.
1-hop neighbor's text information: A Neural Network Architecture for Syntax Analysis. :
1-hop neighbor's text information: Toward Learning Systems That Integrate Different Strategies and Representations. In: Artificial Intelligence and Neural Networks: Steps toward Principled Integration. Honavar, :
Target text information: A Neural Architecture for Content as well as Address-Based Storage and Recall: :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,060 | test |
1-hop neighbor's text information: The number of nucleotide sites needed to accurately reconstruct large evolutionary trees, : DIMACS Technical Report 96-19 July 1996
Target text information: Local quartet splits of a binary tree infer all quartet splits via one dyadic inference rule. : DIMACS Technical Report 96-43 DIMACS is a partnership of Rutgers University, Princeton University, AT&T Research, Bellcore, and Bell Laboratories. DIMACS is an NSF Science and Technology Center, funded under contract STC-91-19999; and also receives support from the New Jersey Commission on Science and Technology.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,553 | test |
1-hop neighbor's text information: An optimized theory revision module. : Theory revision systems typically use a set of theory-to-theory transformations {τ_k} to hill-climb from a given initial theory to a new theory whose empirical accuracy, over a given set of labeled training instances {c_j}, is a local maximum. At the heart of each such process is an "evaluator", which compares the accuracy of the current theory KB with that of each of its "neighbors" {τ_k(KB)}, with the goal of determining which neighbor has the highest accuracy. The obvious "wrapper" evaluator simply evaluates each individual neighbor theory KB_k = τ_k(KB) on each instance c_j. As it can be very expensive to evaluate a single theory on a single instance, and there can be a great many training instances and a huge number of neighbors, this approach can be prohibitively slow. We present an alternative system which employs a smarter evaluator that quickly computes the accuracy of a transformed theory τ_k(KB) by "looking inside" KB and reasoning about the effects of the τ_k transformation. We compare the performance of with the naive wrapper system on real-world theories obtained from a fielded expert system, and find that runs over 35 times faster than , while attaining the same accuracy. This paper also discusses 's source of power. Keywords: theory revision, efficient algorithm, hill-climbing system Multiple Submissions: We have submitted a related version of this paper to AAAI96. * We gratefully acknowledge the many helpful comments on this report from George Drastal, Chandra Mouleeswaran and Geoff Towell.
1-hop neighbor's text information: The Challenge of Revising an Impure Theory: A pure rule-based program will return a set of answers to each query; and will return the same answer set even if its rules are re-ordered. However, an impure program, which includes the Prolog cut "!" and not() operators, can return different answers if the rules are re-ordered. There are also many reasoning systems that return only the first answer found for each query; these first answers, too, depend on the rule order, even in pure rule-based systems. A theory revision algorithm, seeking a revised rule-base whose expected accuracy, over the distribution of queries, is optimal, should therefore consider modifying the order of the rules. This paper first shows that a polynomial number of training "labeled queries" (each a query coupled with its correct answer) provides the distribution information necessary to identify the optimal ordering. It then proves, however, that the task of determining which ordering is optimal, once given this information, is intractable even in trivial situations; e.g., even if each query is an atomic literal, we are seeking only a "perfect" theory, and the rule base is propositional. We also prove that this task is not even approximable: Unless P = NP, no polynomial time algorithm can produce an ordering of an n-rule theory whose accuracy is within n^γ of optimal, for some γ > 0. We also prove similar hardness, and non-approximatability, results for the related tasks of determining, in these impure contexts, (1) the optimal ordering of the antecedents; (2) the optimal set of rules to add or (3) to delete; and (4) the optimal priority values for a set of defaults.
Target text information: The complexity of theory revision. : A knowledge-based system uses its database (a.k.a. its "theory") to produce answers to the queries it receives. Unfortunately, these answers may be incorrect if the underlying theory is faulty. Standard "theory revision" systems use a given set of "labeled queries" (each a query paired with its correct answer) to transform the given theory, by adding and/or deleting either rules and/or antecedents, into a related theory that is as accurate as possible. After formally defining the theory revision task, this paper provides both sample and computational complexity bounds for this process. It first specifies the number of labeled queries necessary to identify a revised theory whose error is close to minimal with high probability. It then considers the computational complexity of finding this best theory, and proves that, unless P = NP, no polynomial time algorithm can identify this near-optimal revision, even given the exact distribution of queries, except in the most trivial of situations. It also shows that, except in such trivial situations, no polynomial-time algorithm can produce a theory whose error is even close to (i.e., within a particular polynomial factor of) optimal. These results suggest reasons why theory revision can be more effective than learning from scratch, and also justify many aspects of the standard theory revision systems, including the practice of hill-climbing to a locally-optimal theory, based on a given set of labeled queries. * This paper extends the short article that appeared in the Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI95), Montreal, August 1995. † I gratefully acknowledge receiving helpful comments from Edoardo Amaldi, Mukesh Dalal, George Drastal, Adam Grove, Tom Hancock, Sheila McIlraith, Roni Khardon, Dan Roth and especially the very thorough comments from the anonymous referees.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 2,598 | test |
1-hop neighbor's text information: A Transformation System for Interactive Reformulation of Design Optimization Strategies: Numerical design optimization algorithms are highly sensitive to the particular formulation of the optimization problems they are given. The formulation of the search space, the objective function and the constraints will generally have a large impact on the duration of the optimization process as well as the quality of the resulting design. Furthermore, the best formulation will vary from one application domain to another, and from one problem to another within a given application domain. Unfortunately, a design engineer may not know the best formulation in advance of attempting to set up and run a design optimization process. In order to attack this problem, we have developed a software environment that supports interactive formulation, testing and reformulation of design optimization strategies. Our system represents optimization strategies in terms of second-order dataflow graphs. Reformulations of strategies are implemented as transformations between dataflow graphs. The system permits the user to interactively generate and search a space of design optimization strategies, and experimentally evaluate their performance on test problems, in order to find a strategy that is suitable for his application domain. The system has been implemented in a domain independent fashion, and is being tested in the domain of racing yacht design.
1-hop neighbor's text information: Ellman. Learning prototype-selection rules for case-based iterative design. : The first step for most case-based design systems is to select an initial prototype from a database of previous designs. The retrieved prototype is then modified to tailor it to the given goals. For any particular design goal the selection of a starting point for the design process can have a dramatic effect both on the quality of the eventual design and on the overall design time. We present a technique for automatically constructing effective prototype-selection rules. Our technique applies a standard inductive-learning algorithm, C4.5, to a set of training data describing which particular prototype would have been the best choice for each goal encountered in a previous design session. We have tested our technique in the domain of racing-yacht-hull design, comparing our inductively learned selection rules to several competing prototype-selection methods. Our results show that the inductive prototype-selection method leads to better final designs when the design process is guided by a noisy evaluation function, and that the inductively learned rules will often be more efficient than competing methods. Many automated design systems begin by retrieving an initial prototype from a library of previous designs, using the given design goal as an index to guide the retrieval process [14]. The retrieved prototype is then modified by a set of design modification operators to tailor the selected design to the given goals. In many cases the quality of competing designs can be assessed using domain-specific evaluation functions, and in such cases the design-modification process is often This research has benefited from numerous discussions with members of the Rutgers CAP project. We thank Andrew Gelsey for helping with the cross-validation code, John Keane for helping with RUVPP, and Andrew Gelsey and Tim Weinrich for comments on a previous draft of this paper. This research was supported under ARPA-funded NASA grant NAG 2-645. In the context of such case-based design systems, the choice of an initial prototype can affect both the quality of the final design and the computational cost of obtaining that design, for three reasons. First, prototype selection may impact quality when the prototypes lie in disjoint search spaces. In particular, if the system's design modification operators cannot convert any prototype into any other prototype, the choice of initial prototype will restrict the set of possible designs that can be obtained by any search process. A poor choice of initial prototype may therefore lead to a suboptimal final design. Second, prototype selection may impact quality when the design process is guided by a nonlinear evaluation function with unknown global properties. Since there is no known method that is guaranteed to find the global optimum of an arbitrary nonlinear function [7], most design systems rely on iterative local search methods whose results are sensitive to the initial starting point. Finally, the choice of prototype may have an impact on the time needed to carry out the design modification process|two different starting points may yield the same final design but take very different amounts of time to get there. In design problems where evaluating even just a single design can take tremendous amounts of time, selecting an appropriate initial prototype can be the determining factor in the success or failure of the design process. 
This paper describes the application of inductive learning [11] to form rules for selecting appropriate prototype designs. The paper is structured as follows. In Section 2, we describe our inductive method for learning prototype-selection rules. In Section 3 we describe the domain of racing-yacht-hull design, in which we tested our prototype-selection methods. In Sections 4 and 5, we describe the experiments
1-hop neighbor's text information: Committees of decision trees. : Many intelligent systems are designed to sift through a mass of evidence and arrive at a decision. Certain pieces of evidence may be given more weight than others, and this may affect the final decision significantly. When more than one intelligent agent is available to make a decision, we can form a committee of experts. By combining the different opinions of these experts, the committee approach can sometimes outperform any individual expert. In this paper, we show how to exploit randomized learning algorithms in order to develop committees of experts. By using the majority vote of these experts to make decisions, we are able to improve the performance of the original learning algorithm. More precisely, we have developed a randomized decision tree induction algorithm, which generates different decision trees every time it is run. Each tree represents a different expert decision-maker. We combine these trees using a majority voting scheme in order to overcome small errors that appear in individual trees. We have tested our idea with several real data sets, and found that accuracy consistently improved when compared to the decision made by a single expert. We have developed some analytical results that explain why this effect occurs. Our experiments also show that the majority voting technique outperforms at least some alternative strategies for exploiting randomization.
Target text information: Learning when reformulation is appropriate for iterative design. : It is well known that search-space reformulation can improve the speed and reliability of numerical optimization in engineering design. We argue that the best choice of reformulation depends on the design goal, and present a technique for automatically constructing rules that map the design goal into a reformulation chosen from a space of possible reformulations. We tested our technique in the domain of racing-yacht-hull design, where each reformulation corresponds to incorporating constraints into the search space. We applied a standard inductive-learning algorithm, C4.5, to a set of training data describing which constraints are active in the optimal design for each goal encountered in a previous design session. We then used these rules to choose an appropriate reformulation for each of a set of test cases. Our experimental results show that using these reformulations improves both the speed and the reliability of design optimization, outperforming competing methods and approaching the best performance possible.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 1,069 | test |
1-hop neighbor's text information: Truncating temporal differences: On the efficient implementation of TD(λ) for reinforcement learning. : Temporal difference (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor λ. Currently the most important application of these methods is to temporal credit assignment in reinforcement learning. Well known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of the efficient and general implementation of TD(λ) for arbitrary λ, for use with reinforcement learning algorithms optimizing the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suffer from both inefficiency and lack of generality. The TTD (Truncated Temporal Differences) procedure is proposed as an alternative, that indeed only approximates TD(λ), but requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but probably unexplored so far. Encouraging experimental results are presented, suggesting that using λ > 0 with the TTD procedure allows one to obtain a significant learning speedup at essentially the same cost as usual TD(0) learning.
1-hop neighbor's text information: Generalizing in TD(λ) learning. :
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction -- that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
Target text information: On-line adaptive critic for changing systems. : In this paper we propose a reactive critic that is able to respond to changing situations. We will explain why this is useful in reinforcement learning, where the critic is used to improve the control strategy. We take a problem for which we can derive the solution analytically. This enables us to investigate the relation between the parameters and the resulting approximations of the critic. We will also demonstrate how the reactive critic responds to changing situations.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 2,205 | train |
1-hop neighbor's text information: Learning to Act using Real-Time Dynamic Programming. : * The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526).
Target text information: Exploiting structure in policy construction. : Markov decision processes (MDPs) have recently been applied to the problem of modeling decision-theoretic planning. While traditional methods for solving MDPs are often practical for small state spaces, their effectiveness for large AI planning problems is questionable. We present an algorithm, called structured policy iteration (SPI), that constructs optimal policies without explicit enumeration of the state space. The algorithm retains the fundamental computational steps of the commonly used modified policy iteration algorithm, but exploits the variable and propositional independencies reflected in a temporal Bayesian network representation of MDPs. The principles behind SPI can be applied to any structured representation of stochastic actions, policies and value functions, and the algorithm itself can be used in conjunction with recent approximation methods.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 1,243
|
test
|
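The SPI abstract in the record above builds on modified policy iteration. For orientation, here is a minimal flat, state-enumerating modified policy iteration sketch, i.e. the baseline that SPI's structured Bayesian-network representation is designed to avoid; the tabular inputs `P` and `R` and all parameter values are illustrative assumptions.

```python
import numpy as np

def modified_policy_iteration(P, R, gamma=0.9, eval_sweeps=20, max_iters=1000):
    """Flat modified policy iteration over an enumerated state space.

    P: dict mapping each action to an (S x S) transition matrix.
    R: (S x A) reward array whose columns follow the key order of P.
    These explicit tabular inputs are illustrative; SPI in the paper replaces
    them with a structured (temporal Bayesian network) representation.
    """
    actions = list(P.keys())
    S = R.shape[0]
    policy = np.zeros(S, dtype=int)          # index into `actions`
    V = np.zeros(S)
    for _ in range(max_iters):
        # partial policy evaluation: a few sweeps only (the "modified" part)
        for _ in range(eval_sweeps):
            V = np.array([R[s, policy[s]] +
                          gamma * P[actions[policy[s]]][s] @ V
                          for s in range(S)])
        # greedy policy improvement
        Q = np.stack([R[:, i] + gamma * P[a] @ V
                      for i, a in enumerate(actions)], axis=1)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, V
```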
1-hop neighbor's text information: On the sample complexity of finding good search strategies. : A satisficing search problem consists of a set of probabilistic experiments to be performed in some order, without repetitions, until a satisfying configuration of successes and failures has been reached. The cost of performing the experiments depends on the order chosen. Earlier work has concentrated on finding optimal search strategies in special cases of this model, such as search trees and and-or graphs, when the cost function and the success probabilities for the experiments are given. In contrast, we study the complexity of "learning" an approximately optimal search strategy when some of the success probabilities are not known at the outset. Working in the fully general model, we show that if n is the number of unknown probabilities, and C is the maximum cost of performing all the experiments, then
1-hop neighbor's text information: A Statistical Approach to Solving the EBL Utility Problem, : Many "learning from experience" systems use information extracted from problem solving experiences to modify a performance element PE, forming a new element PE 0 that can solve these and similar problems more efficiently. However, as transformations that improve performance on one set of problems can degrade performance on other sets, the new PE 0 is not always better than the original PE; this depends on the distribution of problems. We therefore seek the performance element whose expected performance, over this distribution, is optimal. Unfortunately, the actual distribution, which is needed to determine which element is optimal, is usually not known. Moreover, the task of finding the optimal element, even knowing the distribution, is intractable for most interesting spaces of elements. This paper presents a method, palo, that side-steps these problems by using a set of samples to estimate the unknown distribution, and by using a set of transformations to hill-climb to a local optimum. This process is based on a mathematically rigorous form of utility analysis: in particular, it uses statistical techniques to determine whether the result of a proposed transformation will be better than the original system. We also present an efficient way of implementing this learning system in the context of a general class of performance elements, and include empirical evidence that this approach can work effectively. fl Much of this work was performed at the University of Toronto, where it was supported by the Institute for Robotics and Intelligent Systems and by an operating grant from the National Science and Engineering Research Council of Canada. We also gratefully acknowledge receiving many helpful comments from William Cohen, Dave Mitchell, Dale Schuurmans and the anonymous referees.
1-hop neighbor's text information: Knowing what doesn't matter: Exploiting (intentionally) omitted superfluous data. : Most inductive inference algorithms (i.e., "learners") work most effectively when their training data contain completely specified labeled samples. In many diagnostic tasks, however, the data will include the values of only some of the attributes; we model this as a blocking process that hides the values of those attributes from the learner. While blockers that remove the values of critical attributes can handicap a learner, this paper instead focuses on blockers that remove only superfluous attribute values, i.e., values that are not needed to classify an instance, given the values of the other unblocked attributes. We first motivate and formalize this model of "superfluous-value blocking," and then demonstrate that these omissions can be useful, by showing that certain classes that seem hard to learn in the general PAC model (viz., decision trees) are trivial to learn in this setting, and can even be learned in a manner that is very robust to classification noise. We also discuss how this model can be extended to deal with (1) theory revision (i.e., modifying an existing decision tree); (2) "complex" attributes (which correspond to combinations of other atomic attributes); (3) blockers that occasionally include superfluous values or exclude required values; and (4) other hypothesis classes (e.g., DNF formulae). Declaration: This paper has not already been accepted by and is not currently under review for a journal or another conference, nor will it be submitted for such during IJCAI's review period. * This is an extended version of a paper that appeared in working notes of the 1994 AAAI Fall Symposium on "Relevance", New Orleans, November 1994. † Authors listed alphabetically. We gratefully acknowledge receiving helpful comments from Dale Schuurmans and George Drastal.
Target text information: Learning default concepts. : Classical concepts, based on necessary and sufficient defining conditions, cannot classify logically insufficient object descriptions. Many reasoning systems avoid this limitation by using "default concepts" to classify incompletely described objects. This paper addresses the task of learning such default concepts from observational data. We first model the underlying performance task (classifying incomplete examples) as a probabilistic process that passes random test examples through a "blocker" that can hide object attributes from the classifier. We then address the task of learning accurate default concepts from random training examples. After surveying the learning techniques that have been proposed for this task in the machine learning and knowledge representation literatures, and investigating their relative merits, we present a more data-efficient learning technique, developed from well-known statistical principles. Finally, we extend Valiant's pac-learning framework to this context and obtain a number of useful learnability results. Appears in the Proceedings of the Tenth Canadian Conference on Artificial Intelligence (CSCSI-94),
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
4
|
Theory
|
cora
| 16
|
val
|
1-hop neighbor's text information: Learning concepts by asking questions. In R.S. : Two important issues in machine learning are explored: the role that memory plays in acquiring new concepts; and the extent to which the learner can take an active part in acquiring these concepts. This chapter describes a program, called Marvin, which uses concepts it has learned previously to learn new concepts. The program forms hypotheses about the concept being learned and tests the hypotheses by asking the trainer questions. Learning begins when the trainer shows Marvin an example of the concept to be learned. The program determines which objects in the example belong to concepts stored in the memory. A description of the new concept is formed by using the information obtained from the memory to generalize the description of the training example. The generalized description is tested when the program constructs new examples and shows these to the trainer, asking if they belong to the target concept.
1-hop neighbor's text information: Eclectic Machine Learning:
1-hop neighbor's text information: Hierarchical explanation-based reinforcement learning. : Explanation-Based Reinforcement Learning (EBRL) was introduced by Dietterich and Flann as a way of combining the ability of Reinforcement Learning (RL) to learn optimal plans with the generalization ability of Explanation-Based Learning (EBL) (Di-etterich & Flann, 1995). We extend this work to domains where the agent must order and achieve a sequence of subgoals in an optimal fashion. Hierarchical EBRL can effectively learn optimal policies in some of these sequential task domains even when the subgoals weakly interact with each other. We also show that when a planner that can achieve the individual subgoals is available, our method converges even faster.
Target text information: "Acquiring Recursive Concepts with Explanation-Based Learning," : University of Wisconsin Computer Sciences Technical Report 876 (September 1989) Abstract In explanation-based learning, a specific problem's solution is generalized into a form that can be later used to solve conceptually similar problems. Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the graphical structure of the explanation itself. However, this precludes the acquisition of concepts where an iterative or recursive process is implicitly represented in the explanation by a fixed number of applications. This paper presents an algorithm that generalizes explanation structures and reports empirical results that demonstrate the value of acquiring recursive and iterative concepts. The BAGGER2 algorithm learns recursive and iterative concepts, integrates results from multiple examples, and extracts useful subconcepts during generalization. On problems where learning a recursive rule is not appropriate, the system produces the same result as standard explanation-based methods. Applying the learned recursive rules only requires a minor extension to a PROLOG-like problem solver, namely, the ability to explicitly call a specific rule. Empirical studies demonstrate that generalizing the structure of explanations helps avoid the recently reported negative effects of learning.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
2
|
Case Based
|
cora
| 2,690
|
test
|
1-hop neighbor's text information: Truncating temporal differences: On the efficient implementation of TD(λ) for reinforcement learning. : Temporal difference (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor λ. Currently the most important application of these methods is to temporal credit assignment in reinforcement learning. Well known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of the efficient and general implementation of TD(λ) for arbitrary λ, for use with reinforcement learning algorithms optimizing the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suffer from both inefficiency and lack of generality. The TTD (Truncated Temporal Differences) procedure is proposed as an alternative, that indeed only approximates TD(λ), but requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but probably unexplored so far. Encouraging experimental results are presented, suggesting that using λ > 0 with the TTD procedure allows one to obtain a significant learning speedup at essentially the same cost as usual TD(0) learning.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
Target text information: AVERAGED REWARD REINFORCEMENT LEARNING APPLIED TO FUZZY RULE TUNING: Fuzzy rules for control can be effectively tuned via reinforcement learning. Reinforcement learning is a weak learning method, which only requires information on the success or failure of the control application. The tuning process allows people to generate fuzzy rules which are unable to accurately perform control and have them tuned to be rules which provide smooth control. This paper explores a new simplified method of using reinforcement learning for the tuning of fuzzy control rules. It is shown that the learned fuzzy rules provide smoother control in the pole balancing domain than another approach.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
5
|
Reinforcement Learning
|
cora
| 546
|
test
|
1-hop neighbor's text information: Learning control knowledge in models of expertise. : During the development and the life-cycle of knowledge-based systems the requirements on the system and the knowledge in the system will change. One of the types of knowledge affected by changing requirements is control-knowledge, which prescribes the ordering of problem-solving steps. Machine-learning can aid developers of knowledge-based systems in adapting their systems to changing requirements. A number of machine-learning techniques for learning control-knowledge have been applied to problem-solvers (Prodigy-EBL, LEX). In knowledge engineering, the focus has shifted to the construction of knowledge-level models of problem-solving instead of directly constructing a knowledge-based system in a problem-solver. In this paper we describe work in progress on how to apply machine learning techniques to the KADS model of expertise.
Target text information: Problem Solving for Redesign: A knowledge-level analysis of complex tasks like diagnosis and design can give us a better understanding of these tasks in terms of the goals they aim to achieve and the different ways to achieve these goals. In this paper we present a knowledge-level analysis of redesign. Redesign is viewed as a family of methods based on some common principles, and a number of dimensions along which redesign problem solving methods can vary are distinguished. By examining the problem-solving behavior of a number of existing redesign systems and approaches, we came up with a collection of problem-solving methods for redesign and developed a task-method structure for redesign. In constructing a system for redesign a large number of knowledge-related choices and decisions are made. In order to describe all relevant choices in redesign problem solving, we have to extend the current notion of possible relations between tasks and methods in a PSM architecture. The realization of a task by a PSM, and the decomposition of a PSM into subtasks are the most common relations in a PSM architecture. However, we suggest to extend fl This work has been funded by NWO/SION within project 612-322-316, Evolutionary design in knowledge-based systems (the REVISE-project). Participants in the REVISE-project are: the TWIST group at the University of Twente, the SWI department of the University of Amsterdam, the AI department of the Vrije Universiteit van Amsterdam and the STEVIN group at the University of Twente. these relations with the notions of task refinement and method refinement. These notions represent intermediate decisions in a task-method structure, in which the competence of a task or method is refined without immediately paying attention to the operationalization in terms of subtasks. Explicit representation of this kind of intermediate decisions helps to make and represent decisions in a more piecemeal fashion.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
2
|
Case Based
|
cora
| 658
|
train
|
1-hop neighbor's text information: Computational Models of Sensorimotor Integration Computational Maps and Motor Control.: The sensorimotor integration system can be viewed as an observer attempting to estimate its own state and the state of the environment by integrating multiple sources of information. We describe a computational framework capturing this notion, and some specific models of integration and adaptation that result from it. Psychophysical results from two sensorimotor systems, subserving the integration and adaptation of visuo-auditory maps, and estimation of the state of the hand during arm movements, are presented and analyzed within this framework. These results suggest that: (1) Spatial information from visual and auditory systems is integrated so as to reduce the variance in localization. (2) The effects of a remapping in the relation between visual and auditory space can be predicted from a simple learning rule. (3) The temporal propagation of errors in estimating the hand's state is captured by a linear dynamic observer, providing evidence for the existence of an internal model which simulates the dynamic behavior of the arm.
Target text information: Computation and Psychophysics of Sensorimotor Integration, :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 2,443
|
val
|
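The neighbouring abstract in the record above states that visual and auditory location estimates are integrated so as to reduce localization variance. One standard way to express that idea, assumed here purely for illustration (the paper's actual model may differ), is inverse-variance weighting of two unbiased cues:

```python
def fuse_estimates(x_visual, var_visual, x_auditory, var_auditory):
    """Minimum-variance linear fusion of two unbiased location estimates.

    Each cue is weighted by its inverse variance; the fused variance is
    never larger than either input variance.
    """
    precision = 1.0 / var_visual + 1.0 / var_auditory
    w_visual = (1.0 / var_visual) / precision
    x_fused = w_visual * x_visual + (1.0 - w_visual) * x_auditory
    return x_fused, 1.0 / precision

# example: a sharp visual cue dominates a noisier auditory one
print(fuse_estimates(0.0, 1.0, 5.0, 4.0))    # -> (1.0, 0.8)
```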
1-hop neighbor's text information: Approximation by scattered shifts of a radial basis function, : The paper studies L 1 (IR d )-norm approximations from a space spanned by a discrete set of translates of a basis function . Attention here is restricted to functions whose Fourier transform is smooth on IR d n0, and has a singularity at the origin. Examples of such basis functions are the thin-plate splines and the multiquadrics, as well as other types of radial basis functions that are employed in Approximation Theory. The above approximation problem is well-understood in case the set of points ffi used for translating forms a lattice in IR d , and many optimal and quasi-optimal approximation schemes can already be found in the literature. In contrast, only few, mostly specific, results are known for a set ffi of scattered points. The main objective of this paper is to provide a general tool for extending approximation schemes that use integer translates of a basis function to the non-uniform case. We introduce a single, relatively simple, conversion method that preserves the approximation orders provided by a large number of schemes presently in the literature (more precisely, to almost all "stationary schemes"). In anticipation of future introduction of new schemes for uniform grids, an effort is made to impose only a few mild conditions on the function , which still allow for a unified error analysis to hold. In the course of the discussion here, the recent results of [BuDL] on scattered center approximation are reproduced and improved upon.
1-hop neighbor's text information: Negative observations concerning approximations from spaces generated by scattered shifts of functions vanishing at infinity: Approximations by scattered shifts {φ(· − α)}, α ∈ A, of a basis function φ are considered, and different methods for localizing these translates are compared. It is argued in the note that the superior localization processes are those that employ the original translates only.
Target text information: Approximation from shift-invariant subspaces of L 2 (IR d ), CMS TSR #92-2, : A complete characterization is given of closed shift-invariant subspaces of L 2 (IR d ) which provide a specified approximation order. When such a space is principal (i.e., generated by a single function), then this characterization is in terms of the Fourier transform of the generator. As a special case, we obtain the classical Strang-Fix conditions, but without requiring the generating function to decay at infinity. The approximation order of a general closed shift-invariant space is shown to be already realized by a specifiable principal subspace.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 2,473
|
test
|
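The target abstract above characterizes approximation order through the Fourier transform of the generator and recovers the classical Strang–Fix conditions without decay assumptions. As background only, a hedged reminder of the textbook Strang–Fix conditions of order k (the paper's own Fourier-side characterization is more general than this form):

```latex
% Textbook Strang--Fix conditions of order k for a generator \varphi
% (stated as background; the cited paper weakens the decay assumptions):
\[
\hat{\varphi}(0) \neq 0,
\qquad
D^{\mu}\hat{\varphi}(2\pi\beta) = 0
\quad \text{for all } \beta \in \mathbb{Z}^{d}\setminus\{0\},\ |\mu| < k .
\]
% Under suitable decay of \varphi, these conditions correspond to
% approximation order k from the scaled shift-invariant space S^{h}(\varphi):
\[
\operatorname{dist}_{L_{2}(\mathbb{R}^{d})}\bigl(f,\, S^{h}(\varphi)\bigr)
= O(h^{k}) \qquad (h \to 0)
\quad \text{for all sufficiently smooth } f .
\]
```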
1-hop neighbor's text information: A benchmark for classifier learning. : Although many algorithms for learning from examples have been developed and many comparisons have been reported, there is no generally accepted benchmark for classifier learning. The existence of a standard benchmark would greatly assist such comparisons. Sixteen dimensions are proposed to describe classification tasks. Based on these, thirteen real-world and synthetic datasets are chosen by a set covering method from the UCI Repository of machine learning databases to form such a benchmark.
1-hop neighbor's text information: Virtual Seens and the Frequently Used Dataset: The paper considers the situation in which a learner's testing set contains close approximations of cases which appear in the training set. Such cases can be considered `virtual seens' since they are approximately seen by the learner. Generalisation measures which do not take account of the frequency of virtual seens may be misleading. The paper shows that the 1-NN algorithm can be used to derive a normalising baseline for generalisation statistics. The normalisation process is demonstrated through application to Holte's [1] study in which the generalisation performance of the 1R algorithm was tested against C4.5 on 16 commonly used datasets.
1-hop neighbor's text information: Exemplar-based Music Structure Recognition: We tend to think of what we really know as what we can talk about, and disparage knowledge that we can't verbalize. [Dowling 1989, p. 252]
Target text information: Instance-based learning algorithms. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
4
|
Theory
|
cora
| 1,504
|
val
|
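The `virtual seens' abstract in the record above uses the 1-NN algorithm as a normalising baseline. A minimal 1-nearest-neighbour classifier, given here only as an illustrative sketch (not the code used in any of the cited studies):

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    """Classify each test instance with the label of its nearest training
    instance under Euclidean distance (ties broken by the first minimum)."""
    X_train = np.asarray(X_train, dtype=float)
    preds = []
    for x in np.asarray(X_test, dtype=float):
        dists = np.linalg.norm(X_train - x, axis=1)
        preds.append(y_train[int(np.argmin(dists))])
    return np.array(preds)
```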
1-hop neighbor's text information: (1997) "Analysis of a non-reversible Markov chain sampler", : Technical Report BU-1385-M, Biometrics Unit, Cornell University Abstract We analyse the convergence to stationarity of a simple non-reversible Markov chain that serves as a model for several non-reversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that non-reversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalisation. We thank David Aldous, Martin Hildebrand, Brad Mann, and Laurent Saloff-Coste for their help.
1-hop neighbor's text information: Markov chain Monte Carlo methods based on "slicing" the density function. : Technical Report No. 9722, Department of Statistics, University of Toronto Abstract. One way to sample from a distribution is to sample uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `slice' defined by the current vertical position. Variations on such `slice sampling' methods can easily be implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and may be more efficient than easily-constructed versions of the Metropolis algorithm. Slice sampling is therefore attractive in routine Markov chain Monte Carlo applications, and for use by software that automatically generates a Markov chain sampler from a model specification. One can also easily devise overrelaxed versions of slice sampling, which sometimes greatly improve sampling efficiency by suppressing random walk behaviour. Random walks can also be avoided in some slice sampling schemes that simultaneously update all variables.
Target text information: (1995) "Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation", : Technical Report No. 9508, Department of Statistics, University of Toronto Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n 2 iterations. Such random walks can sometimes be suppressed using "overrelaxed" variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 959
|
train
|
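The target abstract in the record above introduces ordered overrelaxation based on order statistics. A minimal sketch of one such update for a single variable, assuming the full conditional can be sampled directly; the choice of K and the Gaussian example are assumptions, not the paper's settings:

```python
import numpy as np

def ordered_overrelax_step(x_current, sample_conditional, K=19, rng=None):
    """One ordered-overrelaxation update for a single variable.

    Draw K independent values from the full conditional, pool them with the
    current value, sort the pool, and return the value whose rank mirrors
    the current value's rank; K controls the strength of overrelaxation.
    """
    rng = np.random.default_rng() if rng is None else rng
    pool = np.append(sample_conditional(K, rng), x_current)   # current value at index K
    order = np.argsort(pool)
    r = int(np.where(order == K)[0][0])       # rank of the current value
    return pool[order[K - r]]                 # value at the mirrored rank

# illustrative use with a standard normal full conditional
new_x = ordered_overrelax_step(1.5, lambda k, rng: rng.normal(size=k))
```

Larger K gives stronger overrelaxation: the new value lands nearly opposite the current one around the conditional median, which is what suppresses random-walk behaviour.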
1-hop neighbor's text information: Problem Solving for Redesign: A knowledge-level analysis of complex tasks like diagnosis and design can give us a better understanding of these tasks in terms of the goals they aim to achieve and the different ways to achieve these goals. In this paper we present a knowledge-level analysis of redesign. Redesign is viewed as a family of methods based on some common principles, and a number of dimensions along which redesign problem solving methods can vary are distinguished. By examining the problem-solving behavior of a number of existing redesign systems and approaches, we came up with a collection of problem-solving methods for redesign and developed a task-method structure for redesign. In constructing a system for redesign a large number of knowledge-related choices and decisions are made. In order to describe all relevant choices in redesign problem solving, we have to extend the current notion of possible relations between tasks and methods in a PSM architecture. The realization of a task by a PSM, and the decomposition of a PSM into subtasks are the most common relations in a PSM architecture. However, we suggest to extend fl This work has been funded by NWO/SION within project 612-322-316, Evolutionary design in knowledge-based systems (the REVISE-project). Participants in the REVISE-project are: the TWIST group at the University of Twente, the SWI department of the University of Amsterdam, the AI department of the Vrije Universiteit van Amsterdam and the STEVIN group at the University of Twente. these relations with the notions of task refinement and method refinement. These notions represent intermediate decisions in a task-method structure, in which the competence of a task or method is refined without immediately paying attention to the operationalization in terms of subtasks. Explicit representation of this kind of intermediate decisions helps to make and represent decisions in a more piecemeal fashion.
1-hop neighbor's text information: A performance model for knowledge-based systems. : Most techniques for verification and validation are directed at functional properties of programs. However, other properties of programs are also essential. This paper describes a model for the average computing time of a KADS knowledge-based system based on its structure. An example taken from an existing knowledge-based system is used to demonstrate the use of the cost-model in designing the system.
Target text information: Learning control knowledge in models of expertise. : During the development and the life-cycle of knowledge-based systems the requirements on the system and the knowledge in the system will change. One of the types of knowledge affected by changing requirements is control-knowledge, which prescribes the ordering of problem-solving steps. Machine-learning can aid developers of knowledge-based systems in adapting their systems to changing requirements. A number of machine-learning techniques for learning control-knowledge have been applied to problem-solvers (Prodigy-EBL, LEX). In knowledge engineering, the focus has shifted to the construction of knowledge-level models of problem-solving instead of directly constructing a knowledge-based system in a problem-solver. In this paper we describe work in progress on how to apply machine learning techniques to the KADS model of expertise.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
2
|
Case Based
|
cora
| 2,331
|
test
|
1-hop neighbor's text information: A modification to evidential probability. : Selecting the right reference class and the right interval when faced with conflicting candidates and no possibility of establishing subset style dominance has been a problem for Kyburg's Evidential Probability system. Various methods have been proposed by Loui and Kyburg to solve this problem in a way that is both intuitively appealing and justifiable within Kyburg's framework. The scheme proposed in this paper leads to stronger statistical assertions without sacrificing too much of the intuitive appeal of Kyburg's latest proposal.
Target text information: Balls and Urns: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 797
|
train
|
1-hop neighbor's text information: Exploration bonuses and dual control. : Finding the Bayesian balance between exploration and exploitation in adaptive optimal control is in general intractable. This paper shows how to compute suboptimal estimates based on a certainty equivalence approximation arising from a form of dual control. This systematizes and extends existing uses of exploration bonuses in reinforcement learning (Sutton, 1990). The approach has two components: a statistical model of uncertainty in the world and a way of turning this into exploratory behaviour.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: On the convergence of stochastic iterative dynamic programming algorithms. : This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program. Michael I. Jordan is a NSF Presidential Young Investigator.
Target text information: Q-Learning for Bandit Problems: Multi-armed bandits may be viewed as decompositionally-structured Markov decision processes (MDP's) with potentially very large state sets. A particularly elegant methodology for computing optimal policies was developed over twenty years ago by Gittins [Gittins & Jones, 1974]. Gittins' approach reduces the problem of finding optimal policies for the original MDP to a sequence of low-dimensional stopping problems whose solutions determine the optimal policy through the so-called "Gittins indices." Katehakis and Veinott [Katehakis & Veinott, 1987] have shown that the Gittins index for a task in state i may be interpreted as a particular component of the maximum-value function associated with the "restart-in-i" process, a simple MDP to which standard solution methods for computing optimal policies, such as successive approximation, apply. This paper explores the problem of learning the Gittins indices on-line without the aid of a process model; it suggests utilizing task-state-specific Q-learning agents to solve their respective restart-in-state-i subproblems, and includes an example in which the online reinforcement learning approach is applied to a simple problem of stochastic scheduling, one instance drawn from a wide class of problems that may be formulated as bandit problems.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
5
|
Reinforcement Learning
|
cora
| 225
|
test
|
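The target abstract in the record above proposes task-state-specific Q-learning agents for the restart-in-state-i processes whose values yield Gittins indices. As background only, the basic tabular Q-learning backup such agents would perform; the dict-of-dicts Q table and the parameter values are placeholders, not details from the paper.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning backup for a dict-of-dicts Q table:
    move Q[s][a] toward the bootstrapped target r + gamma * max_a' Q[s'][a'].
    Assumes every action of s_next is already initialized in Q[s_next]."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])
    return Q
```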
1-hop neighbor's text information: Limits of Instruction-Level Parallelism, : This paper examines the limits to instruction level parallelism that can be found in programs, in particular the SPEC95 benchmark suite. Apart from using a more recent version of the SPEC benchmark suite, it differs from earlier studies in removing non-essential true dependencies that occur as a result of the compiler employing a stack for subroutine linkage. This is a subtle limitation to parallelism that is not readily evident as it appears as a true dependency on the stack pointer. Other methods can be used that do not employ a stack to remove this dependency. In this paper we show that its removal exposes far more parallelism than has been seen previously. We refer to this type of parallelism as "parallelism at a distance" because it requires impossibly large instruction windows for detection. We conclude with two observations: 1) that a single instruction window characteristic of superscalar machines is inadequate for detecting parallelism at a distance; and 2) in order to take advantage of this parallelism the compiler must be involved, or separate threads must be explicitly programmed.
1-hop neighbor's text information: A hardware mechanism for dynamic reordering of memory references. :
1-hop neighbor's text information: Task selection for a Multiscalar processor. : The Multiscalar architecture advocates a distributed processor organization and task-level speculation to exploit high degrees of instruction level parallelism (ILP) in sequential programs without impeding improvements in clock speeds. The main goal of this paper is to understand the key implications of the architectural features of distributed processor organization and task-level speculation for compiler task selection from the point of view of performance. We identify the fundamental performance issues to be: control flow speculation, data communication, data dependence speculation, load imbalance, and task overhead. We show that these issues are intimately related to a few key characteristics of tasks: task size, inter-task control flow, and inter-task data dependence. We describe compiler heuristics to select tasks with favorable characteristics. We report experimental results to show that the heuristics are successful in boosting overall performance by establishing larger ILP windows.
Target text information: The Expandable Split Window Paradigm for Exploiting Fine-Grain Parallelism, : We propose a new processing paradigm, called the Expandable Split Window (ESW) paradigm, for exploiting fine-grain parallelism. This paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. The basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. This processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von Neumann architecture. We also present an implementation of the Expandable Split Window execution model, and preliminary performance results.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
0
|
Rule Learning
|
cora
| 2,074
|
test
|
1-hop neighbor's text information: Hierarchical recurrent networks for long-term dependencies. : We have already shown that extracting long-term dependencies from sequential data is difficult, both for deterministic dynamical systems such as recurrent networks, and probabilistic models such as hidden Markov models (HMMs) or input/output hidden Markov models (IOHMMs). In practice, to avoid this problem, researchers have used domain specific a-priori knowledge to give meaning to the hidden or state variables representing past context. In this paper, we propose to use a more general type of a-priori knowledge, namely that the temporal dependencies are structured hierarchically. This implies that long-term dependencies are represented by variables with a long time scale. This principle is applied to a recurrent network which includes delays and multiple time scales. Experiments confirm the advantages of such structures. A similar approach is proposed for HMMs and IOHMMs.
1-hop neighbor's text information: Learning complex, extended sequences using the principle of history compression. : Previous neural network learning algorithms for sequence processing are computationally expensive and perform poorly when it comes to long time lags. This paper first introduces a simple principle for reducing the descriptions of event sequences without loss of information. A consequence of this principle is that only unexpected inputs can be relevant. This insight leads to the construction of neural architectures that learn to `divide and conquer' by recursively decomposing sequences. I describe two architectures. The first functions as a self-organizing multi-level hierarchy of recurrent networks. The second, involving only two recurrent networks, tries to collapse a multi-level predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets.
1-hop neighbor's text information: Discovering solutions with low Kolmogorov complexity and high generalization capability. : Many machine learning algorithms aim at finding "simple" rules to explain training data. The expectation is: the "simpler" the rules, the better the generalization on test data (Occam's razor). Most practical implementations, however, use measures for "simplicity" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the "Bayesian" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding "algorithmically simple" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and inspired by Levin's optimal universal search algorithm. With a given problem, solution candidates are computed by efficient "self-sizing" programs that influence their own runtime and storage size. The probabilistic search algorithm finds the "good" programs (the ones quickly computing algorithmically probable solutions fitting the training data). Simulations focus on the task of discovering "algorithmically simple" neural networks with low Kolmogorov complexity and high generalization capability. It is demonstrated that the method, at least with certain toy problems where it is computationally feasible, can lead to generalization results unmatchable by previous neural net algorithms. Much remains to be done, however, to make large scale applications and "incremental learning" feasible.
Target text information: Guessing can outperform many long time lag algorithms. : Numerous recent papers focus on standard recurrent nets' problems with long time lags between relevant signals. Some propose rather sophisticated, alternative methods. We show: many problems used to test previous methods can be solved more quickly by random weight guessing.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
1
|
Neural Networks
|
cora
| 1,799
|
val
|
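The target abstract in the record above argues that pure random weight guessing solves several long-time-lag benchmarks faster than gradient-based recurrent training. A minimal sketch of that baseline; the `evaluate` callback, weight bounds, and trial budget are hypothetical placeholders.

```python
import numpy as np

def guess_weights(n_weights, evaluate, trials=100_000, low=-1.0, high=1.0, seed=0):
    """Draw random weight vectors until the task-specific `evaluate`
    callback accepts one (returns True), or the trial budget runs out."""
    rng = np.random.default_rng(seed)
    for t in range(trials):
        w = rng.uniform(low, high, size=n_weights)
        if evaluate(w):              # e.g. zero error on the benchmark task
            return w, t + 1          # solution and number of guesses used
    return None, trials
```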
1-hop neighbor's text information: Hierarchical reinforcement learning with the MAXQ value function decomposition. : This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. The MAXQ decomposition has both a procedural semantics, as a subroutine hierarchy, and a declarative semantics, as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. Conditions under which the MAXQ decomposition can represent the optimal value function are derived. The paper defines a hierarchical Q learning algorithm, proves its convergence, and shows experimentally that it can learn much faster than ordinary flat Q learning. Finally, the paper discusses some interesting issues that arise in hierarchical reinforcement learning including the hierarchical credit assignment problem and non-hierarchical execution of the MAXQ hierarchy.
Target text information: Reinforcement learning with hierarchies of machines. : We present a new approach to reinforcement learning in which the policies considered by the learning process are constrained by hierarchies of partially specified machines. This allows for the use of prior knowledge to reduce the search space and provides a framework in which knowledge can be transferred across problems and in which component solutions can be recombined to solve larger and more complicated problems. Our approach can be seen as providing a link between reinforcement learning and behavior-based or teleo-reactive approaches to control. We present provably convergent algorithms for problem-solving and learning with hierarchical machines and demonstrate their effectiveness on a problem with several thousand states.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
5
|
Reinforcement Learning
|
cora
| 2,675
|
test
|
1-hop neighbor's text information: Discretization of continuous Markov chains and MCMC convergence assessment: We show in this paper that continuous state space Markov chains can be rigorously discretized into finite Markov chains. The idea is to subsample the continuous chain at renewal times related to small sets which control the discretization. Once a finite Markov chain is derived from the MCMC output, general convergence properties on finite state spaces can be exploited for convergence assessment in several directions. Our choice is based on a divergence criterion derived from Kemeny and Snell (1960), which is first evaluated on parallel chains with a stopping time, and then implemented, more efficiently, on two parallel chains only, using Birkhoff's pointwise ergodic theorem for stopping rules. The performance of this criterion is illustrated on three standard examples.
1-hop neighbor's text information: D.M. (1998) Convergence controls for MCMC algorithms, with applications to hidden Markov chains. : In complex models like hidden Markov chains, the convergence of the MCMC algorithms used to approximate the posterior distribution and the Bayes estimates of the parameters of interest must be controlled in a robust manner. We propose in this paper a series of on-line controls, which rely on classical non-parametric tests, to evaluate independence from the start-up distribution, stability of the Markov chain, and asymptotic normality. These tests lead to graphical control spreadsheets which are presented in the set-up of normal mixture hidden Markov chains to compare the full Gibbs sampler with an aggregated Gibbs sampler based on the forward-backward formulae.
1-hop neighbor's text information: Diagnosing convergence of Markov chain Monte Carlo algorithms. : We motivate the use of convergence diagnostic techniques for Markov Chain Monte Carlo algorithms and review various methods proposed in the MCMC literature. A common notation is established and each method is discussed with particular emphasis on implementational issues and possible extensions. The methods are compared in terms of their interpretability and applicability and recommendations are provided for particular classes of problems.
Target text information: (1997) MCMC Convergence Diagnostic via the Central Limit Theorem. : Markov Chain Monte Carlo (MCMC) methods, as introduced by Gelfand and Smith (1990), provide a simulation based strategy for statistical inference. The application fields related to these methods, as well as theoretical convergence properties, have been intensively studied in the recent literature. However, many improvements are still expected to provide workable and theoretically well-grounded solutions to the problem of monitoring the convergence of actual outputs from MCMC algorithms (i.e. the convergence assessment problem). In this paper, we introduce and discuss a methodology based on the Central Limit Theorem for Markov chains to assess convergence of MCMC algorithms. Instead of searching for approximate stationarity, we primarily intend to control the precision of estimates of the invariant probability measure, or of integrals of functions with respect to this measure, through confidence regions based on normal approximation. The first proposed control method tests the normality hypothesis for normalized averages of functions of the Markov chain over independent parallel chains. This normality control provides good guarantees that the whole state space has been explored, even in multimodal situations. It can lead to automated stopping rules. A second tool connected with the normality control is based on graphical monitoring of the stabilization of the variance after n iterations near the limiting variance appearing in the CLT. Both methods require no knowledge of the sampler driving the chain. In this paper, we mainly focus on finite state Markov chains, since this setting allows us to derive consistent estimates of both the limiting variance and the variance after n iterations. Heuristic procedures based on Berry-Esseen bounds are investigated. An extension to the continuous case is also proposed. Numerical simulations illustrating the performance of these methods are given for several examples: a finite chain with multimodal invariant probability, a finite state random walk for which the theoretical rate of convergence to stationarity is known, and a continuous state chain with multimodal invariant probability issued from a Gibbs sampler.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
|
6
|
Probabilistic Methods
|
cora
| 2,277
|
train
|
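The first control method in the target abstract above tests normality of normalized averages computed over independent parallel chains. A rough sketch of such a check, assuming a Shapiro–Wilk test as the normality test and a simple (n_chains, n_iter) array layout; the paper's exact procedure may differ.

```python
import numpy as np
from scipy import stats

def clt_normality_check(chains, f=None, alpha=0.05):
    """Normality control over parallel chains: average f(X_t) within each
    chain, standardise the per-chain averages, and test them for normality.

    `chains` is assumed to be array-like with shape (n_chains, n_iter);
    n_chains should be reasonably large for the test to be meaningful.
    """
    values = np.asarray(chains, dtype=float)
    if f is not None:
        values = f(values)                                       # elementwise transform
    means = values.mean(axis=1)                                  # one average per chain
    z = (means - means.mean()) / (means.std(ddof=1) + 1e-12)     # normalised averages
    stat, p_value = stats.shapiro(z)
    return p_value > alpha, p_value        # True if normality is not rejected
```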
1-hop neighbor's text information: An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants. : Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only "hard" areas but also outliers and noise.
1-hop neighbor's text information: A Decision-theoretic Generalization of On-line Learning and an Application to Boosting. : We consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth [10] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R^n.
1-hop neighbor's text information: MAJORITY VOTE CLASSIFIERS: THEORY AND APPLICATIONS:
Target text information: Prediction games and arcing algorithms. : Technical Report 504, December 19, 1997, Statistics Department, University of California, Berkeley, CA 94720. Abstract: The theory behind the success of adaptive reweighting and combining algorithms (arcing) such as Adaboost (Freund and Schapire [1995], [1996]) and others in reducing generalization error has not been well understood. By formulating prediction, both classification and regression, as a game where one player makes a selection from instances in the training set and the other a convex linear combination of predictors from a finite set, existing arcing algorithms are shown to be algorithms for finding good game strategies. An optimal game strategy finds a combined predictor that minimizes the maximum of the error over the training set. A bound on the generalization error for the combined predictors in terms of their maximum error is proven that is sharper than bounds to date. Arcing algorithms are described that converge to the optimal strategy. Schapire et al. [1997] offered an explanation of why Adaboost works in terms of its ability to reduce the margin. Comparing Adaboost to our optimal arcing algorithm shows that their explanation is not valid and that the answer lies elsewhere. In this situation the VC-type bounds are misleading. Some empirical results are given to explore the situation.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 2085 | test |
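The preceding record analyzes Adaboost-style arcing algorithms as reweighting schemes over the training set. As background, here is a minimal self-contained sketch of the standard AdaBoost reweighting loop with axis-aligned decision stumps; the stump learner, round count, and toy data are illustrative choices, and this is not Breiman's optimal arcing algorithm.

```python
import numpy as np

def stump_fit(X, y, w):
    """Weighted-error-minimizing axis-aligned stump; labels y are in {-1, +1}."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, n_rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                      # uniform initial weighting
    ensemble = []
    for _ in range(n_rounds):
        err, j, thr, sign = stump_fit(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)           # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)       # toy separable problem
model = adaboost(X, y)
print("training accuracy:", np.mean(predict(model, X) == y))
```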
1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models
1-hop neighbor's text information: June 1994. To appear in Neural Computation. A Counterexample to Temporal Differences Learning: Sutton's TD(λ) method aims to provide a representation of the cost function in an absorbing Markov chain with transition costs. A simple example is given where the representation obtained depends on λ. For λ = 1 the representation is optimal with respect to a least squares error criterion, but as λ decreases towards 0 the representation becomes progressively worse and, in some cases, very poor. The example suggests a need to understand better the circumstances under which TD(0) and Q-learning obtain satisfactory neural network-based compact representations of the cost function. A variation of TD(0) is also proposed, which performs better on the example.
1-hop neighbor's text information: Mathematical programming in neural networks. : This paper highlights the role of mathematical programming, particularly linear programming, in training neural networks. A neural network description is given in terms of separating planes in the input space that suggests the use of linear programming for determining these planes. A more standard description in terms of a mean square error in the output space is also given, which leads to the use of unconstrained minimization techniques for training a neural network. The linear programming approach is demonstrated by a brief description of a system for breast cancer diagnosis that has been in use for the last four years at a major medical facility.
Target text information: "Serial and Parallel Backpropagation 18 References Convergence Via Nonmonotone Perturbed Minimization," : The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method . Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converging, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1693 | test |
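The target abstract in the record above studies online backpropagation under learning-rate schedules whose series diverges while the series of squares converges. A toy sketch of such a schedule, a/(t+1), applied to online gradient descent on a single linear unit is given below; it only illustrates the step-size condition and is not the paper's convergence analysis.

```python
import numpy as np

def online_gradient_descent(X, y, epochs=50, a=0.5, seed=0):
    """Online gradient descent on squared error for a single linear unit with
    step sizes a/(t+1): the series of rates diverges while the series of their
    squares converges (the condition discussed in the record above)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):        # sample-by-sample (online) pass
            eta = a / (t + 1)
            err = X[i] @ w - y[i]
            w -= eta * err * X[i]                # gradient of 0.5 * err**2
            t += 1
    return w

X = np.random.default_rng(1).normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
print(online_gradient_descent(X, y))             # should move close to true_w
```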
1-hop neighbor's text information: Learning active classifiers. : Many classification algorithms are "passive", in that they assign a class-label to each instance based only on the description given, even if that description is incomplete. In contrast, an active classifier can, at some cost, obtain the values of missing attributes, before deciding upon a class label. The expected utility of using an active classifier depends on both the cost required to obtain the additional attribute values and the penalty incurred if it outputs the wrong classification. This paper considers the problem of learning near-optimal active classifiers, using a variant of the probably-approximately-correct (PAC) model. After defining the framework, which is perhaps the main contribution of this paper, we describe a situation where this task can be achieved efficiently, but then show that the task is often intractable.
Target text information: Exploiting the omission of irrelevant data. : Most learning algorithms work most effectively when their training data contain completely specified labeled samples. In many diagnostic tasks, however, the data will include the values of only some of the attributes; we model this as a blocking process that hides the values of those attributes from the learner. While blockers that remove the values of critical attributes can handicap a learner, this paper instead focuses on blockers that remove only irrelevant attribute values, i.e., values that are not needed to classify an instance, given the values of the other unblocked attributes. We first motivate and formalize this model of "superfluous-value blocking", and then demonstrate that these omissions can be useful, by proving that certain classes that seem hard to learn in the general PAC model, viz., decision trees and DNF formulae, are trivial to learn in this setting. We also show that this model can be extended to deal with (1) theory revision (i.e., modifying an existing formula); (2) blockers that occasionally include superfluous values or exclude required values; and (3) other corruptions of the training data.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 1045 | test |
1-hop neighbor's text information: Extended selection mechanisms in genetic algorithms. :
Target text information: Pruning backpropagation neural networks using modern stochastic optimization techniques: Approaches combining genetic algorithms and neural networks have received a great deal of attention in recent years. As a result, much work has been reported in two major areas of neural network design: training and topology optimization. This paper focuses on the key issues associated with the problem of pruning a multilayer perceptron using genetic algorithms and simulated annealing. The study presented considers a number of aspects associated with network training that may alter the behavior of a stochastic topology optimizer. Enhancements are discussed that can improve topology searches. Simulation results for the two mentioned stochastic optimization methods applied to nonlinear system identification are presented and compared with a simple random search.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 469 | val |
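The record above prunes multilayer perceptrons with genetic algorithms and simulated annealing. A minimal sketch of the simulated-annealing half, searching over binary connection masks with a Metropolis accept/reject rule, is shown below; the error function, cooling schedule, and sparsity penalty are assumed for illustration and are not taken from the paper.

```python
import numpy as np

def anneal_prune(weights, network_error, n_steps=2000, T0=1.0, cooling=0.995, seed=0):
    """Simulated-annealing search over binary connection masks.
    network_error(masked_weights) is assumed to return a validation error."""
    rng = np.random.default_rng(seed)
    mask = np.ones_like(weights, dtype=bool)
    err = best_err = network_error(weights)
    best_mask, T = mask.copy(), T0
    for _ in range(n_steps):
        cand = mask.copy()
        cand[rng.integers(len(weights))] ^= True             # flip one connection
        cand_err = network_error(weights * cand)
        # trade off error against network size (small illustrative penalty)
        delta = (cand_err - err) + 1e-3 * (cand.sum() - mask.sum()) / len(weights)
        if delta < 0 or rng.uniform() < np.exp(-delta / T):  # Metropolis rule
            mask, err = cand, cand_err
            if err < best_err:
                best_mask, best_err = mask.copy(), err
        T *= cooling
    return best_mask, best_err

# Toy "network": error is the squared distance of the masked weights from a target.
w = np.array([0.9, 0.02, -1.1, 0.01, 0.5])
target = np.array([1.0, 0.0, -1.0, 0.0, 0.5])
print(anneal_prune(w, lambda v: float(np.sum((v - target) ** 2))))
```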
1-hop neighbor's text information: An Interactive Planning Architecture The Forest Fire Fighting case: This paper describes an interactive planning system that was developed inside an Intelligent Decision Support System aimed at supporting an operator when planning the initial attack to forest fires. The planning architecture rests on the integration of case-based reasoning techniques with constraint reasoning techniques exploited, mainly, for performing temporal reasoning on temporal metric information. Temporal reasoning plays a central role in supporting interactive functions that are provided to the user when performing two basic steps of the planning process: plan adaptation and resource scheduling. A first prototype was integrated with a situation assessment and a resource allocation manager subsystem and is currently being tested.
1-hop neighbor's text information: "Planning in a Complex Real Domain", : Dimensions of complexity raised during the definition of a system aimed at supporting the planning of initial attack to forest fires are presented and discussed. The complexity deriving from the highly dynamic and unpredictable domain of forest fire, the one realated to the individuation and integration of planning techniques suitable to this domain, the complexity of addressing the problem of taking into account the role of the user to be supported by the system and finally the complexity of an architecture able to integrate different subsystems. In particular we focus on the severe constraints to the definition of a planning approach posed by the fire fighting domain, constraints which cannot be satisfied completely by any of the current planning paradigms. We propose an approach based on the integratation of skeletal planning and case based reasoning techniques with constraint reasoning. More specifically temporal constraints are used in two steps of the planning process: plan fitting and adaptation, and resource scheduling. Work on the development of the system software architecture with a OOD methodology is in progress.
Target text information: a platform for emergencies management systems. : This paper describe the functional architecture of CHARADE a software platform devoted to the development of a new generation of intelligent environmental decision support systems. The CHARADE platform is based on the a task-oriented approach to system design and on the exploitation of a new architecture for problem solving, that integrates case-based reasoning and constraint reasoning. The platform is developed in an objectoriented environment and upon that a demonstrator will be developed for managing first intervention attack to forest fires.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 954 | train |
1-hop neighbor's text information: Beyond independence: Conditions for the optimality of the simple bayesian classifier. : The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed so far. In this paper we show that the SBC does not in fact assume attribute independence, and can be optimal even when this assumption is violated by a wide margin. The key to this finding lies in the distinction between classification and probability estimation: correct classification can be achieved even when the probability estimates used contain large errors. We show that the previously-assumed region of optimality of the SBC is a second-order infinitesimal fraction of the actual one. This is followed by the derivation of several necessary and several sufficient conditions for the optimality of the SBC. For example, the SBC is optimal for learning arbitrary conjunctions and disjunctions, even though they violate the independence assumption. The paper also reports empirical evidence of the SBC's competitive performance in domains containing substantial degrees of attribute dependence.
1-hop neighbor's text information: Feature subset selection using the wrapper method: Overfitting and dynamic search space. : In the wrapper approach to feature subset selection, a search for an optimal set of features is made using the induction algorithm as a black box. The estimated future performance of the algorithm is the heuristic guiding the search. Statistical methods for feature subset selection including forward selection, backward elimination, and their stepwise variants can be viewed as simple hill-climbing techniques in the space of feature subsets. We utilize best-first search to find a good feature subset and discuss overfitting problems that may be associated with searching too many feature subsets. We introduce compound operators that dynamically change the topology of the search space to better utilize the information available from the evaluation of feature subsets. We show that compound operators unify previous approaches that deal with relevant and irrelevant features. The improved feature subset selection yields significant improvements for real-world datasets when using the ID3 and the Naive-Bayes induction algorithms.
1-hop neighbor's text information: "An analysis of bayesian classifiers," : In this paper we present an average-case analysis of the Bayesian classifier, a simple probabilistic induction algorithm that fares remarkably well on many learning tasks. Our analysis assumes a monotone conjunctive target concept, Boolean attributes that are independent of each other and that follow a single distribution, and the absence of attribute noise. We first calculate the probability that the algorithm will induce an arbitrary pair of concept descriptions; we then use this expression to compute the probability of correct classification over the space of instances. The analysis takes into account the number of training instances, the number of relevant and irrelevant attributes, the distribution of these attributes, and the level of class noise. In addition, we explore the behavioral implications of the analysis by presenting predicted learning curves for a number of artificial domains. We also give experimental results on these domains as a check on our reasoning. Finally, we discuss some unresolved questions about the behavior of Bayesian classifiers and outline directions for future research. Note: Without acknowledgements and references, this paper fits into 12 pages with dimensions 5.5 inches fi 7.5 inches using 12 point LaTeX type. However, we find the current format more desirable. We have not submitted the paper to any other conference or journal.
Target text information: Visualizing the simple bayesian classifier. In KDD Workshop on Issues in the Integration of Data Mining and Data Visualization. : The simple Bayesian classifier (SBC), sometimes called Naive-Bayes, is built based on a conditional independence model of each attribute given the class. The model was previously shown to be surprisingly robust to obvious violations of this independence assumption, yielding accurate classification models even when there are clear conditional dependencies. The SBC can serve as an excellent tool for initial exploratory data analysis when coupled with a visualizer that makes its structure comprehensible. We describe such a visual representation of the SBC model that has been successfully implemented. We describe the requirements we had for such a visualization and the design decisions we made to satisfy them.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 2686 | test |
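Several records in this section revolve around the simple Bayesian (Naive-Bayes) classifier. For reference, a compact categorical Naive-Bayes with Laplace smoothing is sketched below; it is a generic textbook formulation, not the visualizer described in the target abstract, and the toy weather data are invented for illustration.

```python
import numpy as np
from collections import defaultdict

class NaiveBayes:
    """Categorical Naive-Bayes: P(c | x) is proportional to P(c) * prod_j P(x_j | c)."""
    def fit(self, X, y, alpha=1.0):
        self.alpha = alpha
        self.classes = sorted(set(y))
        self.class_counts = {c: 0 for c in self.classes}
        self.value_counts = defaultdict(float)          # keyed by (class, attribute, value)
        self.values = [set(col) for col in zip(*X)]     # observed values per attribute
        for xi, c in zip(X, y):
            self.class_counts[c] += 1
            for j, v in enumerate(xi):
                self.value_counts[(c, j, v)] += 1
        return self

    def predict(self, x):
        n = sum(self.class_counts.values())
        def log_posterior(c):
            logp = np.log(self.class_counts[c] / n)
            for j, v in enumerate(x):
                num = self.value_counts[(c, j, v)] + self.alpha          # Laplace smoothing
                den = self.class_counts[c] + self.alpha * len(self.values[j])
                logp += np.log(num / den)
            return logp
        return max(self.classes, key=log_posterior)

X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "no"]
print(NaiveBayes().fit(X, y).predict(("rain", "mild")))
```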
1-hop neighbor's text information: Bayesian and information-theoretic priors for Bayesian network parameters. : We consider Bayesian and information-theoretic approaches for determining non-informative prior distributions in a parametric model family. The information-theoretic approaches are based on the recently modified definition of stochastic complexity by Rissanen, and on the Minimum Message Length (MML) approach by Wallace. The Bayesian alternatives include the uniform prior, and the equivalent sample size priors. In order to be able to empirically compare the different approaches in practice, the methods are instantiated for a model family of practical importance, the family of Bayesian networks.
1-hop neighbor's text information: MML and Bayesianism: similarities and differences. : Tech Report 207 Department of Computer Science, Monash University, Clayton, Vic. 3168, Australia Abstract: This paper continues the introduction to minimum encoding inductive inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We describe the message length estimates used in Wallace's Minimum Message Length (MML) inference and Rissanen's Minimum Description Length (MDL) inference. The differences in the message length estimates of the two approaches are explained. The implications of these differences for applications are discussed.
Target text information: Stochastic Complexity Based Estimation of Missing Elements in Questionnaire Data: In this paper we study a new information-theoretically justified approach to missing data estimation for multivariate categorical data. The approach discussed is a model-based imputation procedure relative to a model class (i.e., a functional form for the probability distribution of the complete data matrix), which in our case is the set of multinomial models with some independence assumptions. Based on the given model class assumption an information-theoretic criterion can be derived to select between the different complete data matrices. Intuitively this general criterion, called stochastic complexity, represents the shortest code length needed for coding the complete data matrix relative to the model class chosen. Using this information-theoretic criterion, the missing data problem is reduced to a search problem, i.e., finding the data completion with minimal stochastic complexity. In the experimental part of the paper we present empirical results of the approach using two real data sets, and compare these results to those achieved by commonly used techniques such as case deletion and imputing sample averages.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 639 | test |
1-hop neighbor's text information: An adaptation of Relief for attribute estimation in regression: Heuristic measures for estimating the quality of attributes mostly assume the independence of attributes, so in domains with strong dependencies between attributes their performance is poor. Relief and its extension ReliefF are capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. By exploiting local information provided by different contexts they provide a global view. We present the analysis of ReliefF which led us to its adaptation to regression (continuous class) problems. The experiments on artificial and real-world data sets show that Regressional ReliefF correctly estimates the quality of attributes in various conditions, and can be used for non-myopic learning of the regression trees. Regressional ReliefF and ReliefF provide a unified view on estimating the attribute quality in regression and classification.
1-hop neighbor's text information: Prognosing the Survival Time of the Patients with the Anaplastic Thyroid Carcinoma with Machine Learning: Anaplastic thyroid carcinoma is a rare but very aggressive tumor. Many factors that might influence the survival of patients have been suggested. The aim of our study was to determine which of the factors, known at the time of admission to the hospital, might predict survival of patients with anaplastic thyroid carcinoma. Our aim was also to assess the relative importance of the factors and to identify potentially useful decision and regression trees generated by machine learning algorithms. Our study included 126 patients (90 females and 36 males; mean age was 66.7 years) with anaplastic thyroid carcinoma treated at the Institute of Oncology Ljubljana from 1972 to 1992. Patients were classified into categories according to 11 attributes: sex, age, history, physical findings, extent of disease on admission, and tumor morphology. In this paper we compare the machine learning approach with the previous statistical evaluations on the problem (univariate and multivariate analysis) and show that it can provide more thorough analysis and improve understanding of the data.
1-hop neighbor's text information: a stochastic approach to Inductive Logic Programming. : Current systems in the field of Inductive Logic Programming (ILP) use, primarily for the sake of efficiency, heuristically guided search techniques. Such greedy algorithms suffer from the local optimization problem. This paper describes a system named SFOIL that tries to alleviate this problem by using a stochastic search method based on a generalization of simulated annealing, called the Markovian neural network. Various tests were performed on benchmark and real-world domains. The results show both advantages and weaknesses of the stochastic approach.
Target text information: `Overcoming the myopia of inductive learning algorithms with relieff', : Current inductive machine learning algorithms typically use greedy search with limited lookahead. This prevents them from detecting significant conditional dependencies between the attributes that describe training objects. Instead of myopic impurity functions and lookahead, we propose to use RELIEFF, an extension of RELIEF developed by Kira and Rendell [10], [11], for heuristic guidance of inductive learning algorithms. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems and the results are compared with some other well known machine learning algorithms. Excellent results on artificial data sets and two real world problems show the advantage of the presented approach to inductive learning.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 0 | Rule Learning | cora | 2513 | val |
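The record above builds on Relief/ReliefF attribute estimation. The sketch below implements the basic two-class Relief weight update (nearest hit and nearest miss per sampled instance); it omits the k-nearest-neighbor and multi-class extensions that distinguish ReliefF, and the toy data are illustrative.

```python
import numpy as np

def relief(X, y, n_samples=200, seed=0):
    """Basic two-class Relief (Kira & Rendell style): reward features that differ
    on the nearest miss and penalize features that differ on the nearest hit."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12
    w = np.zeros(X.shape[1])
    for _ in range(n_samples):
        i = rng.integers(len(X))
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                                    # exclude the instance itself
        hit = np.argmin(np.where(y == y[i], dists, np.inf))
        miss = np.argmin(np.where(y != y[i], dists, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / (span * n_samples)
    return w

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 4))
y = (X[:, 0] > 0.5).astype(int)            # only feature 0 carries class information
print(relief(X, y))                        # weight of feature 0 should stand out
```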
1-hop neighbor's text information: Estimating dependency structure as a hidden variable. : This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors.
1-hop neighbor's text information: Comparing predictive inference methods for discrete domains. : Predictive inference is seen here as the process of determining the predictive distribution of a discrete variable, given a data set of training examples and the values for the other problem domain variables. We consider three approaches for computing this predictive distribution, and assume that the joint probability distribution for the variables belongs to a set of distributions determined by a set of parametric models. In the simplest case, the predictive distribution is computed by using the model with the maximum a posteriori (MAP) posterior probability. In the evidence approach, the predictive distribution is obtained by averaging over all the individual models in the model family. In the third case, we define the predictive distribution by using Rissanen's new definition of stochastic complexity. Our experiments performed with the family of Naive Bayes models suggest that when using all the data available, the stochastic complexity approach produces the most accurate predictions in the log-score sense. However, when the amount of available training data is decreased, the evidence approach clearly outperforms the two other approaches. The MAP predictive distribution is clearly inferior in the log-score sense to the two more sophisticated approaches, but for the 0/1-score the MAP approach may still in some cases produce the best results.
Target text information: Constructing bayesian finite mixture models by the EM algorithm. : Email: [email protected] Report C-1996-9, University of Helsinki, Department of Computer Science. Abstract In this paper we explore the use of finite mixture models for building decision support systems capable of sound probabilistic inference. Finite mixture models have many appealing properties: they are computationally efficient in the prediction (reasoning) phase, they are universal in the sense that they can approximate any problem domain distribution, and they can handle multimod-ality well. We present a formulation of the model construction problem in the Bayesian framework for finite mixture models, and describe how Bayesian inference is performed given such a model. The model construction problem can be seen as missing data estimation and we describe a realization of the Expectation-Maximization (EM) algorithm for finding good models. To prove the feasibility of our approach, we report crossvalidated empirical results on several publicly available classification problem datasets, and compare our results to corresponding results obtained by alternative techniques, such as neural networks and decision trees. The comparison is based on the best results reported in the literature on the datasets in question. It appears that using the theoretically sound Bayesian framework suggested here the other reported results can be outperformed with a relatively small effort.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 1218 | test |
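The target abstract above constructs finite mixture models with the EM algorithm. As a generic illustration of the EM loop, here is maximum-likelihood EM for a one-dimensional Gaussian mixture; the paper itself works with Bayesian mixtures for discrete data, which this sketch does not reproduce.

```python
import numpy as np

def em_gaussian_mixture(x, k=2, n_iter=100, seed=0):
    """Maximum-likelihood EM for a one-dimensional mixture of k Gaussians."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of mixture weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gaussian_mixture(data, k=2))
```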
1-hop neighbor's text information: Improving elevator performance using reinforcement learning. : This paper describes the application of reinforcement learning (RL) to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable and they are nonstationary due to changing passenger arrival rates. In addition, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic optimization problem of practical utility.
1-hop neighbor's text information: Discovering solutions with low Kolmogorov complexity and high generalization capability. : Many machine learning algorithms aim at finding "simple" rules to explain training data. The expectation is: the "simpler" the rules, the better the generalization on test data (Occam's razor). Most practical implementations, however, use measures for "simplicity" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the "Bayesian" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding "algorithmically simple" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and inspired by Levin's optimal universal search algorithm. With a given problem, solution candidates are computed by efficient "self-sizing" programs that influence their own runtime and storage size. The probabilistic search algorithm finds the "good" programs (the ones quickly computing algorithmically probable solutions fitting the training data). Simulations focus on the task of discovering "algorithmically simple" neural networks with low Kolmogorov complexity and high generalization capability. It is demonstrated that the method, at least with certain toy problems where it is computationally feasible, can lead to generalization results unmatchable by previous neural net algorithms. Much remains to be done, however, to make large scale applications and "incremental learning" feasible.
1-hop neighbor's text information: Markov games as a framework for multi-agent reinforcement learning. : In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
Target text information: Learning team strategies with multiple policy-sharing agents: A soccer case study. : We use simulated soccer to study multiagent learning. Each team's players (agents) share action set and policy but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare two learning algorithms: TD-Q learning with linear neural networks (TD-Q) and Probabilistic Incremental Program Evolution (PIPE). TD-Q is based on evaluation functions (EFs) mapping input/action pairs to expected reward, while PIPE searches policy space directly. PIPE uses adaptive "probabilistic prototype trees" to synthesize programs that calculate action probabilities from current inputs. Our results show that TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE, however, does not depend on EFs and can find good policies faster and more reliably. This suggests that in multiagent learning scenarios direct search through policy space can offer advantages over EF-based approaches.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 1158 | test |
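The preceding record compares value-function methods (TD-Q) with direct policy search in a multiagent setting. For context, the basic single-agent tabular Q-learning update that such value-function methods build on is sketched below on a toy chain environment; the environment, optimistic initialization, and hyperparameters are illustrative assumptions, not the soccer system described.

```python
import numpy as np

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = np.random.default_rng(seed)
    Q = np.ones((n_states, n_actions))       # optimistic initialization aids exploration
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.integers(n_actions) if rng.uniform() < eps else int(np.argmax(Q[s]))
            s_next, r, done = env_step(s, a, rng)
            target = r if done else r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q

# Toy chain: states 0..4; action 1 moves right, action 0 moves left;
# reaching state 4 yields reward 1 and ends the episode.
def chain_step(s, a, rng):
    s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == 4), s_next == 4

Q = q_learning(chain_step, 5, 2)
print(np.argmax(Q[:4], axis=1))              # should favour action 1 in states 0-3
```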
1-hop neighbor's text information: DIAGNOSING AND CORRECTING SYSTEM ANOMALIES WITH A ROBUST CLASSIFIER: If a robust statistical model has been developed to classify the ``health'' of a system, a well-known Taylor series approximation technique forms the basis of a diagnostic/recovery procedure that can be initiated when the system's health degrades or fails altogether. This procedure determines a ranked set of probable causes for the degraded health state, which can be used as a prioritized checklist for isolating system anomalies and quantifying corrective action. The diagnostic/recovery procedure is applicable to any classifier known to be robust; it can be applied to both neural network and traditional parametric pattern classifiers generated by a supervised learning procedure in which an empirical risk/benefit measure is optimized. We describe the procedure mathematically and demonstrate its ability to detect and diagnose the cause(s) of faults in NASA's Deep Space Communications Complex at Goldstone, California.
1-hop neighbor's text information: Hampshire (1992), A Differential Theory of Learning for Statistical Pattern Recognition with Connectionist Models, : We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.
Target text information: Why Error Measures are Sub-Optimal for Training Neural Network Pattern Classifiers. : We outline a differential theory of learning for statistical pattern classification. When applied to neural networks, the theory leads to an efficient differential learning strategy based on classification figure-of-merit (CFM) objective functions [5]. Differential learning guarantees the highest probability of generalization for a classifier with limited functional complexity, trained with a limited number of examples. The theory is significant for this and two other reasons: We demonstrate the importance of differential learning's efficiency with a simple pattern recognition task that lends itself to closed-form analysis. We conclude with a practical application of the theory in which a differentially trained perceptron diagnoses a crippling joint disorder from magnetic resonance images better than both its probabilistically trained counterpart and more complex probabilistically trained multi-layer perceptrons. The recent renaissance of connectionism has led to a considerable amount of research regarding generalization in neural network pattern classifiers that are trained in a supervised fashion. Most of this research has been done by computational learning theorists and statisticians intent on matching the functional complexity of the classifier with the size of the training sample in order to avoid the well-known curse of dimensionality (see for example the work of Barron, Baum, Haussler, and Vapnik much of which is summarized in [8]). Yet relatively little attention has been paid to the effect that the objective function (used to drive the supervised learning procedure) has on discrimination and generalization [6, 1, 5, 7, 2, 3]. Copyright © 1992 by J. B. Hampshire II and B. V. K. V. Kumar: all rights reserved. Copyright is automatically extended to IEEE if this submission is accepted for presentation/publication. This research was funded by the Air Force Office of Scientific Research (grant AFOSR-89-0551) and supported by a supercomputing grant from the National Science Foundation's Pittsburgh Supercomputing Center (grant CCR920002P). The views and conclusions contained in this submission are the authors' and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Air Force, the National Science Foundation, or the U.S. Government.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1415 | test |
1-hop neighbor's text information: (1995a) Ensemble learning and evidence maximization. : Ensemble learning by variational free energy minimization is a tool introduced to neural networks by Hinton and van Camp in which learning is described in terms of the optimization of an ensemble of parameter vectors. The optimized ensemble is an approximation to the posterior probability distribution of the parameters. This tool has now been applied to a variety of statistical inference problems. In this paper I study a linear regression model with both parameters and hyper-parameters. I demonstrate that the evidence approximation for the optimization of regularization constants can be derived in detail from a free energy minimization view point.
1-hop neighbor's text information: Developments in probabilistic modelling with neural networks| ensemble learning. : In this paper I give a review of ensemble learning using a simple example.
Target text information: "Free energy minimization algorithm for decoding and cryptanalysis", : where A is a binary matrix. Our task is to infer s given z and A, and given assumptions about the statistical properties of s and n. This problem arises in the decoding of a noisy communication z which was transmitted using an error-correcting code based on parity checks of the original signal s, and in the inference of the sequence of a linear feedback shift register (LFSR) from a noisy observation of the sequence [1]. P (zjA) I assume the decoder's aim is to find the most probable s. For large N an exhaustive search over the 2 N possible sequences s is not feasible. One way to attack such a combinatorial problem is to create a related continuous optimization problem in which the discrete variables are replaced by real variables [2, 3, 4]. Here I derive a continuous representation in terms of a free energy approximation [5] to the awkward posterior distribution (3).
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2452 | test |
1-hop neighbor's text information: Automatic Feature Extraction in Machine Learning: This thesis presents a machine learning model capable of extracting discrete classes out of continuous valued input features. This is done using a neurally inspired novel competitive classifier (CC) which feeds the discrete classifications forward to a supervised machine learning model. The supervised learning model uses the discrete classifications and perhaps other information available to solve a problem. The supervised learner then generates feedback to guide the CC into potentially more useful classifications of the continuous valued input features. Two supervised learning models are combined with the CC creating ASOCS-AFE and ID3-AFE. Both models are simulated and the results are analyzed. Based on these results, several areas of future research are proposed.
1-hop neighbor's text information: "Induction of Decision Trees," :
Target text information: BRACE: A Paradigm For the Discretization of Continuously Valued Data, : Discretization of continuously valued data is a useful and necessary tool because many learning paradigms assume nominal data. A list of objectives for efficient and effective discretization is presented. A paradigm called BRACE (Boundary Ranking And Classification Evaluation) that attempts to meet the objectives is presented along with an algorithm that follows the paradigm. The paradigm meets many of the objectives, with potential for extension to meet the remainder. Empirical results have been promising. For these reasons BRACE has potential as an effective and efficient method for discretization of continuously valued data. A further advantage of BRACE is that it is general enough to be extended to other types of clustering/unsupervised learning.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 1527 | test |
1-hop neighbor's text information: Revising Bayesian networks parameters using backpropagation. : The problem of learning Bayesian networks with hidden variables is known to be a hard problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is hard. In this paper, we present an approach that learns the conditional probabilities on a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The conditional probabilities are mapped onto weights in the ANN, which are then learned using standard backpropagation techniques. To avoid the problem of exponentially large ANNs, we focus on Bayesian networks with noisy-or and noisy-and nodes. Experiments on real world classification problems demonstrate the effectiveness of our technique.
1-hop neighbor's text information: Mining and Model Simplicity: A Case Study in Diagnosis: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD), 1996. The official version of this paper has been published by the American Association for Artificial Intelligence (http://www.aaai.org) © 1996, American Association for Artificial Intelligence. All rights reserved. Abstract: We describe the results of performing data mining on a challenging medical diagnosis domain, acute abdominal pain. This domain is well known to be difficult, yielding little more than 60% predictive accuracy for most human and machine diagnosticians. Moreover, many researchers argue that one of the simplest approaches, the naive Bayesian classifier, is optimal. By comparing the performance of the naive Bayesian classifier to its more general cousin, the Bayesian network classifier, and to selective Bayesian classifiers with just 10% of the total attributes, we show that the simplest models perform at least as well as the more complex models. We argue that simple models like the selective naive Bayesian classifier will perform as well as more complicated models for similarly complex domains with relatively small data sets, thereby calling into question the extra expense necessary to induce more complex models.
1-hop neighbor's text information: Efficient learning of selective Bayesian network classifiers. : In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic metrics that are extensions of metrics used extensively in decision tree learning, namely Quinlan's gain and gain ratio metrics and Mantaras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.
Target text information: Learning Bayesian networks using feature selection. : This paper introduces a novel enhancement for learning Bayesian networks with a bias for small, high-predictive-accuracy networks. The new approach selects a subset of features which maximizes predictive accuracy prior to the network learning phase. We examine explicitly the effects of two aspects of the algorithm, feature selection and node ordering. Our approach generates networks which are computationally simpler to evaluate and which display predictive accuracy comparable to that of Bayesian networks which model all attributes.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 2638 | test |
1-hop neighbor's text information: On Learning Read-k-Satisfy-j DNF: We study the learnability of Read-k-Satisfy-j (RkSj) DNF formulas. These are boolean formulas in disjunctive normal form (DNF), in which the maximum number of occurrences of a variable is bounded by k, and the number of terms satisfied by any assignment is at most j. After motivating the investigation of this class of DNF formulas, we present an algorithm that for any unknown RkSj DNF formula to be learned, with high probability finds a logically equivalent DNF formula using the well-studied protocol of equivalence and membership queries. The algorithm runs in polynomial time for k · j = O(log n / log log n), where n is the number of input variables.
1-hop neighbor's text information: Learning with queries but incomplete information. : We investigate learning with membership and equivalence queries assuming that the information provided to the learner is incomplete. By incomplete we mean that some of the membership queries may be answered by I don't know. This model is a worst-case version of the incomplete membership query model of Angluin and Slonim. It attempts to model practical learning situations, including an experiment of Lang and Baum that we describe, where the teacher may be unable to answer reliably some queries that are critical for the learning algorithm. We present algorithms to learn monotone k-term DNF with membership queries only, and to learn monotone DNF with membership and equivalence queries. Compared to the complete information case, the query complexity increases by an additive term linear in the number of I don't know answers received. We also observe that the blowup in the number of queries can in general be exponential for both our new model and the incomplete membership model.
1-hop neighbor's text information: Learning conjunctions of Horn clauses. :
Target text information: Learning Boolean read-once formulas with arbitrary symmetric and constant fan-in gates. : A read-once formula is a boolean formula in which each variable occurs at most once. Such formulas are also called μ-formulas or boolean trees. This paper treats the problem of exactly identifying an unknown read-once formula using specific kinds of queries. The main results are a polynomial time algorithm for exact identification of monotone read-once formulas using only membership queries, and a polynomial time algorithm for exact identification of general read-once formulas using equivalence and membership queries (a protocol based on the notion of a minimally adequate teacher [1]). Our results improve on Valiant's previous results for read-once formulas [26]. We also show that no polynomial time algorithm using only membership queries or only equivalence queries can exactly identify all read-once formulas.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 1300 | test |
1-hop neighbor's text information: An empirical evaluation of bagging and boosting. : An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods using both neural networks and decision trees as our classification algorithms. Our results clearly show two important facts. The first is that even though Bagging almost always produces a better classifier than any of its individual component classifiers and is relatively impervious to overfitting, it does not generalize any better than a baseline neural-network ensemble method. The second is that Boosting is a powerful technique that can usually produce better ensembles than Bagging; however, it is more susceptible to noise and can quickly overfit a data set.
1-hop neighbor's text information: Pruning recurrent neural networks for improved generalization performance. : Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic which significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that if rules are extracted from networks trained to recognize these strings, then rules extracted after pruning are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state triple parity grammar. Further simulations indicate that this pruning method can give generalization performance superior to that obtained by training with weight decay.
1-hop neighbor's text information: The Sources of Increased Accuracy for Two Proposed Boosting Algorithms: We introduce two boosting algorithms that aim to increase the generalization accuracy of a given classifier by incorporating it as a level-0 component in a stacked generalizer. Both algorithms construct a complementary level-0 classifier that can only generate coarse hypotheses for the training data. We show that the two algorithms boost generalization accuracy on a representative collection of data sets. The two algorithms are distinguished in that one of them modifies the class targets of selected training instances in order to train the complementary classifier. We show that the two algorithms achieve approximately equal generalization accuracy, but that they create complementary classifiers that display different degrees of accuracy and diversity. Our study provides evidence that it may be useful to investigate families of boosting algorithms that incorporate varying levels of accuracy and diversity, so as to achieve an appropriate mix for a given task and domain.
Target text information: "A framework of combining symbolic and neural learning," : The primary goal of inductive learning is to generalize well that is, induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima. We report experiments on two real-world datasets where combinations of networks initialized with our method generalize better than combina tions of networks initialized the traditional way.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,591 | test |
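Every record in this dump has the same shape: a target abstract, the abstracts of its 1-hop citation neighbors, and a held-out category label drawn from the seven classes listed in the prompt. A useful point of reference for the classification question is the simplest graph baseline, predicting the majority label of a node's labeled neighbors. The sketch below illustrates that baseline; the adjacency and label dictionaries are invented toy values, not the actual Cora graph behind these node ids.

```python
from collections import Counter

# Toy stand-ins for the Cora graph: labels of training nodes and citation adjacency.
# The node ids echo ids seen in this dump, but the edges and labels here are made up.
train_labels = {2591: 1, 160: 6, 1603: 1}
neighbors = {2591: [1603, 160], 1603: [2591], 160: [2591]}

def neighbor_majority(node_id, default_label=1):
    """Predict the most frequent label among the labeled 1-hop neighbors of node_id."""
    labels = [train_labels[n] for n in neighbors.get(node_id, []) if n in train_labels]
    if not labels:
        return default_label            # no labeled neighbor: fall back to a default
    return Counter(labels).most_common(1)[0][0]

print(neighbor_majority(2591))          # -> 1 in this toy graph
```

Text-aware models (for example, classifying the target abstract directly and then smoothing over neighbors) usually beat this baseline, but it is a convenient sanity check for records like the one above.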
1-hop neighbor's text information: Using Neural Networks to Automatically Refine Expert System Knowledge Bases: Experiments in the NYNEX MAX Domain: In this paper we describe our study of applying knowledge-based neural networks to the problem of diagnosing faults in local telephone loops. Currently, NYNEX uses an expert system called MAX to aid human experts in diagnosing these faults; however, having an effective learning algorithm in place of MAX would allow easy portability between different maintenance centers, and easy updating when the phone equipment changes. We find that (i) machine learning algorithms have better accuracy than MAX, (ii) neural networks perform better than decision trees, (iii) neural network ensembles perform better than standard neural networks, (iv) knowledge-based neural networks perform better than standard neural networks, and (v) an ensemble of knowledge-based neural networks performs the best.
1-hop neighbor's text information: Actively searching for an effective neural-network ensemble. : A neural-network ensemble is a very successful technique where the outputs of a set of separately trained neural network are combined to form one unified prediction. An effective ensemble should consist of a set of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well; however, most existing techniques only indirectly address the problem of creating such a set. We present an algorithm called Addemup that uses genetic algorithms to explicitly search for a highly diverse set of accurate trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are highly accurate while disagreeing with each other as much as possible. Experiments on four real-world domains show that Addemup is able to generate a set of trained networks that is more accurate than several existing ensemble approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble.
Target text information: Learning from bad data. : The data describing resolutions to telephone network local loop "troubles," from which we wish to learn rules for dispatching technicians, are notoriously unreliable. Anecdotes abound detailing reasons why a resolution entered by a technician would not be valid, ranging from sympathy to fear to ignorance to negligence to management pressure. In this paper, we describe four different approaches to dealing with the problem of "bad" data in order first to determine whether machine learning has promise in this domain, and then to determine how well machine learning might perform. We then offer evidence that machine learning can help to build a dispatching method that will perform better than the system currently in place.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,603 | val |
1-hop neighbor's text information: Rationality and its Roles in Reasoning (extended version), : The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems.
1-hop neighbor's text information: Representing preferences as ceteris paribus comparatives. : Decision-theoretic preferences specify the relative desirability of all possible outcomes of alternative plans. In order to express general patterns of preference holding in a domain, we require a language that can refer directly to preferences over classes of outcomes as well as individuals. We present the basic concepts of a theory of meaning for such generic comparatives to facilitate their incremental capture and exploitation in automated reasoning systems. Our semantics lifts comparisons of individuals to comparisons of classes "other things being equal" by means of contextual equivalences, equivalence relations among individuals that vary with the context of application. We discuss implications of the theory for representing preference information.
1-hop neighbor's text information: Rational belief revision (preliminary report). : Theories of rational belief revision recently proposed by Gardenfors and Nebel illuminate many important issues but impose unnecessarily strong standards for correct revisions and make strong assumptions about what information is available to guide revisions. We reconstruct these theories according to an economic standard of rationality in which preferences are used to select among alternative possible revisions. By permitting multiple partial specifications of preferences in ways closely related to preference-based nonmonotonic logics, the reconstructed theory employs information closer to that available in practice and offers more flexible ways of selecting revisions. We formally compare this notion of rational belief revision with those of Gardenfors and Nebel, adapt results about universal default theories to prove that there is no universal method of rational belief revision, and examine formally how different limitations on rationality affect belief revision.
Target text information: Toward Rational Planning and Replanning: Rational Reason Maintenance, Reasoning Economies, and Qualitative Preferences: Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the subplans improves on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance, the standard method for belief revision, choose revisions arbitrarily and enforce global notions of consistency and groundedness, which may mean reconsidering all beliefs or plan elements at each step. To address these problems, we developed (1) revision methods aimed at revising only those beliefs and plans worth revising, and tolerating incoherence and ungroundedness when these are judged less detrimental than a costly revision effort, (2) an artificial market economy in planning and revision tasks for arriving at overall judgments of worth, and (3) a representation for qualitative preferences that permits capture of common forms of dominance information. We view the activities of intelligent agents as stemming from interleaved or simultaneous planning, replanning, execution, and observation subactivities. In this model of the plan construction process, the agents continually evaluate and revise their plans in light of what happens in the world. Planning is necessary for the organization of large-scale activities because decisions about actions to be taken in the future have direct impact on what should be done in the shorter term. But even if well-constructed, the value of a plan decays as changing circumstances, resources, information, or objectives render the original course of action inappropriate. When changes occur before or during execution of the plan, it may be necessary to construct a new plan by starting from scratch or by revising a previous plan, changing only the portions of the plan actually affected by the changes. Given the information accrued during plan execution, which remaining parts of the original plan should be salvaged and in what ways should other parts be changed? Incremental replanning first involves localizing the potential changes or conflicts by identifying the subset of the extant beliefs and plans in which they occur. It then involves choosing which of the identified beliefs and plans to keep and which to change. For greatest efficiency, the choices of what portion of the plan to revise and how to revise it should be based on coherent expectations about and preferences among the consequences of different alternatives so as to be rational in the sense of decision theory (Savage 1972). Our work toward mechanizing rational planning and replanning has focussed on four main issues. This paper focusses on the latter three issues; for our approach to the first, see (Doyle 1988; 1992). Replanning in an incremental and local manner requires that the planning procedures routinely identify the assumptions made during planning and connect plan elements with these assumptions, so that replanning may seek to change only those portions of a plan dependent upon assumptions brought into question by new information. Consequently, the problem of revising plans to account for changed conditions has much
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 160 | train |
1-hop neighbor's text information: MML mixture modelling of Multi-state, Poisson, von Mises circular and Gaussian distributions. : Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also consistent and efficient. We provide a brief overview of MML inductive inference (Wallace and Boulton (1968), Wallace and Freeman (1987)), and how it has both an information-theoretic and a Bayesian interpretation. We then outline how MML is used for statistical parameter estimation, and how the MML mixture modelling program, Snob (Wallace and Boulton (1968), Wallace (1986), Wallace and Dowe (1994)) uses the message lengths from various parameter estimates to enable it to combine parameter estimation with selection of the number of components. The message length is (to within a constant) the negative logarithm of the posterior probability of the theory. So, the MML theory can also be regarded as the theory with the highest posterior probability. Snob currently assumes that variables are uncorrelated, and permits multi-variate data from Gaussian, discrete multi-state, Poisson and von Mises circular distributions.
1-hop neighbor's text information: MML and Bayesianism: similarities and differences. : Tech Report 207 Department of Computer Science, Monash University, Clayton, Vic. 3168, Australia Abstract: This paper continues the introduction to minimum encoding inductive inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We describe the message length estimates used in Wallace's Minimum Message Length (MML) inference and Rissanen's Minimum Description Length (MDL) inference. The differences in the message length estimates of the two approaches are explained. The implications of these differences for applications are discussed.
Target text information: Single factor analysis by MML estimation. : The Minimum Message Length (MML) technique is applied to the problem of estimating the parameters of a multivariate Gaussian model in which the correlation structure is modelled by a single common factor. Implicit estimator equations are derived and compared with those obtained from a Maximum Likelihood (ML) analysis. Unlike ML, the MML estimators remain consistent when used to estimate both the factor loadings and factor scores. Tests on simulated data show the MML estimates to be on average more accurate than the ML estimates when the former exist. If the data show little evidence for a factor, the MML estimate collapses. It is shown that the condition for the existence of an MML estimate is essentially that the log likelihood ratio in favour of the factor model exceed the value expected under the null (no-factor) hypothesis.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 1,704 | test |
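The two MML records above both score a hypothesis by the length of a two-part message: the cost of stating the model and its parameters plus the cost of stating the data given them, with the shortest total message preferred. The sketch below illustrates that trade-off for a fair-versus-biased coin; the half-log-n parameter cost is a crude BIC-style stand-in chosen for brevity, not the MML87 formula that Snob actually uses.

```python
import math

def fair_coin_bits(heads, n):
    """Message length in bits for a fair coin: no parameters, one bit per flip."""
    return float(n)

def biased_coin_bits(heads, n):
    """Crude two-part code: ~0.5*log2(n) bits to state p, then the data given p."""
    p = min(max(heads / n, 1.0 / (2 * n)), 1.0 - 1.0 / (2 * n))   # clamp to avoid log(0)
    data_bits = -(heads * math.log2(p) + (n - heads) * math.log2(1.0 - p))
    return 0.5 * math.log2(n) + data_bits

heads, n = 82, 100
print(fair_coin_bits(heads, n))     # 100.0
print(biased_coin_bits(heads, n))   # ~71 bits: the extra parameter pays for itself here
```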
1-hop neighbor's text information: Markov chain Monte Carlo methods based on "slicing" the density function. : Technical Report No. 9722, Department of Statistics, University of Toronto Abstract. One way to sample from a distribution is to sample uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `slice' defined by the current vertical position. Variations on such `slice sampling' methods can easily be implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and may be more efficient than easily-constructed versions of the Metropolis algorithm. Slice sampling is therefore attractive in routine Markov chain Monte Carlo applications, and for use by software that automatically generates a Markov chain sampler from a model specification. One can also easily devise overrelaxed versions of slice sampling, which sometimes greatly improve sampling efficiency by suppressing random walk behaviour. Random walks can also be avoided in some slice sampling schemes that simultaneously update all variables.
1-hop neighbor's text information: (1995) "Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation", : Technical Report No. 9508, Department of Statistics, University of Toronto Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n 2 iterations. Such random walks can sometimes be suppressed using "overrelaxed" variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model.
Target text information: (1997) "Analysis of a non-reversible Markov chain sampler", : Technical Report BU-1385-M, Biometrics Unit, Cornell University Abstract We analyse the convergence to stationarity of a simple non-reversible Markov chain that serves as a model for several non-reversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that non-reversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalisation. We thank David Aldous, Martin Hildebrand, Brad Mann, and Laurent Saloff-Coste for their help.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 956 | val |
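The slice-sampling record above describes the update as a vertical uniform draw under the density followed by a horizontal uniform draw from the resulting slice. The sketch below is a minimal univariate version in which the stepping-out procedure of the full method is replaced by shrinkage from a fixed interval, which is only valid when that interval brackets the whole slice; the target density is an illustrative choice.

```python
import math
import random

def slice_sample(logf, x0, lower, upper, n_samples, seed=0):
    """Univariate slice sampler; [lower, upper] must bracket the support of f."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        # Vertical step: draw a height uniformly under f(x), kept on the log scale.
        log_y = logf(x) + math.log(rng.random())
        # Horizontal step: draw uniformly from the slice {x': logf(x') > log_y},
        # shrinking the bracketing interval towards x after each rejected point.
        lo, hi = lower, upper
        while True:
            x_new = rng.uniform(lo, hi)
            if logf(x_new) > log_y:
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        x = x_new
        samples.append(x)
    return samples

# Example: a standard normal log-density, restricted to [-10, 10].
draws = slice_sample(lambda x: -0.5 * x * x, x0=0.0, lower=-10.0, upper=10.0, n_samples=5000)
print(sum(draws) / len(draws))   # sample mean should be near 0
```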
1-hop neighbor's text information: Use of Mental Models for Constraining Index Learning in Experience-Based Design. : The power of the case-based method comes from the ability to retrieve the "right" case when a new problem is specified. This implies that learning the "right" indices to a case before storing it for potential reuse is crucial for the success of the method. A hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary, and learning the right level of generalization. In this paper we show how the use of structure-behavior-function (SBF) models constrains index learning in the context of experience-based design of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design, together with a specification of the task for which the design case might be reused, provides the vocabulary for indexing the design case in memory. We also discuss how the prior design experiences stored in case-memory help to determine the level of index generalization. The KRITIK2 system implements and evaluates the model-based method for learning indices to design cases.
1-hop neighbor's text information: Discovery of Physical Principles from Design Experiences. : One method for making analogies is to access and instantiate abstract domain principles, and one method for acquiring knowledge of abstract principles is to discover them from experience. We view generalization over experiences in the absence of any prior knowledge of the target principle as the task of hypothesis formation, a subtask of discovery. Also, we view the use of the hypothesized principles for analogical design as the task of hypothesis testing, another subtask of discovery. In this paper, we focus on discovery of physical principles by generalization over design experiences in the domain of physical devices. Some important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. We represent a reasoner's comprehension of specific designs in the form of structure-behavior-function (SBF) models. An SBF model provides a functional and causal explanation of the working of a device. We represent domain principles as device-independent behavior-function (BF) models. We show that (i) the function of a device determines what to generalize from its SBF model, (ii) the SBF model itself suggests how far to generalize, and (iii) the typology of functions indicates what method to use.
1-hop neighbor's text information: Functional representation as design rationale. : Design rationale is a record of design activity: of alternatives available, choices made, the reasons for them, and explanations of how a proposed design is intended to work. We describe a representation called the Functional Representation (FR) that has been used to represent how a device's functions arise causally from the functions of its components and their interconnections. We propose that FR can provide the basis for capturing the causal aspects of the design rationale. We briefly discuss the use of FR for a number of tasks in which we would expect the design rationale to be useful: generation of diagnostic knowledge, design verification and redesign.
Target text information: Model-Based Learning of Structural Indices to Design Cases. : A major issue in case-based systems is retrieving the appropriate cases from memory to solve a given problem. This implies that a case should be indexed appropriately when stored in memory. A case-based system, being dynamic in that it stores cases for reuse, needs to learn indices for the new knowledge as the system designers cannot envision that knowledge. Irrespective of the type of indexing (structural or functional), a hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary and learning the right level of generalization. In this paper we show how structure-behavior-function (SBF) models help in learning structural indices to design cases in the domain of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design provides both the vocabulary for structural indexing of design cases and the inductive biases for index generalization. We further discuss how model-based learning can be integrated with similarity-based learning (that uses prior design cases) for learning the level of index generalization.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 2,633 | test |
1-hop neighbor's text information: "A Survey of Evolutionary Strategies," :
1-hop neighbor's text information: A formal analysis of the role of multi--point crossover in genetic algorithms. : On the basis of early theoretical and empirical studies, genetic algorithms have typically used 1 and 2-point crossover operators as the standard mechanisms for implementing recombination. However, there have been a number of recent studies, primarily empirical in nature, which have shown the benefits of crossover operators involving a higher number of crossover points. From a traditional theoretical point of view, the most surprising of these new results relate to uniform crossover, which involves on the average L / 2 crossover points for strings of length L. In this paper we extend the existing theoretical results in an attempt to provide a broader explanatory and predictive theory of the role of multi-point crossover in genetic algorithms. In particular, we extend the traditional disruption analysis to include two general forms of multi-point crossover: n-point crossover and uniform crossover. We also analyze two other aspects of multi-point crossover operators, namely, their recombination potential and exploratory power. The results of this analysis provide a much clearer view of the role of multi-point crossover in genetic algorithms. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Target text information: A Genetic Algorithm Tutorial. Darrell Whitley, Technical Report CS-93-103 (Revised), November 10, 1993.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 877 | test |
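The three GA records above all refer to the same canonical loop: evaluate a population of bitstrings, select parents by fitness, recombine with crossover, and mutate. The sketch below is that loop on the OneMax problem (maximise the number of ones); the population size, rates, and tournament selection are illustrative choices rather than anything prescribed in those papers.

```python
import random

rng = random.Random(42)
L, POP, GENS, P_MUT = 32, 40, 60, 1.0 / 32

def fitness(bits):                       # OneMax: count the ones
    return sum(bits)

def tournament(pop):                     # keep the fitter of two random individuals
    a, b = rng.choice(pop), rng.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):                   # one-point crossover
    cut = rng.randrange(1, L)
    return p1[:cut] + p2[cut:]

def mutate(bits):                        # independent bit flips
    return [b ^ 1 if rng.random() < P_MUT else b for b in bits]

pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
print(max(fitness(ind) for ind in pop))  # typically close to 32
```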
1-hop neighbor's text information: Jordan (1996). A variational approach to Bayesian logistic regression problems and their extensions. : We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that accurate variational techniques can be used to obtain a closed form posterior distribution over the parameters given the data thereby yielding a posterior predictive model. The results are readily extended to (binary) belief networks. For belief networks we also derive closed form posteriors in the presence of missing values. Finally, we show that the dual of the regression problem gives a latent variable density model, the variational formulation of which leads to exactly solvable EM updates.
1-hop neighbor's text information: Mean field theory for sigmoid belief networks. : We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition|the classification of handwritten digits.
1-hop neighbor's text information: Exploiting tractable substructures in intractable networks. : We develop a refined mean field approximation for inference and learning in probabilistic neural networks. Our mean field theory, unlike most, does not assume that the units behave as independent degrees of freedom; instead, it exploits in a principled way the existence of large substructures that are computationally tractable. To illustrate the advantages of this framework, we show how to incorporate weak higher order interactions into a first-order hidden Markov model, treating the corrections (but not the first order structure) within mean field theory.
Target text information: Computing upper and lower bounds on likelihoods in intractable networks. : We present deterministic techniques for computing upper and lower bounds on marginal probabilities in sigmoid and noisy-OR networks. These techniques become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the bounds by numerical experiments.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 1,919 | test |
1-hop neighbor's text information: Continuous-valued Xof-N attributes versus nominal Xof-N attributes for constructive induction: a case study. : An Xof-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. In this paper, we explore the characteristics and performance of continuous-valued Xof-N attributes versus nominal Xof-N attributes for constructive induction. Nominal Xof-Ns are more representationally powerful than continuous-valued Xof-Ns, but the former suffer the "fragmentation" problem, although some mechanisms such as subsetting can help to solve the problem. Two approaches to constructive induction using continuous-valued Xof-Ns are described. Continuous-valued Xof-Ns perform better than nominal ones on domains that need Xof-Ns with only one cut point. On domains that need Xof-N representations with more than one cut point, nominal Xof-Ns perform better than continuous-valued ones. Experimental results on a set of artificial and real-world domains support these statements.
1-hop neighbor's text information: Constructing Conjunctions using Systematic Search on Decision Trees: This paper investigates a dynamic path-based method for constructing conjunctions as new attributes for decision tree learning. It searches for conditions (attribute-value pairs) from paths to form new attributes. Compared with other hypothesis-driven new attribute construction methods, the new idea of this method is that it carries out systematic search with pruning over each path of a tree to select conditions for generating a conjunction. Therefore, conditions for constructing new attributes are dynamically decided during search. Empirical evaluation in a set of artificial and real-world domains shows that the dynamic path-based method can improve the performance of selective decision tree learning in terms of both higher prediction accuracy and lower theory complexity. In addition, it shows some performance advantages over a fixed path-based method and a fixed rule-based method for learning decision trees.
1-hop neighbor's text information: Constructing nominal X-of-N attributes. : Most constructive induction researchers focus only on new boolean attributes. This paper reports a new constructive induction algorithm, called XofN, that constructs new nominal attributes in the form of Xof-N representations. An Xof-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. The promising preliminary experimental results, on both artificial and real-world domains, show that constructing new nominal attributes in the form of Xof-N representations can significantly improve the performance of selective induction in terms of both higher prediction accuracy and lower theory complexity.
Target text information: Constructing conjunctive tests for decision trees. : This paper discusses an approach of constructing new attributes based on decision trees and production rules. It can improve the concepts learned in the form of decision trees by simplifying them and improving their predictive accuracy. In addition, this approach can distinguish relevant primitive attributes from irrelevant primitive attributes.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 1,884 | val |
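The Xof-N records above define the value of an X-of-N attribute, for a given instance, as the number of its attribute-value pairs that hold; thresholding that count gives either a continuous-valued or a nominal test. The sketch below simply computes the count; the attribute names are hypothetical.

```python
def x_of_n_value(instance, attribute_value_pairs):
    """Number of the given attribute-value pairs that are true of the instance."""
    return sum(1 for attr, val in attribute_value_pairs if instance.get(attr) == val)

x_of_n = [("outlook", "sunny"), ("humidity", "high"), ("windy", "true")]
instance = {"outlook": "sunny", "humidity": "normal", "windy": "true"}
value = x_of_n_value(instance, x_of_n)
print(value)             # -> 2 of the 3 pairs hold for this instance
print(value >= 2)        # a continuous-valued Xof-N used with a single cut point
```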
1-hop neighbor's text information: "Learning robot behaviors using genetic algorithms," : Genetic Algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the The approach to learning behaviors for robots described here reflects a particular methodology for learning via a simulation model. The motivation is that making mistakes on real systems may be costly or dangerous. In addition, time constraints might limit the number of experiences during learning in the real world, while in many cases, the simulation model can be made to run faster than real time. Since learning may require experimenting with behaviors that might occasionally produce unacceptable results if applied to the real world, or might require too much time in the real environment, we assume that hypothetical behaviors will be evaluated in a simulation model (the off-line system). As illustrated in Figure 1, the current best behavior can be placed in the real, on-line system, while learning continues in the off-line system [1]. The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. The expectation is that behaviors learned in these simulations will be useful in real-world environments. Previous studies have illustrated that knowledge learned under simulation is robust and might be applicable to the real world if the simulation is more general (i.e. has more noise, more varied conditions, etc.) than the real world environment [2]. Where this is not possible, it is important to identify the differences between the simulation and the world and note the effect upon the learning process. The research reported here continues to examine this hypothesis. The next section very briefly explains the learning algorithm (and gives pointers to where more extensive documentation can be found). After that, the actual robot is described. Then we describe the simulation of the robot. The task _______________ actual robot.
1-hop neighbor's text information: "Learning sequential decision rules using simulation models and competition," : The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical decision rules from a simple flight simulator. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Several experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested.
1-hop neighbor's text information: Explanations of empirically derived reactive plans. : Given an adequate simulation model of the task environment and payoff function that measures the quality of partially successful plans, competition-based heuristics such as genetic algorithms can develop high performance reactive rules for interesting sequential decision tasks. We have previously described an implemented system, called SAMUEL, for learning reactive plans and have shown that the system can successfully learn rules for a laboratory scale tactical problem. In this paper, we describe a method for deriving explanations to justify the success of such empirically derived rule sets. The method consists of inferring plausible subgoals and then explaining how the reactive rules trigger a sequence of actions (i.e., a strategy) to satisfy the subgoals.
Target text information: An enhancer for reactive plans. : This paper describes our method for improving the comprehensibility, accuracy, and generality of reactive plans. A reactive plan is a set of reactive rules. Our method involves two phases: (1) formulate explanations of execution traces, and then (2) generate new reactive rules from the explanations. Since the explanation phase has been previously described, the primary focus of this paper is the rule generation phase. This latter phase consists of taking a subset of the explanations and using these explanations to generate a set of new reactive rules to add to the original set. The particular subset of the explanations that is chosen yields rules that provide new domain knowledge for handling knowledge gaps in the original rule set. The original rule set, in a complementary manner, provides expertise to fill the gaps where the domain knowledge provided by the new rules is incomplete.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 2,200 | test |
1-hop neighbor's text information: "Using DNA to solve NP-Complete Problems", : A strategy for using Genetic Algorithms (GAs) to solve NP-complete problems is presented. The key aspect of the approach taken is to exploit the observation that, although all NP-complete problems are equally difficult in a general computational sense, some have much better GA representations than others, leading to much more successful use of GAs on some NP-complete problems than on others. Since any NP-complete problem can be mapped into any other one in polynomial time, the strategy described here consists of identifying a canonical NP-complete problem on which GAs work well, and solving other NP-complete problems indirectly by mapping them onto the canonical problem. Initial empirical results are presented which support the claim that the Boolean Satisfiability Problem (SAT) is a GA-effective canonical problem, and that other NP-complete problems with poor GA representations can be solved efficiently by mapping them first onto SAT problems.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Target text information: HGA: A Hardware-Based Genetic Algorithm: This paper presents the HGA, a genetic algorithm written in VHDL and intended for a hardware implementation. Due to pipelining, parallelization, and no function call overhead, a hardware GA yields a significant speedup over a software GA, which is especially useful when the GA is used for real-time applications, e.g. disk scheduling and image registration. Since a general-purpose GA requires that the fitness function be easily changed, the hardware implementation must exploit the reprogrammability of certain types of field-programmable gate arrays (FPGAs), which are programmed via a bit pattern stored in a static RAM and are thus easily reconfigured. After presenting some background on VHDL, this paper takes the reader through the HGA's code. We then describe some applications of the HGA that are feasible given the state-of-the-art in FPGA technology and summarize some possible extensions of the design. Finally, we review some other work in hardware-based GAs.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 98 | test |
1-hop neighbor's text information: LEARNING FOR DECISION MAKING: The FRD Approach and a Comparative Study. Machine Learning and Inference Laboratory: This paper concerns the issue of what is the best form for learning, representing and using knowledge for decision making. The proposed answer is that such knowledge should be learned and represented in a declarative form. When needed for decision making, it should be efficiently transferred to a procedural form that is tailored to the specific decision making situation. Such an approach combines advantages of the declarative representation, which facilitates learning and incremental knowledge modification, and the procedural representation, which facilitates the use of knowledge for decision making. This approach also allows one to determine decision structures that may avoid attributes that are unavailable or difficult to measure in any given situation. Experimental investigations of the system, FRD-1, have demonstrated that decision structures obtained via the declarative route often have not only higher predictive accuracy but are also simpler than those learned directly from facts.
1-hop neighbor's text information: Pessimistic Decision Tree Pruning Based on Tree Size. : In this work we develop a new criterion to perform pessimistic decision tree pruning. Our method is theoretically sound and is based on theoretical concepts such as uniform convergence and the Vapnik-Chervonenkis dimension. We show that our criterion is very well motivated, from the theory side, and performs very well in practice. The accuracy of the new criterion is comparable to that of the current method used in C4.5.
1-hop neighbor's text information: Top-down pruning in relational learning. : Pruning is an effective method for dealing with noise in Machine Learning. Recently pruning algorithms, in particular Reduced Error Pruning, have also attracted interest in the field of Inductive Logic Programming. However, it has been shown that these methods can be very inefficient, because most of the time is wasted for generating clauses that explain noisy examples and subsequently pruning these clauses. We introduce a new method which searches for good theories in a top-down fashion to get a better starting point for the pruning algorithm. Experiments show that this approach can significantly lower the complexity of the task as well as increase predictive accuracy.
Target text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 2,559 | train |
1-hop neighbor's text information: Symbolic representation of neural networks. : An early and shorter version of this paper has been accepted for presentation at IJCAI'95.
Target text information: Evaluation and Ordering of Rules Extracted from Feedforward Networks: Rules extracted from trained feedforward networks can be used for explanation, validation, and cross-referencing of network output decisions. This paper introduces a rule evaluation and ordering mechanism that orders rules extracted from feedforward networks based on three performance measures. Detailed experiments using three rule extraction techniques as applied to the Wisconsin breast cancer database, illustrate the power of the proposed methods. Moreover, a method of integrating the output decisions of both the extracted rule-based system and the corresponding trained network is proposed. The integrated system provides further improvements.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 253 | val |
1-hop neighbor's text information: Improving the quality of automatic DNA sequence assembly using fluorescent trace data classifications. In States, D.J., : Virtually all large-scale sequencing projects use automatic sequence-assembly programs to aid in the determination of DNA sequences. The computer-generated assemblies require substantial hand editing to transform them into submissions for GenBank. As the size of sequencing projects increases, it becomes essential to improve the quality of the automated assemblies so that this time-consuming hand editing may be reduced. Current ABI sequencing technology uses base calls made from fluorescently-labeled DNA fragments run on gels. We present a new representation for the fluorescent trace data associated with individual base calls. This representation can be used before, during, and after fragment assembly to improve the quality of assemblies. We demonstrate one such use, end-trimming of suboptimal data, which results in a significant improvement in the quality of subsequent assemblies.
Target text information: Increasing Consensus Accuracy in DNA Fragment Assemblies by Incorporating Fluorescent Trace Representations: We present a new method for determining the consensus sequence in DNA fragment assemblies. The new method, Trace-Evidence, directly incorporates aligned ABI trace information into consensus calculations via our previously described representation, TraceData Classifications. The new method extracts and sums evidence indicated by the representation to determine consensus calls. Using the Trace-Evidence method results in automatically produced consensus sequences that are more accurate and less ambiguous than those produced with standard majority-voting methods. Additionally, these improvements are achieved with less coverage than required by the standard methods. Using Trace-Evidence and a coverage of only three, error rates are as low as those with a coverage of over ten sequences.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 103 | test |
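The two fragment-assembly records above contrast standard majority voting over aligned base calls with the Trace-Evidence method, which weighs the underlying trace data. The sketch below shows only the majority-voting baseline they compare against, applied column by column to an alignment; the gap and tie handling here are simplifying assumptions.

```python
from collections import Counter

def majority_consensus(aligned_reads, min_votes=1):
    """Per-column majority vote over aligned reads; '-' marks a gap, 'N' an ambiguous call."""
    consensus = []
    for column in zip(*aligned_reads):
        votes = Counter(base for base in column if base != "-")
        if not votes:
            consensus.append("-")
            continue
        base, count = votes.most_common(1)[0]
        consensus.append(base if count >= min_votes else "N")
    return "".join(consensus)

reads = ["ACGTAC-T",
         "ACGTACGT",
         "ACCTACGT"]
print(majority_consensus(reads))   # -> "ACGTACGT"
```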
1-hop neighbor's text information: Four Challenges for a Computational Model of Legal Precedent: Identifying the open research issues in a field is a necessary step for progress in that field. This paper describes four open research problems in computational models of precedent-based legal reasoning: relating case representation to precedent use; modeling the selection and construction of both arguments based on pairwise case comparison and multiple-precedent arguments; modeling the process whereby purposes, policies, and principles are used in case similarity assessment; and extending the applicability of precedents to tasks other than classification.
1-hop neighbor's text information: "Concept learning and Heuristic Classification in Weak-Theory Domains," :
Target text information: Reasoning with portions of precedents. : This paper argues that the task of matching in case-based reasoning can often be improved by comparing new cases to portions of precedents. An example is presented that illustrates how combining portions of multiple precedents can permit new cases to be resolved that would be indeterminate if new cases could only be compared to entire precedents. A system that uses portions of precedents for legal analysis in the domain of Texas worker's compensation law, GREBE, is described, and examples of GREBE's analysis that combine reasoning steps from multiple precedents are presented.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 1,445 | test |
1-hop neighbor's text information: Island Model Genetic Algorithms and Linearly Separable Problems: Parallel Genetic Algorithms have often been reported to yield better performance than Genetic Algorithms which use a single large panmictic population. In the case of the Island Model Genetic Algorithm, it has been informally argued that having multiple subpopulations helps to preserve genetic diversity, since each island can potentially follow a different search trajectory through the search space. On the other hand, linearly separable functions have often been used to test Island Model Genetic Algorithms; it is possible that Island Models are particularly well suited to separable problems. We look at how Island Models can track multiple search trajectories using the infinite population models of the simple genetic algorithm. We also introduce a simple model for better understanding when Island Model Genetic Algorithms may have an advantage when processing linearly separable problems.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Target text information: Modeling Hybrid Genetic Algorithms. : An exact model of a simple genetic algorithm is developed for permutation based representations. Permutation based representations are used for scheduling problems and combinatorial problems such as the Traveling Salesman Problem. A remapping function is developed to remap the model to all permutations in the search space. The mixing matrices for various permutation based operators are also developed.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 1,443 | test |
1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator.
1-hop neighbor's text information: Irrelevant features and the subset selection problem. : We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.
Target text information: Expectation-Based Selective Attention. : Reliable vision-based control of an autonomous vehicle requires the ability to focus attention on the important features in an input scene. Previous work with an autonomous lane following system, ALVINN [Pomerleau, 1993], has yielded good results in uncluttered conditions. This paper presents an artificial neural network based learning approach for handling difficult scenes which will confuse the ALVINN system. This work presents a mechanism for achieving task-specific focus of attention by exploiting temporal coherence. A saliency map, which is based upon a computed expectation of the contents of the inputs in the next time step, indicates which regions of the input retina are important for performing the task. The saliency map can be used to accentuate the features which are important for the task, and de-emphasize those which are not.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,648 | val |
1-hop neighbor's text information: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. : The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be accurately controlled by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.
1-hop neighbor's text information: A neural model of the cortical representation of egocentric distance. :
Target text information: Interpreting neuronal population activity by reconstruction: A unified framework with application to hippocampal place cells: Physical variables such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population, and second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods as special cases, such as population vector coding, optimal linear estimation and template matching. As a representative example for the reconstruction problem, different methods were applied to multi-electrode spike train data from hippocampal place cells in freely moving rats. The reconstruction accuracy of the trajectories of the rats was compared for the different methods. Bayesian methods were especially accurate when a continuity constraint was enforced, and the best errors were within a factor of two of the information-theoretic limit on how accurate any reconstruction can be, which were comparable with the intrinsic experimental errors in position tracking. In addition, the reconstruction analysis uncovered some interesting aspects of place cell activity, such as the tendency for erratic jumps of the reconstructed trajectory when the animal stopped running. In general, the theoretical values of the minimal achievable reconstruction errors quantify how accurately a physical variable is encoded in the neuronal population in the sense of mean square error, regardless of the method used for reading out the information. One related result is that the theoretical accuracy is independent of the width of the Gaussian tuning function only in two dimensions. Finally, all the reconstruction methods considered in this paper can be implemented by a unified neural network architecture, which the brain could feasibly use to solve related problems.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 586 | test |
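To make the Bayesian reconstruction idea in the target abstract above concrete, here is a minimal sketch (not the authors' code): it assumes independent Poisson spike counts, made-up Gaussian tuning curves on a 1-D track, and a flat prior over position; all function names, parameters, and the demo data are illustrative and require numpy.

```python
import numpy as np

def gaussian_tuning(x, centers, peak=15.0, width=0.05):
    """Hypothetical Gaussian tuning curves: firing rate (Hz) of each cell at positions x."""
    return peak * np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

def bayesian_decode(counts, candidate_positions, centers, dt):
    """P(x | n) ∝ P(n | x): independent Poisson counts with mean tuning(x) * dt, flat prior over x."""
    lam = gaussian_tuning(candidate_positions, centers) * dt   # (n_positions, n_cells)
    lam = np.maximum(lam, 1e-12)                               # avoid log(0)
    log_post = counts @ np.log(lam).T - lam.sum(axis=1)        # Poisson log-likelihood up to a constant
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return candidate_positions[np.argmax(post)], post

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 30)            # 30 simulated place cells on a 1-D track
grid = np.linspace(0.0, 1.0, 200)              # candidate positions
true_x, dt = 0.4, 0.25                         # 250 ms decoding window
counts = rng.poisson(gaussian_tuning(np.array([true_x]), centers)[0] * dt)
x_hat, _ = bayesian_decode(counts, grid, centers, dt)
print(f"true x = {true_x:.2f}, decoded x = {x_hat:.2f}")
```

The decoded position is simply the maximizer of the posterior; a continuity constraint of the kind discussed in the abstract would replace the flat prior with one centred on the previous estimate.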
1-hop neighbor's text information: (1998) A new sequential simulated annealing method. : Let H be a function not explicitly defined, but approximable by a sequence (H_n)_{n≥0} of functional estimators. In this context we propose a new sequential algorithm to asymptotically optimise H using stepwise estimators H_n. We prove under mild conditions the almost sure convergence in law of this algorithm.
Target text information: A SEQUENTIAL METROPOLIS-HASTINGS ALGORITHM: This paper deals with the asymptotic properties of the Metropolis-Hastings algorithm, when the distribution of interest is unknown, but can be approximated by a sequential estimator of its density. We prove that, under very simple conditions, the rate of convergence of the Metropolis-Hastings algorithm is the same as that of the sequential estimator when the latter is introduced as the reversible measure for the Metropolis-Hastings kernel. This problem is a natural extension of previous work on a new simulated annealing algorithm with a sequential estimator of the energy.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 627 | test |
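For readers unfamiliar with the Metropolis-Hastings algorithm discussed in this record, the following is a minimal generic random-walk sampler, not the sequential variant analysed in the paper; the only nod to the sequential setting is that `log_density` may be any callable, including a density estimate that is refreshed as more data arrive. All names and the toy target are ours.

```python
import math
import random

def metropolis_hastings(log_density, x0, n_steps, step=0.5, rng=random):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.
    `log_density` may be an exact log target or an estimated approximation of it."""
    x, logp = x0, log_density(x0)
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)                 # symmetric proposal
        logq = log_density(y)
        if math.log(rng.random()) < logq - logp:     # accept with prob min(1, p(y)/p(x))
            x, logp = y, logq
        samples.append(x)
    return samples

# Toy target: standard normal (log density up to an additive constant).
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
print("empirical mean ≈", sum(draws) / len(draws))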
1-hop neighbor's text information: Kritik: An early case-based design system. In Maher, M.L. & Pu, : In the late 1980s, we developed one of the early case-based design systems called Kritik. Kritik autonomously generated preliminary (conceptual, qualitative) designs for physical devices by retrieving and adapting past designs stored in its case memory. Each case in the system had an associated structure-behavior-function (SBF) device model that explained how the structure of the device accomplished its functions. These case-specific device models guided the process of modifying a past design to meet the functional specification of a new design problem. The device models also enabled verification of the design modifications. Kritik2 is a new and more complete implementation of Kritik. In this paper, we take a retrospective view on Kritik. In early papers, we had described Kritik as integrating case-based and model-based reasoning. In this integration, Kritik also grounds the computational process of case-based reasoning in the SBF content theory of device comprehension. The SBF models not only provide methods for many specific tasks in case-based design such as design adaptation and verification, but they also provide the vocabulary for the whole process of case-based design, from retrieval of old cases to storage of new ones. This grounding, we believe, is essential for building well-constrained theories of case-based design.
1-hop neighbor's text information: (1992) Generic Teleological Mechanisms and their Use in Case Adaptation, : In experience-based (or case-based) reasoning, new problems are solved by retrieving and adapting the solutions to similar problems encountered in the past. An important issue in experience-based reasoning is to identify different types of knowledge and reasoning useful for different classes of case-adaptation tasks. In this paper, we examine a class of non-routine case-adaptation tasks that involve patterned insertions of new elements in old solutions. We describe a model-based method for solving this task in the context of the design of physical devices. The method uses knowledge of generic teleological mechanisms (GTMs) such as cascading. Old designs are adapted to meet new functional specifications by accessing and instantiating the appropriate GTM. The Kritik2 system evaluates the computational feasibility and sufficiency of this method for design adaptation.
1-hop neighbor's text information: METHOD-SPECIFIC KNOWLEDGE COMPILATION: TOWARDS PRACTICAL DESIGN SUPPORT SYSTEMS: Modern knowledge systems for design typically employ multiple problem-solving methods which in turn use different kinds of knowledge. The construction of a heterogeneous knowledge system that can support practical design thus raises two fundamental questions: how to accumulate huge volumes of design information, and how to support heterogeneous design processing? Fortunately, partial answers to both questions exist separately. Legacy databases already contain huge amounts of general-purpose design information. In addition, modern knowledge systems typically characterize the kinds of knowledge needed by specific problem-solving methods quite precisely. This leads us to hypothesize method-specific data-to-knowledge compilation as a potential mechanism for integrating heterogeneous knowledge systems and legacy databases for design. In this paper, first we outline a general computational architecture called HIPED for this integration. Then, we focus on the specific issue of how to convert data accessed from a legacy database into a form appropriate to the problem-solving method used in a heterogeneous knowledge system. We describe an experiment in which a legacy knowledge system called Interactive Kritik is integrated with an ORACLE database using IDI as the communication tool. The limited experiment indicates the computational feasibility of method-specific data-to-knowledge compilation, but also raises additional research issues.
Target text information: A Model-Based Approach to Blame Assignment in Design. : We analyze the blame-assignment task in the context of experience-based design and redesign of physical devices. We identify three types of blame-assignment tasks that differ in the types of information they take as input: the design does not achieve a desired behavior of the device, the design results in an undesirable behavior, a specific structural element in the design misbehaves. We then describe a model-based approach for solving the blame-assignment task. This approach uses structure-behavior-function models that capture a designer's comprehension of the way a device works in terms of causal explanations of how its structure results in its behaviors. We also address the issue of indexing the models in memory. We discuss how the three types of blame-assignment tasks require different types of indices for accessing the models. Finally we describe the KRITIK2 system that implements and evaluates this model-based approach to blame assignment.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 1,696 | train |
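The retrieve-adapt-verify-store loop underlying the case-based design work in this record can be sketched generically. This is not Kritik (which indexes and adapts cases via SBF device models); it is a toy skeleton with hypothetical data structures, a naive key-overlap similarity, and caller-supplied adaptation and verification functions.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A stored design case: a problem specification and its solution (purely illustrative)."""
    spec: dict
    solution: dict

@dataclass
class CaseBase:
    cases: list = field(default_factory=list)

    def retrieve(self, spec):
        # Nearest-neighbour retrieval over shared specification keys (toy similarity).
        def similarity(case):
            keys = set(spec) & set(case.spec)
            return sum(spec[k] == case.spec[k] for k in keys)
        return max(self.cases, key=similarity) if self.cases else None

    def solve(self, spec, adapt, verify):
        old = self.retrieve(spec)
        candidate = adapt(old, spec) if old else None
        if candidate is not None and verify(candidate, spec):
            self.store(Case(spec, candidate))        # remember the new experience
            return candidate
        return None

    def store(self, case):
        self.cases.append(case)

# Toy usage with trivial adaptation and verification functions.
cb = CaseBase([Case({"load": 10, "material": "steel"}, {"beam_depth": 5})])
new = cb.solve(
    {"load": 12, "material": "steel"},
    adapt=lambda case, spec: {"beam_depth": case.solution["beam_depth"] * spec["load"] / case.spec["load"]},
    verify=lambda sol, spec: sol["beam_depth"] > 0,
)
print(new)
```

In Kritik-style systems the adapt and verify steps would be driven by the device models rather than by ad hoc lambdas; the skeleton only shows where such knowledge plugs in.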
1-hop neighbor's text information: "The Evolution of Agents that Build Mental Models and Create Simple Plans Using Genetic Programming," : An essential component of an intelligent agent is the ability to notice, encode, store, and utilize information about its environment. Traditional approaches to program induction have focused on evolving functional or reactive programs. This paper presents MAPMAKER, an approach to the automatic generation of agents that discover information about their environment, encode this information for later use, and create simple plans utilizing the stored mental models. In this approach, agents are multipart computer programs that communicate through a shared memory. Both the programs and the representation scheme are evolved using genetic programming. An illustrative problem of 'gold' collection is used to demonstrate the approach in which one part of a program makes a map of the world and stores it in memory, and the other part uses this map to find the gold The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans. 1. Introduction
1-hop neighbor's text information: Context preserving crossover in genetic programming. : This paper introduces two new crossover operators for Genetic Programming (GP). Contrary to the regular GP crossover, the operators presented attempt to preserve the context in which subtrees appeared in the parent trees. A simple coordinate scheme for nodes in an S-expression tree is proposed, and crossovers are only allowed between nodes with exactly or partially matching coordinates.
1-hop neighbor's text information: Data Structures and Genetic Programming, : It is established good software engineering practice to ensure that programs use memory via abstract data structures such as stacks, queues and lists. These provide an interface between the program and memory, freeing the program of memory management details which are left to the data structures to implement. The main result presented herein is that GP can automatically generate stacks and queues. Typically abstract data structures support multiple operations, such as put and get. We show that GP can simultaneously evolve all the operations of a data structure by implementing each such operation with its own independent program tree. That is, the chromosome consists of a fixed number of independent program trees. Moreover, crossover only mixes genetic material of program trees that implement the same operation. Program trees interact with each other only via shared memory and shared "Automatically Defined Functions" (ADFs).
Target text information: A bibliography for genetic programming. : In real world applications, software engineers recognise that the use of memory must be organised via data structures and that software using the data must be independent of the data structures' implementation details. They achieve this by using abstract data structures, such as records, files and buffers. We demonstrate that genetic programming can automatically implement simple abstract data structures, considering in detail the task of evolving a list. We show general and reasonably efficient implementations can be automatically generated from simple primitives. A model for maintaining evolved code is demonstrated using the list problem. Much published work on genetic programming (GP) evolves functions without side-effects to learn patterns in test data. In contrast human written programs often make extensive and explicit use of memory. Indeed memory in some form is required for a programming system to be Turing Complete, i.e. for it to be possible to write any (computable) program in that system. However inclusion of memory can make the interactions between parts of programs much more complex and so make it harder to produce programs. Despite this it has been shown GP can automatically create programs which explicitly use memory [Teller 1994]. In both normal and genetic programming considerable benefits have been found in adopting a structured approach. For example [Koza 1994] shows the introduction of evolvable code modules (automatically defined functions, ADFs) can greatly help GP to reach a solution. We suggest that a corresponding structured approach to use of data will similarly have significant advantage to GP. Earlier work has demonstrated that genetic programming can automatically generate simple abstract data structures, namely stacks and queues [Langdon 1995a]. That is, GP can evolve programs that organise memory (accessed via simple read and write primitives) into data structures which can be used by external software without it needing to know how they are implemented. This chapter shows it is possible to evolve a list data structure from basic primitives. [Aho, Hopcroft and Ullman 1987] suggest three different ways to implement a list but these experiments show GP can evolve its own implementation. This requires all the list components to agree on one implementation as they co-evolve together. Section 20.3 describes the GP architecture, including use of Pareto multiple component fitness scoring (20.3.4) and measures aimed at speeding the GP search (20.3.5). The evolved solutions are described in Section 20.4. Section 20.5 presents a candidate model for maintaining evolved software. This is followed by a discussion of what we have learned (20.6) and conclusions that can be drawn (20.7).
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 2,101 | test |
1-hop neighbor's text information: Proben1: A set of neural network benchmark problems and benchmarking rules. : Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison.
1-hop neighbor's text information: Fast pruning using principal components. : We present a new algorithm for eliminating excess parameters and improving network generalization after supervised training. The method, "Principal Components Pruning (PCP)", is based on principal component analysis of the node activations of successive layers of the network. It is simple, cheap to implement, and effective. It requires no network retraining, and does not involve calculating the full Hessian of the cost function. Only the weight and the node activity correlation matrices for each layer of nodes are required. We demonstrate the efficacy of the method on a regression problem using polynomial basis functions, and on an economic time series prediction problem using a two-layer, feedforward network.
1-hop neighbor's text information: Asymptotic statistical theory of overtraining and cross-validation. : A statistical theory for overtraining is proposed. The analysis treats realizable stochastic neural networks, trained with Kullback-Leibler loss in the asymptotic case. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validation stopping we answer the question: In what ratio the examples should be divided into training and testing sets in order to obtain the optimum performance. In the non-asymptotic region cross-validated early stopping always decreases the generalization error. Our large scale simulations done on a CM5 are in nice agreement with our analytical findings.
Target text information: Early stopping - but when? In Orr and Muller [1]. : Validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped before convergence to avoid the overfitting ("early stopping"). The exact criterion used for validation-based early stopping, however, is usually chosen in an ad-hoc fashion or training is stopped interactively. This trick describes how to select a stopping criterion in a systematic fashion; it is a trick for either speeding learning procedures or improving generalization, whichever is more important in the particular situation. An empirical investigation on multi-layer perceptrons shows that there exists a tradeoff between training time and generalization: From the given mix of 1296 training runs using 12 different problems and 24 different network architectures I conclude that slower stopping criteria allow for small improvements in generalization (here: about 4% on average), but cost much more training time (here: about a factor of 4 longer on average).
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,199 | val |
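A minimal sketch of a validation-based stopping rule in the spirit of the criteria compared in the target abstract: the "generalization loss" quantity follows the general form discussed in that line of work, but the thresholds, defaults, and function names here are illustrative, not the paper's recommendations.

```python
def generalization_loss(val_errors):
    """Relative increase (in percent) of the current validation error over the
    best validation error seen so far."""
    best = min(val_errors)
    return 100.0 * (val_errors[-1] / best - 1.0)

def should_stop(val_errors, alpha=5.0, patience=5):
    """Stop when the generalization loss exceeds `alpha` percent, or when the
    best validation error was observed more than `patience` checks ago.
    (Both thresholds are illustrative defaults.)"""
    if len(val_errors) < 2:
        return False
    if generalization_loss(val_errors) > alpha:
        return True
    best_index = val_errors.index(min(val_errors))
    return len(val_errors) - 1 - best_index >= patience

# Made-up validation error history, checked after each training "strip".
history = [0.90, 0.55, 0.41, 0.38, 0.37, 0.375, 0.39, 0.41]
for t in range(1, len(history) + 1):
    if should_stop(history[:t]):
        print("stop after validation check", t)
        break
```

Slower (more tolerant) settings of `alpha` and `patience` correspond to the slower stopping criteria in the abstract's trade-off: slightly better generalization at a much higher training cost.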
1-hop neighbor's text information: Rule induction with CN2: some recent improvements. : The CN2 algorithm induces an ordered list of classification rules from examples using entropy as its search heuristic. In this short paper, we describe two improvements to this algorithm. Firstly, we present the use of the Laplacian error estimate as an alternative evaluation function and secondly, we show how unordered as well as ordered rules can be generated. We experimentally demonstrate significantly improved performances resulting from these changes, thus enhancing the usefulness of CN2 as an inductive tool. Comparisons with Quinlan's C4.5 are also made.
1-hop neighbor's text information: Constructive induction using a non-greedy strategy for feature selection. : We present a method for feature construction and selection that finds a minimal set of conjunctive features that are appropriate to perform the classification task. For problems where this bias is appropriate, the method outperforms other constructive induction algorithms and is able to achieve higher classification accuracy. The application of the method in the search for minimal multi-level boolean expressions is presented and analyzed with the help of some examples.
1-hop neighbor's text information: A hypothesis-driven constructive induction approach to expanding neural networks. : With most machine learning methods, if the given knowledge representation space is inadequate then the learning process will fail. This is also true with methods using neural networks as the form of the representation space. To overcome this limitation, an automatic construction method for a neural network is proposed. This paper describes the BP-HCI method for hypothesis-driven constructive induction in a neural network trained by the backpropagation algorithm. The method searches for a better representation space by analyzing the hypotheses generated in each step of an iterative learning process. The method was applied to ten problems, which include, in particular, exclusive-or, MONK2, parity-6BIT and inverse parity-6BIT problems. All problems were successfully solved with the same initial set of parameters; the extension of the representation space was no larger than necessary for each problem.
Target text information: What do Constructive Learners Really Learn?: In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This may be necessary if the initial representation is inadequate or inappropriate. However, the distinction between constructive and non-constructive methods appears to be highly ambiguous. Several conventional definitions of the process of constructive induction appear to include all conceivable learning processes. In this paper I argue that the process of constructive learning should be identified with that of relational learning (i.e., I suggest that
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 403 | test |
1-hop neighbor's text information: Parallel gradient distribution in unconstrained optimization. : A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to use simultaneously a different algorithm, such as a descent, Newton, quasi-Newton or a conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on a portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the k points found by the k processors. For nonconvex, as well as convex, differentiable functions, the best point found by the k processors is taken, or any better point. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary for the nonconvex case, and is a global solution for the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicates a speedup of the order of the number of processors employed.
1-hop neighbor's text information: Error-stability properties of generalized gradient-type algorithms. : We present a unified framework for convergence analysis of the generalized subgradient-type algorithms in the presence of perturbations. One of the principal novel features of our analysis is that perturbations need not tend to zero in the limit. It is established that the iterates of the algorithms are attracted, in a certain sense, to an ε-stationary set of the problem, where ε depends on the magnitude of perturbations. Characterization of those attraction sets is given in the general (nonsmooth and nonconvex) case. The results are further strengthened for convex, weakly sharp and strongly convex problems. Our analysis extends and unifies previously known results on convergence and stability properties of gradient and subgradient methods, including their incremental, parallel and "heavy ball" modifications.
Target text information: New inexact parallel variable distribution algorithms. : We consider the recently proposed parallel variable distribution (PVD) algorithm of Ferris and Mangasarian [4] for solving optimization problems in which the variables are distributed among p processors. Each processor has the primary responsibility for updating its block of variables while allowing the remaining "secondary" variables to change in a restricted fashion along some easily computable directions. We propose useful generalizations that consist, for the general unconstrained case, of replacing exact global solution of the subproblems by a certain natural sufficient descent condition, and, for the convex case, of inexact subproblem solution in the PVD algorithm. These modifications are the key features of the algorithm that have not been analyzed before. The proposed modified algorithms are more practical and make it easier to achieve good load balancing among the parallel processors. We present a general framework for the analysis of this class of algorithms and derive some new and improved linear convergence results for problems with weak sharp minima of order 2 and strongly convex problems. We also show that nonmonotone synchronization schemes are admissible, which further improves the flexibility of the PVD approach.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,154 | test |
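The variable-distribution pattern behind the PVD work in this record can be illustrated with a serial toy simulation on a convex quadratic. This is not the paper's algorithm (secondary variables are simply frozen here, and the synchronization step just keeps the best block-wise result rather than forming a convex combination); the objective, constants, and names are our own.

```python
import numpy as np

A = np.diag(np.arange(1.0, 9.0))        # simple convex quadratic f(x) = 0.5 x'Ax
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

def pvd_round(x, blocks, inner_steps=10, lr=0.1):
    """One synchronization round of a variable-distribution scheme, simulated
    serially: each 'processor' updates only its own block of variables from the
    shared point x, and the round keeps the best block-wise result."""
    candidates = []
    for block in blocks:
        y = x.copy()
        for _ in range(inner_steps):
            y[block] -= lr * grad(y)[block]     # descent on the primary variables only
        candidates.append(y)
    return min(candidates, key=f)

x = np.ones(8)
blocks = [np.arange(0, 4), np.arange(4, 8)]     # two 'processors', four variables each
for _ in range(20):
    x = pvd_round(x, blocks)
print("f(x) after 20 rounds:", f(x))
```

Keeping the best candidate is the simplest admissible synchronization rule; the PVD papers analyse richer ones, including letting secondary variables move along restricted directions and taking convex combinations of the block-wise points.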
1-hop neighbor's text information: On Bayesian analysis of mixtures with an unknown number of components. : New methodology for fully Bayesian mixture analysis is developed, making use of reversible jump Markov chain Monte Carlo methods, that are capable of jumping between the parameter subspaces corresponding to different numbers of components in the mixture. A sample from the full joint distribution of all unknown variables is thereby generated, and this can be used as a basis for a thorough presentation of many aspects of the posterior distribution. The methodology is applied here to the analysis of univariate normal mixtures, using a hierarchical prior model that offers an approach to dealing with weak prior information while avoiding the mathematical pitfalls of using improper priors in the mixture context.
1-hop neighbor's text information: Model selection and accounting for model uncertainty in graphical models using Occam's window. : We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P-values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism which averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximising predictive ability. However, this has not been used in practice because computing the posterior model probabilities is hard and the number of models is very large (often greater than 10^11). We argue that the standard Bayesian formalism is unsatisfactory and we propose an alternative Bayesian approach that, we contend, takes full account of the true model uncertainty by averaging over a much smaller set of models. An efficient search algorithm is developed for finding these models. We consider two classes of graphical models that arise in expert systems: the recursive causal models and the decomposable models.
1-hop neighbor's text information: Graphical Models in Applied Multivariate Statistics. :
Target text information: Decomposable Graphical Gaussian Model Determination. : We propose a methodology for Bayesian model determination in decomposable graphical Gaussian models. To achieve this aim we consider a hyper inverse Wishart prior distribution on the concentration matrix for each given graph. To ensure compatibility across models, such prior distributions are obtained by marginalisation from the prior conditional on the complete graph. We explore alternative structures for the hyperparameters of the latter, and their consequences for the model. Model determination is carried out by implementing a reversible jump MCMC sampler. In particular, the dimension-changing move we propose involves adding or dropping an edge from the graph. We characterise the set of moves which preserve the decomposability of the graph, giving a fast algorithm for maintaining the junction tree representation of the graph at each sweep. As state variable, we propose to use the incomplete variance-covariance matrix, containing only the elements for which the corresponding element of the inverse is nonzero. This allows all computations to be performed locally, at the clique level, which is a clear advantage for the analysis of large and complex data-sets. Finally, the statistical and computational performance of the procedure is illustrated by means of both artificial and real multidimensional data-sets.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 2,255 | test |
1-hop neighbor's text information: Limited Dual Path Execution. : This work presents a hybrid branch predictor scheme that uses a limited form of dual path execution along with dynamic branch prediction to improve execution times. The ability to execute down both paths of a conditional branch enables the branch penalty to be minimized; however, relying exclusively on dual path execution is infeasible because instruction fetch rates far exceed the capability of the pipeline to retire a single branch before others must be processed. By using confidence information, available in the dynamic branch prediction state tables, a limited form of dual path execution becomes feasible. This reduces the burden on the branch predictor by allowing predictions of low confidence to be avoided. In this study we present a new approach to gather branch prediction confidence with little or no overhead, and use this confidence mechanism to determine whether dual path execution or branch prediction should be used. Comparing this hybrid predictor model to the dynamic branch predictor shows a dramatic decrease in misprediction rate, which translates to a reduction in runtime of over 20%. These results imply that dual path execution, which often is thought to be an excessively resource consuming method, may be a worthy approach if restricted with an appropriate predicting set.
Target text information: The effects of predicated execution on branch prediction. : As microprocessor designs move towards deeper pipelines and support for multiple instruction issue, steps must be taken to alleviate the negative impact of branch operations on processor performance. One approach is to use branch prediction hardware and perform speculative execution of the instructions following an unresolved branch. Another technique is to eliminate certain branch instructions altogether by translating the instructions following a forward branch into predicate form. Both these techniques are employed in many current processor designs. This paper investigates the relationship between branch prediction techniques and branch predication. In particular, we are interested in how using predication to remove a certain class of poorly predicted branches affects the prediction accuracy of the remaining branches. A variety of existing predication models for eliminating branch operations are presented, and the effect that eliminating branches has on branch prediction schemes ranging from simple prediction mechanisms to the newer more sophisticated branch predictors is studied. We also examine the impact of predication on basic block size, and how the two techniques used together affect overall processor performance.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 0 | Rule Learning | cora | 1,682 | test |
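As background for the branch-prediction discussion in this record, here is a toy table of 2-bit saturating counters, the kind of dynamic predictor whose counter state also yields a crude confidence signal of the sort the neighbour abstract exploits. The table size, hash, and branch trace are made up; this is not the hardware model evaluated in the papers.

```python
class TwoBitPredictor:
    """Table of 2-bit saturating counters indexed by (hashed) branch address.
    Counter >= 2 predicts taken; a saturated counter (0 or 3) can be read as
    a high-confidence prediction."""
    def __init__(self, table_bits=10):
        self.mask = (1 << table_bits) - 1
        self.table = [1] * (1 << table_bits)      # start weakly not-taken

    def predict(self, pc):
        counter = self.table[pc & self.mask]
        return counter >= 2, counter in (0, 3)    # (prediction, high-confidence?)

    def update(self, pc, taken):
        i = pc & self.mask
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# Toy trace: a loop branch at address 0x40 taken 9 times, then not taken, repeated.
pred, trace = TwoBitPredictor(), ([True] * 9 + [False]) * 100
hits = 0
for taken in trace:
    guess, _ = pred.predict(0x40)
    hits += guess == taken
    pred.update(0x40, taken)
print(f"accuracy: {hits / len(trace):.2%}")
```

On this trace the predictor settles into mispredicting only the loop exit, which is the behaviour a confidence-gated dual-path scheme would try to catch.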
1-hop neighbor's text information: A family of fixed-point algorithms for independent component analysis. : Independent Component Analysis (ICA) is a statistical signal processing technique whose main applications are blind source separation, blind deconvolution, and feature extraction. Estimation of ICA is usually performed by optimizing a 'contrast' function based on higher-order cumulants. In this paper, it is shown how almost any error function can be used to construct a contrast function to perform the ICA estimation. In particular, this means that one can use contrast functions that are robust against outliers. As a practical method for finding the relevant extrema of such contrast functions, a fixed-point iteration scheme is then introduced. The resulting algorithms are quite simple and converge fast and reliably. These algorithms also enable estimation of the independent components one-by-one, using a simple deflation scheme.
1-hop neighbor's text information: Simple neuron models for independent component analysis. : Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.
Target text information: One-unit learning rules for independent component analysis, : Neural one-unit learning rules for the problem of Independent Component Analysis (ICA) and blind source separation are introduced. In these new algorithms, every ICA neuron develops into a separator that finds one of the independent components. The learning rules use very simple constrained Hebbian/anti-Hebbian learning in which decorrelating feedback may be added. To speed up the convergence of these stochastic gradient descent rules, a novel computationally efficient fixed-point algorithm is introduced.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,056 | test |
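A minimal sketch of a one-unit fixed-point ICA iteration of the kind described in this record, assuming pre-whitened data and a tanh nonlinearity; it is illustrative rather than the authors' exact learning rule, and the demo mixture, seeds, and names are invented (numpy required).

```python
import numpy as np

def one_unit_fixed_point(Z, n_iter=100, tol=1e-8, seed=0):
    """Estimate one independent component from whitened data Z (dims x samples)
    with the classic fixed-point update  w <- E[z g(w'z)] - E[g'(w'z)] w,
    g = tanh, followed by renormalisation of w."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = w @ Z                                     # current projection
        g, g_prime = np.tanh(s), 1.0 - np.tanh(s) ** 2
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:           # converged up to sign
            return w_new
        w = w_new
    return w

# Toy demo: whiten a 2-D mixture of a uniform and a Laplacian source.
rng = np.random.default_rng(1)
S = np.vstack([rng.uniform(-1, 1, 20000), rng.laplace(size=20000)])
X = np.array([[2.0, 1.0], [1.0, 1.0]]) @ S            # mixed signals
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)).T @ X                            # whitened data
w = one_unit_fixed_point(Z)
print("recovered direction:", np.round(w, 3))
```

Estimating further components one by one would add a deflation step (orthogonalising each new w against those already found), as the neighbouring abstract describes.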
1-hop neighbor's text information: Extracting support data for a given task. : We report a novel possibility for extracting a small subset of a data base which contains all the information necessary to solve a given classification task: using the Support Vector Algorithm to train three different types of handwritten digit classifiers, we observed that these types of classifiers construct their decision surface from strongly overlapping small ( 4%) subsets of the data base. This finding opens up the possibility of compressing data bases significantly by disposing of the data which is not important for the solution of a given task. In addition, we show that the theory allows us to predict the classifier that will have the best generalization ability, based solely on performance on the training set and characteristics of the learning machines. This finding is important for cases where the amount of available data is limited.
Target text information: Nonlinear component analysis as a kernel eigenvalue problem. : A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance the space of all possible 5-pixel products in 16×16 images. We give the derivation of the method and present first experimental results on polynomial feature extraction for pattern recognition.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,573 | test |
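A minimal kernel PCA sketch in the spirit of the target abstract: build the kernel matrix, centre it in feature space, eigendecompose, and project. An RBF kernel and a two-circles toy dataset are our own choices for the demo, whereas the paper's experiments use polynomial kernels on image data; swapping the kernel function is the only change needed.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=10.0):
    """Minimal kernel PCA: kernel matrix, feature-space centring, eigendecomposition,
    and projection of the training points onto the leading nonlinear components."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq_dists)                       # RBF kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one          # centring in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)               # ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas, lambdas = eigvecs[:, idx], eigvals[idx]
    alphas = alphas / np.sqrt(lambdas)                  # so that the feature-space eigenvectors have unit norm
    return Kc @ alphas                                  # projections of the training data

# Toy demo: two concentric circles become (nearly) linearly separable after kernel PCA.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.concatenate([np.full(100, 0.3), np.full(100, 1.0)]) + rng.normal(0, 0.02, 200)
X = np.column_stack([r * np.cos(t), r * np.sin(t)])
Y = kernel_pca(X)
print("projected shape:", Y.shape)
```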
1-hop neighbor's text information: A theory of inferred causation. : This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation.
1-hop neighbor's text information: Theory refinement on Bayesian networks. : Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories which is able to be interrogated by the domain expert and able to be incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by "partial theory", "alternative theory representation", etc. The algorithms are an incremental variant of batch learning algorithms from the literature so can work well in batch and incremental mode.
Target text information: A Parallel Learning Algorithm for Bayesian Inference Networks: We present a new parallel algorithm for learning Bayesian inference networks from data. Our learning algorithm exploits both properties of the MDL-based score metric, and a distributed, asynchronous, adaptive search technique called nagging. Nagging is intrinsically fault tolerant, has dynamic load balancing features, and scales well. We demonstrate the viability, effectiveness, and scalability of our approach empirically with several experiments using on the order of 20 machines. More specifically, we show that our distributed algorithm can provide optimal solutions for larger problems as well as good solutions for Bayesian networks of up to 150 variables.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 452 | test |
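The MDL-style score that structure-search algorithms such as the distributed one in this record evaluate for candidate networks is decomposable over families (a variable and its parents). Below is a small illustrative scorer for discrete data; the data, variable names, and BIC-style penalty constant are ours, and the search itself (where the nagging-based parallelism would live) is omitted.

```python
import math
from collections import Counter

def mdl_score(data, structure, arity):
    """Decomposable MDL/BIC-style score of a discrete Bayesian network:
    maximum log-likelihood of the data minus (log N / 2) * number of free parameters.
    `data` is a list of dicts, `structure` maps each variable to its parent tuple,
    `arity` gives the number of states of each variable."""
    n = len(data)
    score = 0.0
    for var, parents in structure.items():
        counts = Counter((tuple(row[p] for p in parents), row[var]) for row in data)
        parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
        loglik = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in counts.items())
        n_params = (arity[var] - 1) * math.prod(arity[p] for p in parents)
        score += loglik - 0.5 * math.log(n) * n_params
    return score

# Tiny illustration: compare an edge A -> B against no edge on made-up data.
data = [{"A": a, "B": (a if i % 4 else 1 - a)} for i, a in enumerate([0, 1] * 50)]
arity = {"A": 2, "B": 2}
print("A -> B :", round(mdl_score(data, {"A": (), "B": ("A",)}, arity), 2))
print("empty  :", round(mdl_score(data, {"A": (), "B": ()}, arity), 2))
```

Because the score decomposes by family, a distributed search only needs to re-score the families touched by a candidate edge change, which is part of what makes parallel structure search practical.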
1-hop neighbor's text information: A formal analysis of the role of multi--point crossover in genetic algorithms. : On the basis of early theoretical and empirical studies, genetic algorithms have typically used 1 and 2-point crossover operators as the standard mechanisms for implementing recombination. However, there have been a number of recent studies, primarily empirical in nature, which have shown the benefits of crossover operators involving a higher number of crossover points. From a traditional theoretical point of view, the most surprising of these new results relate to uniform crossover, which involves on the average L / 2 crossover points for strings of length L. In this paper we extend the existing theoretical results in an attempt to provide a broader explanatory and predictive theory of the role of multi-point crossover in genetic algorithms. In particular, we extend the traditional disruption analysis to include two general forms of multi-point crossover: n-point crossover and uniform crossover. We also analyze two other aspects of multi-point crossover operators, namely, their recombination potential and exploratory power. The results of this analysis provide a much clearer view of the role of multi-point crossover in genetic algorithms. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested.
1-hop neighbor's text information: (1995) Genetic algorithms with multi-parent recombination. : In this paper we investigate genetic algorithms where more than two parents are involved in the recombination operation. In particular, we introduce gene scanning as a reproduction mechanism that generalizes classical crossovers, such as n-point crossover or uniform crossover, and is applicable to an arbitrary number (two or more) of parents. We performed extensive tests for optimizing numerical functions, the TSP and graph coloring to observe the effect of different numbers of parents. The experiments show that 2-parent recombination is outperformed when using more parents on the classical DeJong functions. For the other problems the results are not conclusive, in some cases 2 parents are optimal, while in some others more parents are better.
Target text information: Raising GA performance by simultaneous tuning of selective pressure and recombination disruptiveness. : In many Genetic Algorithms applications the objective is to find a (near-)optimal solution using a limited amount of computation. Given these requirements it is difficult to find a good balance between exploration and exploitation. Usually such a balance is found by tuning the various parameters (like the selective pressure, population size, the mutation- and crossover rate) of the Genetic Algorithm. As an alternative we propose simultaneous tuning of the selective pressure and the disruptiveness of the recombination operators. Our experiments show that the combination of a proper selective pressure and a highly disruptive recombination operator yields superior performance. The reduction mechanism used in a Steady-State GA has a strong influence on the optimal crossover disruptiveness. Using the worst fitness deletion strategy the building blocks present in the current best individuals are always preserved. This releases the crossover operator from the burden to maintain good building blocks and allows us to tune crossover disruptiveness to improve the search for better individuals.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 2,301 | test |
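To make the "recombination disruptiveness" knob in this record concrete, here are toy implementations of n-point and uniform crossover; uniform crossover with swap probability 0.5 sits at the most disruptive end of the scale discussed in the cited analyses. The code is illustrative, not the paper's GA.

```python
import random

def n_point_crossover(p1, p2, n_points, rng=random):
    """Classical n-point crossover on two equal-length parent strings."""
    cuts = sorted(rng.sample(range(1, len(p1)), n_points))
    child, take_first, prev = [], True, 0
    for cut in cuts + [len(p1)]:
        child.extend((p1 if take_first else p2)[prev:cut])
        take_first = not take_first
        prev = cut
    return child

def uniform_crossover(p1, p2, p_swap=0.5, rng=random):
    """Uniform crossover: each gene independently taken from either parent.
    Moving p_swap toward 0.5 makes the operator more disruptive."""
    return [b if rng.random() < p_swap else a for a, b in zip(p1, p2)]

p1, p2 = [0] * 20, [1] * 20
print("2-point :", n_point_crossover(p1, p2, 2))
print("uniform :", uniform_crossover(p1, p2))
```

The target abstract's point is that how much disruption is tolerable depends on the selective pressure and on whether the reduction scheme (e.g. worst-fitness deletion) already preserves good building blocks.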
1-hop neighbor's text information: Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. : We study task sequences that allow for speeding up the learner's average reward intake through appropriate shifts of inductive bias (changes of the learner's policy). To evaluate long-term effects of bias shifts setting the stage for later bias shifts we use the "success-story algorithm" (SSA). SSA is occasionally called at times that may depend on the policy itself. It uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations (measured up until the current SSA call). Bias shifts that survive SSA represent a lifelong success history. Until the next SSA call, they are considered useful and build the basis for additional bias shifts. SSA allows for plugging in a wide variety of learning algorithms. We plug in (1) a novel, adaptive extension of Levin search and (2) a method for embedding the learner's policy modification strategy within the policy itself (incremental self-improvement). Our inductive transfer case studies involve complex, partially observable environments where traditional reinforcement learning fails.
1-hop neighbor's text information: Reinforcement learning with self-modifying policies. : A learner's modifiable components are called its policy. An algorithm that modifies the policy is a learning algorithm. If the learning algorithm has modifiable components represented as part of the policy, then we speak of a self-modifying policy (SMP). SMPs can modify the way they modify themselves etc. They are of interest in situations where the initial learning algorithm itself can be improved by experience | this is what we call "learning to learn". How can we force some (stochastic) SMP to trigger better and better self-modifications? The success-story algorithm (SSA) addresses this question in a lifelong reinforcement learning context. During the learner's life-time, SSA is occasionally called at times computed according to SMP itself. SSA uses backtracking to undo those SMP-generated SMP-modifications that have not been empirically observed to trigger lifelong reward accelerations (measured up until the current SSA call | this evaluates the long-term effects of SMP-modifications setting the stage for later SMP-modifications). SMP-modifications that survive SSA represent a lifelong success history. Until the next SSA call, they build the basis for additional SMP-modifications. Solely by self-modifications our SMP/SSA-based learners solve a complex task in a partially observable environment (POE) whose state space is far bigger than most reported in the POE literature.
1-hop neighbor's text information: Discovering solutions with low Kolmogorov complexity and high generalization capability. : Many machine learning algorithms aim at finding "simple" rules to explain training data. The expectation is: the "simpler" the rules, the better the generalization on test data (Occam's razor). Most practical implementations, however, use measures for "simplicity" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the "Bayesian" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding "algorithmically simple" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and inspired by Levin's optimal universal search algorithm. With a given problem, solution candidates are computed by efficient "self-sizing" programs that influence their own runtime and storage size. The probabilistic search algorithm finds the "good" programs (the ones quickly computing algorithmically probable solutions fitting the training data). Simulations focus on the task of discovering "algorithmically simple" neural networks with low Kolmogorov complexity and high generalization capability. It is demonstrated that the method, at least with certain toy problems where it is computationally feasible, can lead to generalization results unmatchable by previous neural net algorithms. Much remains to be done, however, to make large scale applications and "incremental learning" feasible.
Target text information: A computer scientist's view of life, the universe, and everything. : Is the universe computable? If so, it may be much cheaper in terms of information requirements to compute all computable universes instead of just ours. I apply basic concepts of Kolmogorov complexity theory to the set of possible universes, and chat about perceived and true randomness, life, generalization, and learning in a given universe. Assumptions. A long time ago, the Great Programmer wrote a program that runs all possible universes on His Big Computer. "Possible" means "computable": (1) Each universe evolves on a discrete time scale. (2) Any universe's state at a given time is describable by a finite number of bits. One of the many universes is ours, despite some who evolved in it and claim it is incomputable. Computable universes. Let TM denote an arbitrary universal Turing machine with unidirectional output tape. TM's input and output symbols are "0", "1", and "," (comma). TM's possible input programs can be ordered alphabetically: "" (empty program), "0", "1", ",", "00", "01", "0,", "10", "11", "1,", ",0", ",1", ",,", "000", etc. Let A_k denote TM's k-th program in this list. Its output will be a finite or infinite string over the alphabet {"0", "1", ","}. This sequence of bitstrings separated by commas will be interpreted as the evolution E_k of universe U_k. If E_k includes at least one comma, then let U^l_k denote the l-th (possibly empty) bitstring before the l-th comma; U^l_k represents U_k's state at the l-th time step of E_k (k, l ∈ {1, 2, ...}). E_k is represented by the sequence U^1_k, U^2_k, ..., where U^1_k corresponds to U_k's big bang. Different algorithms may compute the same universe. Some universes are finite (those whose programs cease producing outputs at some point), others are not. I don't know about ours. TM not important. The choice of the Turing machine is not important. This is due to the compiler theorem: for each universal Turing machine C there exists a constant prefix in {"0", "1", ","}* such that for all possible programs p, C's output in response to this prefix followed by p is identical to TM's output in response to p. The prefix is the compiler that compiles programs for TM into equivalent programs for C.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 1,327 | test |
1-hop neighbor's text information: (1995) Genetic algorithms with multi-parent recombination. : In this paper we investigate genetic algorithms where more than two parents are involved in the recombination operation. In particular, we introduce gene scanning as a reproduction mechanism that generalizes classical crossovers, such as n-point crossover or uniform crossover, and is applicable to an arbitrary number (two or more) of parents. We performed extensive tests for optimizing numerical functions, the TSP and graph coloring to observe the effect of different numbers of parents. The experiments show that 2-parent recombination is outperformed when using more parents on the classical DeJong functions. For the other problems the results are not conclusive, in some cases 2 parents are optimal, while in some others more parents are better.
1-hop neighbor's text information: On the Effectiveness of Evolutionary Search in High-Dimensional NK-Landscapes: NK-landscapes offer the ability to assess the performance of evolutionary algorithms on problems with different degrees of epistasis. In this paper, we study the performance of six algorithms in NK-landscapes with low and high dimension while keeping the amount of epistatic interactions constant. The results show that compared to genetic local search algorithms, the performance of standard genetic algorithms employing crossover or mutation significantly decreases with increasing problem size. Furthermore, with increasing K, crossover based algorithms are in both cases outperformed by mutation based algorithms. However, the relative performance differences between the algorithms grow significantly with the dimension of the search space, indicating that it is important to consider high-dimensional landscapes for evaluating the performance of evolutionary algorithms.
1-hop neighbor's text information: Performance of multi-parent crossover operators on numerical function optimization problems. : The multi-parent scanning crossover, generalizing the traditional uniform crossover, and diagonal crossover, generalizing 1-point (n-point) crossovers, were introduced in [5]. In subsequent publications, see [6, 18, 19], several aspects of multi-parent recombination are discussed. Due to space limitations, however, a full overview of experimental results showing the performance of multi-parent GAs on numerical optimization problems has never been published. This technical report is meant to fill this gap and make results available.
Target text information: Eiben and C.A. Schippers. Multi-parent's niche: n-ary crossovers on NK-landscapes. : Using the multi-parent diagonal and scanning crossovers in GAs, reproduction operators obtain an adjustable arity. Hereby sexuality becomes a graded feature instead of a Boolean one. Our main objective is to relate the performance of GAs to the extent of sexuality used for reproduction on less arbitrary functions than those reported in the current literature. We investigate GA behaviour on Kauffman's NK-landscapes that allow for systematic characterization and user control of ruggedness of the fitness landscape. We test GAs with a varying extent of sexuality, ranging from asexual to 'very sexual'. Our tests were performed on two types of NK-landscapes: landscapes with random and landscapes with nearest neighbour epistasis. For both landscape types we selected landscapes from a range of ruggednesses. The results confirm the superiority of (very) sexual recombination on mildly epistatic problems.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 2,049 | test |
1-hop neighbor's text information: Scheduling and mapping: Software pipelining in the presence of structural hazards. : Recently, software pipelining methods based on an ILP (Integer Linear Programming) framework have been successfully applied to derive rate-optimal schedules for architectures involving clean pipelines, i.e., pipelines without structural hazards. The problem for architectures beyond such clean pipelines remains open. One challenge is how, under a unified ILP framework, to simultaneously represent resource constraints for unclean pipelines, and the assignment or mapping of operations from a loop to those pipelines. In this paper we provide a framework which does exactly this, and in addition constructs rate-optimal software-pipelined schedules.
1-hop neighbor's text information: Minimizing register requirements under resource-constrained rate-optimal software pipelining. : In this paper we address the following software pipelining problem: given a loop and a machine architecture with a fixed number of processor resources (e.g. function units), how can one construct a software-pipelined schedule which runs on the given architecture at the maximum possible iteration rate (a la rate-optimal) while minimizing the number of registers? The main contributions of this paper are: * First, we demonstrate that such a problem can be described by a simple mathematical formulation with precise optimization objectives under a periodic linear scheduling framework. The mathematical formulation provides a clear picture which permits one to visualize the overall solution space (for rate-optimal schedules) under different sets of constraints. * Secondly, we show that a precise mathematical formulation and its solution does make a significant performance difference! We evaluated the performance of our method against three other leading contemporary heuristic methods: Huff's Slack Scheduling [9], Wang, Eisenbeis, Jourdan and Su's FRLC [23], and Gasperoni and Schwiegelshohn's modified list scheduling [6]. Experimental results show that the method described in this paper performed significantly better than these methods.
1-hop neighbor's text information: Optimum modulo schedules for minimum register requirements. : Modulo scheduling is an efficient technique for exploiting instruction level parallelism in a variety of loops, resulting in high performance code but increased register requirements. We present a combined approach that schedules the loop operations for minimum register requirements, given a modulo reservation table. Our method determines optimal register requirements for machines with finite resources and for general dependence graphs. This method demonstrates the potential of lifetime-sensitive modulo scheduling and is useful in evaluating the performance of lifetime-sensitive modulo scheduling heuristics.
Target text information: Abstract: This paper is a scientific comparison of two code generation techniques with identical goals: generation of the best possible software-pipelined code for computers with instruction-level parallelism. Both are variants of modulo scheduling, a framework for generation of software pipelines pioneered by Rau and Glaser [RaGl81], but are otherwise quite dissimilar. One technique was developed at Silicon Graphics and is used in the MIPSpro compiler. This is the production compiler for SGI's systems which are based on the MIPS R8000 processor [Hsu94]. It is essentially a branch-and-bound enumeration of possible schedules with extensive pruning. This method is heuristic because of the way it prunes and also because of the interaction between register allocation and scheduling. The second technique aims to produce optimal results by formulating the scheduling and register allocation problem as an integrated integer linear programming (ILP) problem. This idea has received much recent exposure in the literature [AlGoGa95, Feautrier94, GoAlGa94a, GoAlGa94b, Eichenberger95], but to our knowledge all previous implementations have been too preliminary for detailed measurement and evaluation. In particular, we believe this to be the first published measurement of runtime performance for ILP-based generation of software pipelines. A particularly valuable result of this study was evaluation of the heuristic pipelining technology in the SGI compiler. One of the motivations behind the McGill research was the hope that optimal software pipelining, while not in itself practical for use in production compilers, would be useful for their evaluation and validation. Our comparison has indeed provided a quantitative validation of the SGI compiler's pipeliner, leading us to increased confidence in both techniques.
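Aside (editor's illustration, not drawn from either compiler described above): both pipeliners search over candidate initiation intervals starting from the standard lower bound MII = max(ResMII, RecMII), and both place operations subject to a modulo reservation table. A toy sketch of just those two ingredients, with hypothetical resource names:

```python
from math import ceil

def resource_mii(op_resource_use, resource_counts):
    """Resource-constrained lower bound on the initiation interval (ResMII):
    for each resource class, ceil(total uses per iteration / available units).
    A modulo scheduler starts its search at II = max(ResMII, RecMII)."""
    mii = 1
    for res, units in resource_counts.items():
        uses = sum(use.get(res, 0) for use in op_resource_use)
        mii = max(mii, ceil(uses / units))
    return mii

def slot_free(mrt, resource, cycle, ii):
    """Modulo reservation table check: an operation issued at `cycle` occupies
    its resource in row (cycle mod II); two operations clash if they need the
    same resource in the same row."""
    return (resource, cycle % ii) not in mrt

ops = [{"alu": 1}, {"alu": 1}, {"mem": 1}, {"mem": 1}, {"mem": 1}]
ii = resource_mii(ops, {"alu": 2, "mem": 1})
print(ii)                                     # 3: the memory port is the bottleneck
mrt = {("mem", 0)}                            # a load already scheduled at cycle 0
print(slot_free(mrt, "mem", cycle=3, ii=ii))  # False: cycle 3 maps onto row 0
```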
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 0 | Rule Learning | cora | 880 | test |
1-hop neighbor's text information: Markov decision processes in large state spaces. : In this paper we propose a new framework for studying Markov decision processes (MDPs), based on ideas from statistical mechanics. The goal of learning in MDPs is to find a policy that yields the maximum expected return over time. In choosing policies, agents must therefore weigh the prospects of short-term versus long-term gains. We study a simple MDP in which the agent must constantly decide between exploratory jumps and local reward mining in state space. The number of policies to choose from grows exponentially with the size of the state space, N. We view the expected returns as defining an energy landscape over policy space. Methods from statistical mechanics are used to analyze this landscape in the thermodynamic limit N → ∞. We calculate the overall distribution of expected returns, as well as the distribution of returns for policies at a fixed Hamming distance from the optimal one. We briefly discuss the problem of learning optimal policies from empirical estimates of the expected return. As a first step, we relate our findings for the entropy to the limit of high-temperature learning. Numerical simulations support the theoretical results.
1-hop neighbor's text information: Reinforcement Learning Algorithms for Average-Payoff Markovian Decision Processes. : Reinforcement learning (RL) has become a central paradigm for solving learning-control problems in robotics and artificial intelligence. RL researchers have focussed almost exclusively on problems where the controller has to maximize the discounted sum of payoffs. However, as emphasized by Schwartz (1993), in many problems, e.g., those for which the optimal behavior is a limit cycle, it is more natural and computationally advantageous to formulate tasks so that the controller's objective is to maximize the average payoff received per time step. In this paper I derive new average-payoff RL algorithms as stochastic approximation methods for solving the system of equations associated with the policy evaluation and optimal control questions in average-payoff RL tasks. These algorithms are analogous to the popular TD and Q-learning algorithms already developed for the discounted-payoff case. One of the algorithms derived here is a significant variation of Schwartz's R-learning algorithm. Preliminary empirical results are presented to validate these new algorithms.
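Aside (editor's illustration, not the exact algorithm derived in the abstract above): average-payoff action-value updates replace the discounted target with r − ρ plus the best next value, where ρ estimates the reward rate. One common R-learning-style formulation, sketched with illustrative parameter names:

```python
def r_learning_step(R, rho, s, a, r, s_next, actions, beta=0.1, alpha=0.01):
    """One update of an average-payoff (R-learning-style) action-value table.
    R maps (state, action) to relative value; rho estimates the average reward
    per time step and is only updated on greedy (non-exploratory) steps.
    Returns the updated rho."""
    best_here = max(R[(s, b)] for b in actions)
    best_next = max(R[(s_next, b)] for b in actions)
    was_greedy = R[(s, a)] >= best_here
    R[(s, a)] += beta * (r - rho + best_next - R[(s, a)])
    if was_greedy:
        rho += alpha * (r + best_next - best_here - rho)
    return rho

actions = ["stay", "jump"]
R = {(s, a): 0.0 for s in ["s0", "s1"] for a in actions}
rho = 0.0
rho = r_learning_step(R, rho, "s0", "jump", 1.0, "s1", actions)
```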
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
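Aside (editor's illustration, not part of the cited abstract): the temporal-difference idea, assigning credit from the difference between successive predictions, reduces in the tabular TD(0) case to a one-line update. Constants and state names below are illustrative:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=1.0):
    """Tabular TD(0): move V(s) toward the one-step target r + gamma * V(s'),
    i.e. assign credit from the difference between successive predictions."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

# Value prediction along repeatedly observed transitions of a two-state chain.
V = {"A": 0.0, "B": 0.0, "end": 0.0}
for _ in range(50):
    td0_update(V, "A", 0.0, "B")
    td0_update(V, "B", 1.0, "end")
print(V)   # V["B"] approaches 1, and V["A"] follows it
```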
Target text information: Learning curves bounds for Markov decision processes with undiscounted rewards. : Markov decision processes (MDPs) with undiscounted rewards represent an important class of problems in decision and control. The goal of learning in these MDPs is to find a policy that yields the maximum expected return per unit time. In large state spaces, computing these averages directly is not feasible; instead, the agent must estimate them by stochastic exploration of the state space. In this case, longer exploration times enable more accurate estimates and more informed decision-making. The learning curve for an MDP measures how the agent's performance depends on the allowed exploration time, T. In this paper we analyze these learning curves for a simple control problem with undiscounted rewards. In particular, methods from statistical mechanics are used to calculate lower bounds on the agent's performance in the thermodynamic limit T → ∞, N → ∞, α = T/N (finite), where T is the number of time steps allotted per policy evaluation and N is the size of the state space. In this limit, we provide a lower bound on the return of policies that appear optimal based on imperfect statistics.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 2,635 | test |
1-hop neighbor's text information: Exponential convergence of Langevin diffusions and their discrete approximations. : In this paper we consider a continuous-time method of approximating a given distribution π using the Langevin diffusion dL_t = dW_t + (1/2) ∇ log π(L_t) dt. We find conditions under which this diffusion converges exponentially quickly to π or does not: in one dimension, these are essentially that for distributions with exponential tails of the form π(x) ∝ exp(−γ|x|^β), 0 < β < ∞, exponential convergence occurs if and only if β ≥ 1. We then consider conditions under which the discrete approximations to the diffusion converge. We first show that even when the diffusion itself converges, naive discretisations need not do so. We then consider a "Metropolis-adjusted" version of the algorithm, and find conditions under which this also converges at an exponential rate: perhaps surprisingly, even the Metropolised version need not converge exponentially fast even if the diffusion does. We briefly discuss a truncated form of the algorithm which, in practice, should avoid the difficulties of the other forms.
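Aside (editor's illustration, not part of the cited abstract): the "Metropolis-adjusted" discretisation mentioned above proposes one Euler step of the Langevin diffusion and then accepts or rejects it so that the chain targets π exactly. A minimal sketch for a generic differentiable log-density; the step size and seed are illustrative:

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, n_steps=5000, h=0.1, seed=0):
    """Metropolis-adjusted Langevin algorithm: propose one Euler step of the
    Langevin diffusion, then accept/reject so the chain targets pi exactly."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        mean_x = x + 0.5 * h * grad_log_pi(x)
        y = mean_x + np.sqrt(h) * rng.standard_normal(x.shape)
        mean_y = y + 0.5 * h * grad_log_pi(y)
        # log q(x | y) - log q(y | x) for the Gaussian proposal.
        log_q_ratio = (np.sum((y - mean_x) ** 2) - np.sum((x - mean_y) ** 2)) / (2 * h)
        if np.log(rng.uniform()) < log_pi(y) - log_pi(x) + log_q_ratio:
            x = y
        samples.append(x.copy())
    return np.array(samples)

# Standard normal target: log pi(x) = -x^2/2 up to a constant.
out = mala(lambda x: -0.5 * np.sum(x ** 2), lambda x: -x, x0=[3.0])
print(out.mean(), out.var())
```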
1-hop neighbor's text information: Geometric ergodicity and hybrid Markov chains. : Various notions of geometric ergodicity for Markov chains on general state spaces exist. In this paper, we review certain relations and implications among them. We then apply these results to a collection of chains commonly used in Markov chain Monte Carlo simulation algorithms, the so-called hybrid chains. We prove that under certain conditions, a hybrid chain will "inherit" the geometric ergodicity of its constituent parts. Acknowledgements. We thank Charlie Geyer for a number of very useful comments regarding spectral theory and central limit theorems. We thank Alison Gibbs, Phil Reiss, Peter Rosenthal, and Richard Tweedie for very helpful discussions. We thank the referee and the editor for many excellent suggestions.
1-hop neighbor's text information: Rates of convergence of the Hastings and Metropolis algorithms. : We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution π. In the independence case (in ℝ^k) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of π; in the symmetric case (in ℝ only) we show geometric convergence essentially occurs if and only if π has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective.
Target text information: A note on acceptance rate criteria for CLTs for Hastings-Metropolis algorithms: This note considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis-Hastings algorithms which are constructed in terms of a rejection probability (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis-adjusted Langevin algorithm. The examples are rather specialised, although in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used. I would like to thank Kerrie Mengersen, Jeff Rosenthal, and Richard Tweedie for useful conversations on the subject of this paper.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 623 | test |
1-hop neighbor's text information: Neural network exploration using optimal experiment design. : We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action has much to offer, especially in domains where its high computational costs can be tolerated. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. The author was also funded by ATR Human Information Processing Laboratories, Siemens Corporate Research and NSF grant CDA-9309300.
1-hop neighbor's text information: Exploration bonuses and dual control. : Finding the Bayesian balance between exploration and exploitation in adaptive optimal control is in general intractable. This paper shows how to compute suboptimal estimates based on a certainty equivalence approximation arising from a form of dual control. This systematizes and extends existing uses of exploration bonuses in reinforcement learning (Sutton, 1990). The approach has two components: a statistical model of uncertainty in the world and a way of turning this into exploratory behaviour.
1-hop neighbor's text information: Information-Based Objective Functions for Active Data Selection, : Learning can be made more efficient if we can actively select particularly salient data points. Within a Bayesian learning framework, objective functions are discussed which measure the expected informativeness of candidate measurements. Three alternative specifications of what we want to gain information about lead to three different criteria for data selection. All these criteria depend on the assumption that the hypothesis space is correct, which may prove to be their main weakness.
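Aside (editor's illustration, not part of the cited abstract): one of the objective functions this line of work considers, expected informativeness about the model parameters, reduces for a linear-Gaussian model to querying where the predictive variance is largest. A sketch under that assumption, with illustrative names:

```python
import numpy as np

def most_informative_query(Phi_train, candidates, alpha=1.0, noise_var=0.1):
    """Pick the candidate input whose prediction is currently most uncertain
    under a Bayesian linear model: maximise phi(x)^T A^{-1} phi(x), where
    A = alpha*I + Phi^T Phi / noise_var is the posterior precision of the
    weights.  For linear-Gaussian models this greedy choice also maximises the
    expected information gained about the weights from one more observation."""
    d = Phi_train.shape[1]
    A = alpha * np.eye(d) + Phi_train.T @ Phi_train / noise_var
    A_inv = np.linalg.inv(A)
    variances = np.einsum("nd,dk,nk->n", candidates, A_inv, candidates)
    return int(np.argmax(variances)), variances

rng = np.random.default_rng(0)
Phi_train = rng.normal(size=(20, 3))      # features of already-labelled points
candidates = rng.normal(size=(100, 3))    # pool of possible queries
idx, _ = most_informative_query(Phi_train, candidates)
print("query candidate", idx)
```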
Target text information: "What is the best thing to do right now?" : getting beyond greedy exploration\', :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,393 | val |
1-hop neighbor's text information: Dynamic Programming and Markov Processes. : The problem of maximizing the expected total discounted reward in a completely observable Markovian environment, i.e., a Markov decision process (mdp), models a particular class of sequential decision problems. Algorithms have been developed for making optimal decisions in mdps given either an mdp specification or the opportunity to interact with the mdp over time. Recently, other sequential decision-making problems have been studied prompting the development of new algorithms and analyses. We describe a new generalized model that subsumes mdps as well as many of the recent variations. We prove some basic results concerning this model and develop generalizations of value iteration, policy iteration, model-based reinforcement-learning, and Q-learning that can be used to make optimal decisions in the generalized model under various assumptions. Applications of the theory to particular models are described, including risk-averse mdps, exploration-sensitive mdps, sarsa, Q-learning with spreading, two-player games, and approximate max picking via sampling. Central to the results are the contraction property of the value operator and a stochastic-approximation theorem that reduces asynchronous convergence to synchronous convergence.
1-hop neighbor's text information: Neural network exploration using optimal experiment design. : We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action has much to offer, especially in domains where its high computational costs can be tolerated. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. The author was also funded by ATR Human Information Processing Laboratories, Siemens Corporate Research and NSF grant CDA-9309300.
1-hop neighbor's text information: Q-Learning for Bandit Problems: Multi-armed bandits may be viewed as decompositionally-structured Markov decision processes (MDP's) with potentially very large state sets. A particularly elegant methodology for computing optimal policies was developed over twenty years ago by Gittins [Gittins & Jones, 1974]. Gittins' approach reduces the problem of finding optimal policies for the original MDP to a sequence of low-dimensional stopping problems whose solutions determine the optimal policy through the so-called "Gittins indices." Katehakis and Veinott [Katehakis & Veinott, 1987] have shown that the Gittins index for a task in state i may be interpreted as a particular component of the maximum-value function associated with the "restart-in-i" process, a simple MDP to which standard solution methods for computing optimal policies, such as successive approximation, apply. This paper explores the problem of learning the Gittins indices on-line without the aid of a process model; it suggests utilizing task-state-specific Q-learning agents to solve their respective restart-in-state-i subproblems, and includes an example in which the online reinforcement learning approach is applied to a simple problem of stochastic scheduling, one instance drawn from a wide class of problems that may be formulated as bandit problems.
Target text information: Exploration bonuses and dual control. : Finding the Bayesian balance between exploration and exploitation in adaptive optimal control is in general intractable. This paper shows how to compute suboptimal estimates based on a certainty equivalence approximation arising from a form of dual control. This systematizes and extends existing uses of exploration bonuses in reinforcement learning (Sutton, 1990). The approach has two components: a statistical model of uncertainty in the world and a way of turning this into exploratory behaviour.
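Aside (editor's illustration): the fixed exploration bonuses that this paper systematizes can be as simple as inflating each action value by a term that grows with the time since the action was last tried (in the spirit of Sutton, 1990); the certainty-equivalence estimates developed in the abstract above are more principled than this sketch. Names and the constant kappa are illustrative:

```python
import math

def select_with_bonus(Q, steps_since_tried, state, actions, kappa=0.05):
    """Greedy selection with a simple exploration bonus: each action's value is
    inflated by kappa * sqrt(n), where n counts the steps since the action was
    last tried in this state.  A fixed heuristic, not a dual-control estimate."""
    return max(actions,
               key=lambda a: Q[(state, a)]
               + kappa * math.sqrt(steps_since_tried[(state, a)]))

Q = {("s0", "left"): 1.0, ("s0", "right"): 0.9}
steps_since_tried = {("s0", "left"): 1, ("s0", "right"): 400}
print(select_with_bonus(Q, steps_since_tried, "s0", ["left", "right"]))  # "right"
```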
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 2,507 | test |
1-hop neighbor's text information: Keeping neural networks simple by minimizing the description length of the weights. : Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise and the noise level can be adapted during learning to optimize the trade-off between the expected squared error of the network and the amount of information in the weights. We describe a method of computing the derivatives of the expected squared error and of the amount of information in the noisy weights in a network that contains a layer of non-linear hidden units. Provided the output units are linear, the exact derivatives can be computed efficiently without time-consuming Monte Carlo simulations. The idea of minimizing the amount of information that is required to communicate the weights of a neural network leads to a number of interesting schemes for encoding the weights.
1-hop neighbor's text information: Bayesian Methods for Adaptive Models. :
1-hop neighbor's text information: Bayesian nonlinear modelling for the prediction competition. : The 1993 energy prediction competition involved the prediction of a series of building energy loads from a series of environmental input variables. Non-linear regression using `neural networks' is a popular technique for such modeling tasks. Since it is not obvious how large a time-window of inputs is appropriate, or what preprocessing of inputs is best, this can be viewed as a regression problem in which there are many possible input variables, some of which may actually be irrelevant to the prediction of the output variable. Because a finite data set will show random correlations between the irrelevant inputs and the output, any conventional neural network (even with regularisation or `weight decay') will not set the coefficients for these junk inputs to zero. Thus the irrelevant variables will hurt the model's performance. The Automatic Relevance Determination (ARD) model puts a prior over the regression parameters which embodies the concept of relevance. This is done in a simple and `soft' way by introducing multiple regularisation constants, one associated with each input. Using Bayesian methods, the regularisation constants for junk inputs are automatically inferred to be large, preventing those inputs from causing significant overfitting.
Target text information: MacKay (1995). Probabilistic networks: new models and new methods. : In this paper I describe the implementation of a probabilistic regression model in BUGS. BUGS is a program that carries out Bayesian inference on statistical problems using a simulation technique known as Gibbs sampling. It is possible to implement surprisingly complex regression models in this environment. I demonstrate the simultaneous inference of an interpolant and an input-dependent noise level.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 1,787 | test |
1-hop neighbor's text information: Approximation by scattered shifts of a radial basis function, : The paper studies L_1(ℝ^d)-norm approximations from a space spanned by a discrete set of translates of a basis function. Attention here is restricted to basis functions whose Fourier transform is smooth on ℝ^d \ {0} and has a singularity at the origin. Examples of such basis functions are the thin-plate splines and the multiquadrics, as well as other types of radial basis functions that are employed in Approximation Theory. The above approximation problem is well understood in case the set of centers used for translating forms a lattice in ℝ^d, and many optimal and quasi-optimal approximation schemes can already be found in the literature. In contrast, only few, mostly specific, results are known for a set of scattered centers. The main objective of this paper is to provide a general tool for extending approximation schemes that use integer translates of a basis function to the non-uniform case. We introduce a single, relatively simple, conversion method that preserves the approximation orders provided by a large number of schemes presently in the literature (more precisely, to almost all "stationary schemes"). In anticipation of future introduction of new schemes for uniform grids, an effort is made to impose only a few mild conditions on the basis function, which still allow for a unified error analysis to hold. In the course of the discussion here, the recent results of [BuDL] on scattered center approximation are reproduced and improved upon.
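Aside (editor's illustration, not part of the cited abstract, which concerns approximation orders rather than fitting): "translates of a basis function" such as the multiquadric can be made concrete by the classical scattered-data interpolant, which solves one linear system over the centers. A sketch with an illustrative shape parameter:

```python
import numpy as np

def rbf_interpolant(centers, values, c=1.0):
    """Fit s(x) = sum_j coef_j * phi(||x - x_j||) with the multiquadric
    phi(r) = sqrt(r^2 + c^2), by solving the square collocation system
    A coef = values with A_ij = phi(||x_i - x_j||)."""
    centers = np.atleast_2d(centers)
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = np.sqrt(dists ** 2 + c ** 2)
    coef = np.linalg.solve(A, values)

    def s(x):
        r = np.linalg.norm(np.atleast_2d(x)[:, None, :] - centers[None, :, :], axis=-1)
        return np.sqrt(r ** 2 + c ** 2) @ coef

    return s

# Scattered 1-D centers (as d=1 points) interpolating sin(x).
x = np.sort(np.random.default_rng(0).uniform(0, 6, size=15)).reshape(-1, 1)
s = rbf_interpolant(x, np.sin(x).ravel())
print(s(np.array([[1.0], [2.5]])))
```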
1-hop neighbor's text information: Approximation from shift-invariant subspaces of L_2(ℝ^d), CMS TSR #92-2, : A complete characterization is given of closed shift-invariant subspaces of L_2(ℝ^d) which provide a specified approximation order. When such a space is principal (i.e., generated by a single function), then this characterization is in terms of the Fourier transform of the generator. As a special case, we obtain the classical Strang-Fix conditions, but without requiring the generating function to decay at infinity. The approximation order of a general closed shift-invariant space is shown to be already realized by a specifiable principal subspace.
Target text information: Negative observations concerning approximations from spaces generated by scattered shifts of functions vanishing at ∞: Approximation by scattered shifts of a basis function, taken at a set A of scattered centers, is considered, and different methods for localizing these translates are compared. It is argued in the note that the superior localization processes are those that employ the original translates only.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 262 | test |
1-hop neighbor's text information: Learning functions in k-DNF from reinforcement. : An agent that must learn to act in the world by trial and error faces the reinforcement learning problem, which is quite different from standard concept learning. Although good algorithms exist for this problem in the general case, they are often quite inefficient and do not exhibit generalization. One strategy is to find restricted classes of action policies that can be learned more efficiently. This paper pursues that strategy by developing algorithms that can efficiently learn action maps that are expressible in k-DNF. The algorithms are compared with existing methods in empirical trials and are shown to have very good performance.
1-hop neighbor's text information: Disambiguation and Grammar as Emergent Soft Constraints: When reading a sentence such as "The diplomat threw the ball in the ballpark for the princess" our interpretation changes from a dance event to baseball and back to dance. Such on-line disambiguation happens automatically and appears to be based on dynamically combining the strengths of association between the keywords and the two senses. Subsymbolic neural networks are very good at modeling such behavior. They learn word meanings as soft constraints on interpretation, and dynamically combine these constraints to form the most likely interpretation. On the other hand, it is very difficult to show how systematic language structures such as relative clauses could be processed in such a system. The network would only learn to associate them to specific contexts and would not be able to process new combinations of them. A closer look at understanding embedded clauses shows that humans are not very systematic in processing grammatical structures either. For example, "The girl who the boy who the girl who lived next door blamed hit cried" is very difficult to understand, whereas "The car that the man who the dog that had rabies bit drives is in the garage" is not. This difference emerges from the same semantic constraints that are at work in the disambiguation task. In this chapter we will show how the subsymbolic parser can be combined with high-level control that allows the system to process novel combinations of relative clauses systematically, while still being sensitive to the semantic constraints.
1-hop neighbor's text information: (1997b) Probabilistic Modeling for Combinatorial Optimization, : Probabilistic models have recently been utilized for the optimization of large combinatorial search problems. However, complex probabilistic models that attempt to capture inter-parameter dependencies can have prohibitive computational costs. The algorithm presented in this paper, termed COMIT, provides a method for using probabilistic models in conjunction with fast search techniques. We show how COMIT can be used with two very different fast search algorithms: hillclimbing and Population-based incremental learning (PBIL). The resulting algorithms maintain many of the benefits of probabilistic modeling, with far less computational expense. Extensive empirical results are provided; COMIT has been successfully applied to job-shop scheduling, traveling salesman, and knapsack problems. This paper also presents a review of probabilistic modeling for combinatorial optimization.
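Aside (editor's illustration, not part of the cited abstract): basic PBIL, one of the two fast search methods named above, keeps a vector of per-bit probabilities and nudges it toward the best sampled solution each generation; COMIT's dependency modeling is not shown. All constants are illustrative:

```python
import random

def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=200, seed=0):
    """Population-based incremental learning: keep a vector of per-bit
    probabilities, sample a population from it, and shift each probability
    toward the corresponding bit of the best sample found that generation."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        best = max(pop, key=fitness)
        p = [(1 - lr) * p[i] + lr * best[i] for i in range(n_bits)]
    return p

# OneMax: maximise the number of ones; probabilities should drift toward 1.
print(pbil(sum, n_bits=20))
```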
Target text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,634 | test |
1-hop neighbor's text information: Gain adaptation beats least squares. : I present computational results suggesting that gain-adaptation algorithms based in part on connectionist learning methods may improve over least squares and other classical parameter-estimation methods for stochastic time-varying linear systems. The new algorithms are evaluated with respect to classical methods along three dimensions: asymptotic error, computational complexity, and required prior knowledge about the system. The new algorithms are all of the same order of complexity as LMS methods, O(n), where n is the dimensionality of the system, whereas least-squares methods and the Kalman filter are O(n^2). The new methods also improve over the Kalman filter in that they do not require a complete statistical model of how the system varies over time. In a simple computational experiment, the new methods are shown to produce asymptotic error levels near that of the optimal Kalman filter and significantly below those of least-squares and LMS methods. The new methods may perform better even than the Kalman filter if there is any error in the filter's model of how the system varies over time.
1-hop neighbor's text information: Baird (1995). Residual algorithms: Reinforcement learning with function approximation. : A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can easily become unstable when implemented directly with a general function-approximation system, such as a sigmoidal multilayer perceptron, a radial-basis-function system, a memory-based learning system, or even a linear function-approximation system. A new class of algorithms, residual gradient algorithms, is proposed, which perform gradient descent on the mean squared Bellman residual, guaranteeing convergence. It is shown, however, that they may learn very slowly in some cases. A larger class of algorithms, residual algorithms, is proposed that has the guaranteed convergence of the residual gradient algorithms, yet can retain the fast learning speed of direct algorithms. In fact, both direct and residual gradient algorithms are shown to be special cases of residual algorithms, and it is shown that residual algorithms can combine the advantages of each approach. The direct, residual gradient, and residual forms of value iteration, Q-learning, and advantage learning are all presented. Theoretical analysis is given explaining the properties these algorithms have, and simulation results are given that demonstrate these properties.
Target text information: Adapting Bias by Gradient Descent: An Incremental Version of Delta-Bar-Delta, : Appropriate bias is widely viewed as the key to efficient learning and generalization. I present a new algorithm, the Incremental Delta-Bar-Delta (IDBD) algorithm, for the learning of appropriate biases based on previous learning experience. The IDBD algorithm is developed for the case of a simple, linear learning system, the LMS or delta rule with a separate learning-rate parameter for each input. The IDBD algorithm adjusts the learning-rate parameters, which are an important form of bias for this system. Because bias in this approach is adapted based on previous learning experience, the appropriate testbeds are drifting or non-stationary learning tasks. For particular tasks of this type, I show that the IDBD algorithm performs better than ordinary LMS and in fact finds the optimal learning rates. The IDBD algorithm extends and improves over prior work by Jacobs and by me in that it is fully incremental and has only a single free parameter. This paper also extends previous work by presenting a derivation of the IDBD algorithm as gradient descent in the space of learning-rate parameters. Finally, I offer a novel interpretation of the IDBD algorithm as an incremental form of hold-one-out cross validation.
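Aside (editor's illustration): the abstract describes LMS with one learning-rate parameter per input, adapted by a meta-level gradient step. The sketch below follows that description; exact constants and clipping in the published pseudocode may differ:

```python
import math

def idbd_update(w, beta, h, x, y, theta=0.01):
    """One IDBD-style step for the LMS rule: each input keeps its own log
    learning rate beta[i], adapted by a meta gradient step; h[i] is a decaying
    trace of recent weight updates.  Returns the prediction error."""
    delta = y - sum(wi * xi for wi, xi in zip(w, x))
    for i in range(len(w)):
        beta[i] += theta * delta * x[i] * h[i]
        alpha_i = math.exp(beta[i])
        w[i] += alpha_i * delta * x[i]
        h[i] = h[i] * max(0.0, 1.0 - alpha_i * x[i] * x[i]) + alpha_i * delta * x[i]
    return delta

# Track a 2-input linear target; w should approach [2, -1].
w, beta, h = [0.0, 0.0], [math.log(0.05)] * 2, [0.0, 0.0]
for t in range(1000):
    x = [1.0, (t % 7) / 7.0]
    y = 2.0 * x[0] - 1.0 * x[1]
    idbd_update(w, beta, h, x, y)
print(w)
```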
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,438 | test |
1-hop neighbor's text information: On centering neural network weight updates. : Technical Report IDSIA-19-97 Abstract. It has long been known that neural networks can learn faster when their input and hidden unit activities are centered about zero; recently we have extended this approach to also encompass the centering of error signals (Schraudolph and Sejnowski, 1996). Here we generalize this notion to all factors involved in the weight update, leading us to propose centering the slope of hidden unit activation functions as well. Slope centering removes the linear component of backpropagated error; this improves credit assignment in networks with shortcut connections. Benchmark results show that this can speed up learning significantly without adversely affecting the trained network's generalization ability.
1-hop neighbor's text information: "Centering neural network gradient factors", : Technical Report IDSIA-19-97 Abstract. It has long been known that neural networks can learn faster when their input and hidden unit activities are centered about zero; recently we have extended this approach to also encompass the centering of error signals [2]. Here we generalize this notion to all factors involved in the network's gradient, leading us to propose centering the slope of hidden unit activation functions as well. Slope centering removes the linear component of backpropagated error; this improves credit assignment in networks with shortcut connections. Benchmark results show that this can speed up learning significantly without adversely affecting the trained network's generalization ability.
Target text information: "Tempering backpropagation networks: Not all weights are created equal", : Backpropagation learning algorithms typically collapse the network's structure into a single vector of weight parameters to be optimized. We suggest that their performance may be improved by utilizing the structural information instead of discarding it, and introduce a framework for tempering each weight accordingly. In the tempering model, activation and error signals are treated as approximately independent random variables. The characteristic scale of weight changes is then matched to that of the residuals, allowing structural properties such as a node's fan-in and fan-out to affect the local learning rate and backpropagated error. The model also permits calculation of an upper bound on the global learning rate for batch updates, which in turn leads to different update rules for bias vs. non-bias weights.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 2,198 | test |
1-hop neighbor's text information: Computational modeling of spatial attention:
1-hop neighbor's text information: A Brief History of Connectionism: Connectionist research is firmly established within the scientific community, especially within the multi-disciplinary field of cognitive science. This diversity, however, has created an environment which makes it difficult for connectionist researchers to remain aware of recent advances in the field, let alone understand how the field has developed. This paper attempts to address this problem by providing a brief guide to connectionist research. The paper begins by defining the basic tenets of connectionism. Next, the development of connectionist research is traced, commencing with connectionism's philosophical predecessors, moving to early psychological and neuropsychological influences, followed by the mathematical and computing contributions to connectionist research. Current research is then reviewed, focusing specifically on the different types of network architectures and learning rules in use. The paper concludes by suggesting that neural network research|at least in cognitive science|should move towards models that incorporate the relevant functional principles inherent in neurobiological systems.
Target text information: The end of the line for a brain-damaged model of unilateral neglect. : For over a century, it has been known that damage to the right hemisphere of the brain can cause patients to be unaware of the contralesional side of space. This condition, known as unilateral neglect, represents a collection of clinically related spatial disorders characterized by the failure in free vision to respond, explore, or orient to stimuli predominantly located on the side of space opposite the damaged hemisphere. Recent studies using the simple task of line bisection, a conventional diagnostic test, have proved surprisingly revealing with respect to the spatial and attentional impairments involved in neglect. In line bisection, the patient is asked to mark the midpoint of a thin horizontal line on a sheet of paper. Neglect patients generally transect far to the right of the center. Extensive studies of line bisection have been conducted, manipulating|among other factors|line length, orientation, and position. We have simulated the pattern of results using an existing computational model of visual perception and selective attention called morsel (Mozer, 1991). morsel has already been used to model data in a related disorder, neglect dyslexia (Mozer & Behrmann, 1990). In this earlier work, morsel was "lesioned" in accordance with the damage we suppose to have occurred in the brains of
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,969 | test |
1-hop neighbor's text information: Improving rule-based systems through case-based reasoning. : A novel architecture is presented for combining rule-based and case-based reasoning. The central idea is to apply the rules to a target problem to get a first approximation to the answer; but if the problem is judged to be compellingly similar to a known exception of the rules in any aspect of its behavior, then that aspect is modelled after the exception rather than the rules. The architecture is implemented for the full-scale task of pronouncing surnames. Preliminary results suggest that the system performs almost as well as the best commercial systems. However, of more interest than the absolute performance of the system is the result that this performance was better than what could have been achieved with the rules alone. This illustrates the capacity of the architecture to improve on the rule-based system it starts with. The results also demonstrate a beneficial interaction in the system, in that improving the rules speeds up the case-based component.
1-hop neighbor's text information: Using Case-Based Reasoning as a Reinforcement Learning Framework for Optimization with Changing Criteria: Practical optimization problems such as job-shop scheduling often involve optimization criteria that change over time. Repair-based frameworks have been identified as flexible computational paradigms for difficult combinatorial optimization problems. Since the control problem of repair-based optimization is severe, Reinforcement Learning (RL) techniques can be potentially helpful. However, some of the fundamental assumptions made by traditional RL algorithms are not valid for repair-based optimization. Case-Based Reasoning (CBR) compensates for some of the limitations of traditional RL approaches. In this paper, we present a Case-Based Reasoning RL approach, implemented in the CABINS system, for repair-based optimization. We chose job-shop scheduling as the testbed for our approach. Our experimental results show that CABINS is able to effectively solve problems with changing optimization criteria which are not known to the system and only exist implicitly in an extensional manner in the case base.
1-hop neighbor's text information: "Using case-based reasoning to acquire user scheduling preferences that change over time," : Production/Manufacturing scheduling typically involves the acquisition of user optimization preferences. The ill-structuredness of both the problem space and the desired objectives make practical scheduling problems difficult to formalize and costly to solve, especially when problem configurations and user optimization preferences change over time. This paper advocates an incremental revision framework for improving schedule quality and incorporating user dynamically changing preferences through Case-Based Reasoning. Our implemented system, called CABINS, records situation-dependent tradeoffs and consequences that result from schedule revision to guide schedule improvement. The preliminary experimental results show that CABINS is able to effectively capture both user static and dynamic preferences which are not known to the system and only exist implicitly in a extensional manner in the case base.
Target text information: Case-based Acquisition of User Preferences for Solution Improvement in Ill-Structured Domains, : We have developed an approach to acquire complicated user optimization criteria and use them to guide
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 1,802 | test |
1-hop neighbor's text information: Irrelevant features and the subset selection problem. : We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.
1-hop neighbor's text information: Rule-based machine learning methods for function prediction. : We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.
1-hop neighbor's text information: Wrappers for Performance Enhancement and Oblivious Decision Graphs. :
Target text information: Search-based Class Discretization: We present a methodology that enables the use of classification algorithms on regression tasks. We implement this method in the system RECLA, which transforms a regression problem into a classification one and then uses an existing classification system to solve this new problem. The transformation consists of mapping a continuous variable into an ordinal variable by grouping its values into an appropriate set of intervals. We use misclassification costs as a means to reflect the implicit ordering among the ordinal values of the new variable. We describe a set of alternative discretization methods and, based on our experimental results, justify the need for a search-based approach to choose the best method. Our experimental results confirm the validity of our search-based approach to class discretization, and reveal the accuracy benefits of adding misclassification costs.
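Aside (editor's illustration, not RECLA itself): the core transformation, grouping a continuous target into intervals and then wrapping a search over candidate discretizations around a classifier's estimated accuracy, can be sketched as follows. The equal-frequency binning, the decision-tree classifier, and the omission of misclassification costs are all simplifications:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def discretize_and_score(X, y_continuous, n_bins):
    """Map the continuous target into n_bins equal-frequency intervals and
    return the cross-validated accuracy of a classifier on the result."""
    edges = np.quantile(y_continuous, np.linspace(0, 1, n_bins + 1)[1:-1])
    y_class = np.digitize(y_continuous, edges)
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X, y_class, cv=5).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] * 2.0 + rng.normal(scale=0.3, size=300)
# Wrapper-style search over the number of intervals.
best = max(range(2, 8), key=lambda k: discretize_and_score(X, y, k))
print("chosen number of intervals:", best)
```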
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 640 | test |
1-hop neighbor's text information: Some varieties of qualitative probability. :
1-hop neighbor's text information: Exploiting causal independence in Bayesian network inference. : A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as or, sum or max, on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.
1-hop neighbor's text information: State-space abstraction for anytime evaluation of probabilistic networks. : One important factor determining the computational complexity of evaluating a probabilistic network is the cardinality of the state spaces of the nodes. By varying the granularity of the state spaces, one can trade off accuracy in the result for computational efficiency. We present an anytime procedure for approximate evaluation of probabilistic networks based on this idea. On application to some simple networks, the procedure exhibits a smooth improvement in approximation quality as computation time increases. This suggests that state-space abstraction is one more useful control parameter for designing real-time probabilistic reasoners.
Target text information: Incremental tradeoff resolution in qualitative probabilistic networks. : Qualitative probabilistic reasoning in a Bayesian network often reveals tradeoffs: relationships that are ambiguous due to competing qualitative influences. We present two techniques that combine qualitative and numeric probabilistic reasoning to resolve such tradeoffs, inferring the qualitative relationship between nodes in a Bayesian network. The first approach incrementally marginalizes nodes that contribute to the ambiguous qualitative relationships. The second approach evaluates approximate Bayesian networks for bounds of probability distributions, and uses these bounds to determine the qualitative relationships in question. This approach is also incremental in that the algorithm refines the state spaces of random variables for tighter bounds until the qualitative relationships are resolved. Both approaches provide systematic methods for tradeoff resolution at potentially lower computational cost than application of purely numeric methods.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 6 | Probabilistic Methods | cora | 2,385 | val |
1-hop neighbor's text information: BECOMING AN EXPERT CASE-BASED REASONER: LEARNING TO ADAPT PRIOR CASES: Experience plays an important role in the development of human expertise. One computational model of how experience affects expertise is provided by research on case-based reasoning, which examines how stored cases encapsulating traces of specific prior problem-solving episodes can be retrieved and re-applied to facilitate new problem-solving. Much progress has been made in methods for accessing relevant cases, and case-based reasoning is receiving wide acceptance both as a technology for developing intelligent systems and as a cognitive model of a human reasoning process. However, one important aspect of case-based reasoning remains poorly understood: the process by which retrieved cases are adapted to fit new situations. The difficulty of encoding effective adaptation rules by hand is widely recognized as a serious impediment to the development of fully autonomous case-based reasoning systems. Consequently, an important question is how case-based reasoning systems might learn to improve their expertise at case adaptation. We present a framework for acquiring this expertise by using a combination of general adaptation rules, introspective reasoning, and case-based reasoning about the case adaptation task itself.
1-hop neighbor's text information: Towards a computer model of memory search strategy learning. : Much recent research on modeling memory processes has focused on identifying useful indices and retrieval strategies to support particular memory tasks. Another important question concerning memory processes, however, is how retrieval criteria are learned. This paper examines the issues involved in modeling the learning of memory search strategies. It discusses the general requirements for appropriate strategy learning and presents a model of memory search strategy learning applied to the problem of retrieving relevant information for adapting cases in case-based reasoning. It discusses an implementation of that model, and, based on the lessons learned from that implementation, points towards issues and directions in refining the model.
1-hop neighbor's text information: Issues in goal-driven explanation. : When a reasoner explains surprising events for its internal use, a key motivation for explaining is to perform learning that will facilitate the achievement of its goals. Human explainers use a range of strategies to build explanations, including both internal reasoning and external information search, and goal-based considerations have a profound effect on their choices of when and how to pursue explanations. However, standard AI models of explanation rely on goal-neutral use of a single fixed strategy, generally backwards chaining, to build their explanations. This paper argues that explanation should be modeled as a goal-driven learning process for gathering and transforming information, and discusses the issues involved in developing an active multi-strategy process for goal-driven explanation.
Target text information: Goal-Driven Learning. : In Artificial Intelligence, Psychology, and Education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with different goals process information differently; studies in education show that goals have strong effects on what students learn; and functional arguments from machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated by the symposium, placing them in the context of open questions and current research directions in goal-driven learning. Technical Report #85, Cognitive Science Program, Indiana University, Bloomington, Indiana, January 1993.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 2,562 | test |
1-hop neighbor's text information: Negative observations concerning approximations from spaces generated by scattered shifts of functions vanishing at infinity: Approximation by shifts of a basis function to the points α of a scattered set A is considered, and different methods for localizing these translates are compared. It is argued in the note that the superior localization processes are those that employ the original translates only.
1-hop neighbor's text information: Approximation from shift-invariant subspaces of L 2 (IR d ), CMS TSR #92-2, : A complete characterization is given of closed shift-invariant subspaces of L 2 (IR d ) which provide a specified approximation order. When such a space is principal (i.e., generated by a single function), then this characterization is in terms of the Fourier transform of the generator. As a special case, we obtain the classical Strang-Fix conditions, but without requiring the generating function to decay at infinity. The approximation order of a general closed shift-invariant space is shown to be already realized by a specifiable principal subspace.
1-hop neighbor's text information: APPROXIMATION IN L p (R d ) FROM SPACES SPANNED BY THE PERTURBED INTEGER TRANSLATES OF: The problem of approximating smooth L p -functions from spaces spanned by the integer translates of a radially symmetric function is very well understood. In case the points of translation, ffi, are scattered throughout R d , the approximation problem is only well understood in the "stationary" setting. In this work, we treat the "non-stationary" setting under the assumption that ffi is a small perturbation of Z d . Our results, which are similar in many respects to the known results for the case ffi = Z d , apply specifically to the examples of the Gauss kernel and the Generalized Multiquadric.
Target text information: Approximation by scattered shifts of a radial basis function, : The paper studies L 1 (IR d )-norm approximations from a space spanned by a discrete set of translates of a basis function . Attention here is restricted to functions whose Fourier transform is smooth on IR d \ 0, and has a singularity at the origin. Examples of such basis functions are the thin-plate splines and the multiquadrics, as well as other types of radial basis functions that are employed in Approximation Theory. The above approximation problem is well-understood in case the set of points ffi used for translating forms a lattice in IR d , and many optimal and quasi-optimal approximation schemes can already be found in the literature. In contrast, only a few, mostly specific, results are known for a set ffi of scattered points. The main objective of this paper is to provide a general tool for extending approximation schemes that use integer translates of a basis function to the non-uniform case. We introduce a single, relatively simple, conversion method that preserves the approximation orders provided by a large number of schemes presently in the literature (more precisely, to almost all "stationary schemes"). In anticipation of future introduction of new schemes for uniform grids, an effort is made to impose only a few mild conditions on the function , which still allow for a unified error analysis to hold. In the course of the discussion here, the recent results of [BuDL] on scattered center approximation are reproduced and improved upon.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 1,959 | test |
1-hop neighbor's text information: (1997) "Hierarchical mixture models in neurological transmission analysis," : Hierarchically structured mixture models are studied in the context of data analysis and inference on neural synaptic transmission characteristics in mammalian, and other, central nervous systems. Mixture structures arise due to uncertainties about the stochastic mechanisms governing the responses to electro-chemical stimulation of individual neuro-transmitter release sites at nerve junctions. Models attempt to capture scientific features such as the sensitivity of individual synaptic transmission sites to electro-chemical stimuli, and the extent of their electro-chemical responses when stimulated. This is done via suitably structured classes of prior distributions for parameters describing these features. Such priors may be structured to permit assessment of currently topical scientific hypotheses about fundamental neural function. Posterior analysis is implemented via stochastic simulation. Several data analyses are described to illustrate the approach, with resulting neurophysiological insights in some recently generated experimental contexts. Further developments and open questions, both neurophysiological and statistical, are noted. Research partially supported by the NSF under grants DMS-9024793, DMS-9305699 and DMS-9304250. This work represents part of a collaborative project with Dr Dennis A Turner, of Duke University Medical Center and Durham VA. Data was provided by Dr Turner and by Dr Howard V Wheal of Southampton University. A slightly revised version of this paper is published in the Journal of the American Statistical Association (vol 92, pp587-606), under the modified title Hierarchical Mixture Models in Neurological Transmission Analysis. The author is the recipient of the 1997 Mitchell Prize for "the Bayesian analysis of a substantive and concrete problem" based on the work reported in this paper.
Target text information: Mixture Models in the Exploration of Structure-Activity Relationships in Drug Design: We report on a study of mixture modeling problems arising in the assessment of chemical structure-activity relationships in drug design and discovery. Pharmaceutical research laboratories developing test compounds for screening synthesize many related candidate compounds by linking together collections of basic molecular building blocks, known as monomers. These compounds are tested for biological activity, feeding into screening for further analysis and drug design. The tests also provide data relating compound activity to chemical properties and aspects of the structure of associated monomers, and our focus here is studying such relationships as an aid to future monomer selection. The level of chemical activity of compounds is based on the geometry of chemical binding of test compounds to target binding sites on receptor compounds, but the screening tests are unable to identify binding configurations. Hence potentially critical covariate information is missing as a natural latent variable. Resulting statistical models are then mixed with respect to such missing information, so complicating data analysis and inference. This paper reports on a study of a two-monomer, two-binding site framework and associated data. We build structured mixture models that mix linear regression models, predicting chemical effectiveness, with respect to site-binding selection mechanisms. We discuss aspects of modeling and analysis, including problems and pitfalls, and describe results of analyses of a simulated and real data set. In modeling real data, we are led into critical model extensions that introduce hierarchical random effects components to adequately capture heterogeneities in both the site binding mechanisms and in the resulting levels of effectiveness of compounds once bound. Comments on current and potential future directions conclude the report.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 1 | Neural Networks | cora | 488 | val |
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: A formal analysis of the role of multi--point crossover in genetic algorithms. : On the basis of early theoretical and empirical studies, genetic algorithms have typically used 1 and 2-point crossover operators as the standard mechanisms for implementing recombination. However, there have been a number of recent studies, primarily empirical in nature, which have shown the benefits of crossover operators involving a higher number of crossover points. From a traditional theoretical point of view, the most surprising of these new results relate to uniform crossover, which involves on the average L / 2 crossover points for strings of length L. In this paper we extend the existing theoretical results in an attempt to provide a broader explanatory and predictive theory of the role of multi-point crossover in genetic algorithms. In particular, we extend the traditional disruption analysis to include two general forms of multi-point crossover: n-point crossover and uniform crossover. We also analyze two other aspects of multi-point crossover operators, namely, their recombination potential and exploratory power. The results of this analysis provide a much clearer view of the role of multi-point crossover in genetic algorithms. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested.
1-hop neighbor's text information: "Using DNA to solve NP-Complete Problems", : A strategy for using Genetic Algorithms (GAs) to solve NP-complete problems is presented. The key aspect of the approach taken is to exploit the observation that, although all NP-complete problems are equally difficult in a general computational sense, some have much better GA representations than others, leading to much more successful use of GAs on some NP-complete problems than on others. Since any NP-complete problem can be mapped into any other one in polynomial time, the strategy described here consists of identifying a canonical NP-complete problem on which GAs work well, and solving other NP-complete problems indirectly by mapping them onto the canonical problem. Initial empirical results are presented which support the claim that the Boolean Satisfiability Problem (SAT) is a GA-effective canonical problem, and that other NP-complete problems with poor GA representations can be solved efficiently by mapping them first onto SAT problems.
Target text information: "Using Problem Generators to Explore the Effects of Epistasis," : In this paper we develop an empirical methodology for studying the behavior of evolutionary algorithms based on problem generators. We then describe three generators that can be used to study the effects of epistasis on the performance of EAs. Finally, we illustrate the use of these ideas in a preliminary exploration of the effects of epistasis on simple GAs.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 3 | Genetic Algorithms | cora | 1,843 | test |
1-hop neighbor's text information: Brian "Case-based Planning with a High-Performance Parallel Memory," : In case-based planning (CBP), previously generated plans are stored as cases in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over planning from scratch (generative planning), thus offering a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory that requires significant domain engineering and complex memory indexing schemes to enable efficient case retrieval. In contrast, our CBP system, CaPER, is based on a massively parallel frame-based AI language and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large casebases can be used; and memory can be probed in numerous alternate ways, allowing more specific retrieval of stored plans that better fit a target problem with less adaptation. Preliminary version of an article appearing in IEEE Expert, February 1994, pp. 8-14. This paper is an extended version of [1].
Target text information: "Protein Sequencing Experiment Planning Using Analogy," : Experiment design and execution is a central activity in the natural sciences. The SeqER system provides a general architecture for the integration of automated planning techniques with a variety of domain knowledge in order to plan scientific experiments. These planning techniques include rule-based methods and, especially, the use of derivational analogy. Derivational analogy allows planning experience, captured as cases, to be reused. Analogy also allows the system to function in the absence of strong domain knowledge. Cases are efficiently and flexibly retrieved from a large casebase using massively parallel methods.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 1,734 | test |
1-hop neighbor's text information: "Exploration and model building in mobile robot domains", : I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for knowledge transfer across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low confidence regions. An efficient dynamic programming method is employed in background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real-time. It has been operating successfully in an office building environment for periods up to hours.
1-hop neighbor's text information: Case-based reasoning: Foundational issues, methodological variations, and system approaches. : 10 resources, Alan Schultz for installing a WWW server and providing knowledge on CGI scripts, and John Grefenstette for his comments on an earlier version of this paper.
1-hop neighbor's text information: Lazy acquisition of place knowledge. : In this paper we define the task of place learning and describe one approach to this problem. The framework represents distinct places using evidence grids, a probabilistic description of occupancy. Place recognition relies on case-based classification, augmented by a registration process to correct for translations. The learning mechanism is also similar to that in case-based systems, involving the simple storage of inferred evidence grids. Experimental studies with both physical and simulated robots suggest that this approach improves place recognition with experience, that it can handle significant sensor noise, and that it scales well to increasing numbers of places. Previous researchers have studied evidence grids and place learning, but they have not combined these two powerful concepts, nor have they used the experimental methods of machine learning to evaluate their methods' abilities.
Target text information: Lazy Acquisition of Place Knowledge: In this paper we define the task of place learning and describe one approach to this problem. Our framework represents distinct places as evidence grids, a probabilistic description of occupancy. Place recognition relies on nearest neighbor classification, augmented by a registration process to correct for translational differences between the two grids. The learning mechanism is lazy in that it involves the simple storage of inferred evidence grids. Experimental studies with physical and simulated robots suggest that this approach improves place recognition with experience, that it can handle significant sensor noise, that it benefits from improved quality in stored cases, and that it scales well to environments with many distinct places. Additional studies suggest that using historical information about the robot's path through the environment can actually reduce recognition accuracy. Previous researchers have studied evidence grids and place learning, but they have not combined these two powerful concepts, nor have they used systematic experimentation to evaluate their methods' abilities.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 2 | Case Based | cora | 146 | test |
1-hop neighbor's text information: Reinforcement Learning with Soft State Aggregation. : It is widely accepted that the use of more compact representations than lookup tables is crucial to scaling reinforcement learning (RL) algorithms to real-world problems. Unfortunately almost all of the theory of reinforcement learning assumes lookup table representations. In this paper we address the pressing issue of combining function approximation and RL, and present 1) a function approximator based on a simple extension to state aggregation (a commonly used form of compact representation), namely soft state aggregation, 2) a theory of convergence for RL with arbitrary, but fixed, soft state aggregation, 3) a novel intuitive understanding of the effect of state aggregation on online RL, and 4) a new heuristic adaptive state aggregation algorithm that finds improved compact representations by exploiting the non-discrete nature of soft state aggregation. Preliminary empirical results are also presented.
1-hop neighbor's text information: Reinforcement learning algorithm for partially observable Markov decision problems. : Increasing attention has been paid to reinforcement learning algorithms in recent years, partly due to successes in the theoretical analysis of their behavior in Markov environments. If the Markov assumption is removed, however, in general neither the algorithms nor the analyses continue to be usable. We propose and analyze a new learning algorithm to solve a certain class of non-Markov decision problems. Our algorithm applies to problems in which the environment is Markov, but the learner has restricted access to state information. The algorithm involves a Monte-Carlo policy evaluation combined with a policy improvement method that is similar to that of Markov decision problems and is guaranteed to converge to a local maximum. The algorithm operates in the space of stochastic policies, a space which can yield a policy that performs considerably better than any deterministic policy. Although the space of stochastic policies is continuous, even for a discrete action space, our algorithm is computationally tractable.
Target text information: Learning without state-estimation in Partially Observable Markovian Decision Processes, : Reinforcement learning (RL) algorithms provide a sound theoretical basis for building learning control architectures for embedded agents. Unfortunately all of the theory and much of the practice (see Barto et al., 1983, for an exception) of RL is limited to Markovian decision processes (MDPs). Many real-world decision tasks, however, are inherently non-Markovian, i.e., the state of the environment is only incompletely known to the learning agent. In this paper we consider only partially observable MDPs (POMDPs), a useful class of non-Markovian decision processes. Most previous approaches to such problems have combined computationally expensive state-estimation techniques with learning control. This paper investigates learning in POMDPs without resorting to any form of state estimation. We present results about what TD(0) and Q-learning will do when applied to POMDPs. It is shown that the conventional discounted RL framework is inadequate to deal with POMDPs. Finally we develop a new framework for learning without state-estimation in POMDPs by including stochastic policies in the search space, and by defining the value or utility of a distribution over states.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 5 | Reinforcement Learning | cora | 1,534 | test |
1-hop neighbor's text information: The problem with noise and small disjuncts. : Systems that learn from examples often create a disjunctive concept definition. The disjuncts in the concept definition which cover only a few training examples are referred to as small disjuncts. The problem with small disjuncts is that they are more error prone than large disjuncts, but may be necessary to achieve a high level of predictive accuracy [Holte, Acker, and Porter, 1989]. This paper extends previous work done on the problem of small disjuncts by taking noise into account. It investigates the assertion that it is hard to learn from noisy data because it is difficult to distinguish between noise and true exceptions. In the process of evaluating this assertion, insights are gained into the mechanisms by which noise affects learning. Two domains are investigated. The experimental results in this paper suggest that for both Shapiro's chess endgame domain [Shapiro, 1987] and for the Wisconsin breast cancer domain [Wolberg, 1990], the assertion is true, at least for low levels (5-10%) of class noise.
1-hop neighbor's text information: Concept learning and the problem of small disjuncts. :
1-hop neighbor's text information: Selection of relevant features in machine learning. : Rutgers University. Also appears as tech. report ML-TR-7. Minton, S. (1988). Quantitative results concerning the utility of explanation-based learning. In Proceedings of National Conference on Artificial Intelligence, pages 564-569. St. Paul, MN.
Target text information: Learning with Small Disjuncts, : Systems that learn from examples often create a disjunctive concept definition. The disjuncts in the concept definition which cover only a few training examples are referred to as small disjuncts. The problem with small disjuncts is that they are more error prone than large disjuncts, but may be necessary to achieve a high level of predictive accuracy [Holte, Acker, and Porter, 1989]. This paper extends previous work done on the problem of small disjuncts by investigating the reasons why small disjuncts are more error prone than large disjuncts, and evaluating the impact small disjuncts have on inductive learning. This paper shows that attribute noise, missing attributes, class noise, and training set size can each cause small disjuncts to be more error prone than large disjuncts. This paper also evaluates the impact that these factors have on learning with small disjuncts (i.e., on the error rate). It shows, for two artificial domains, that when low levels of attribute noise are applied only to the training set (the ability to learn the correct noise-free concept is being evaluated), small disjuncts are primarily responsible for making learning difficult.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 1,986 | test |
1-hop neighbor's text information: MacKay (1995). Probabilistic networks: new models and new methods. : In this paper I describe the implementation of a probabilistic regression model in BUGS. BUGS is a program that carries out Bayesian inference on statistical problems using a simulation technique known as Gibbs sampling. It is possible to implement surprisingly complex regression models in this environment. I demonstrate the simultaneous inference of an interpolant and an input-dependent noise level.
1-hop neighbor's text information: Bayesian training of backpropagation networks by the hybrid monte carlo method. : It is shown that Bayesian training of backpropagation neural networks can feasibly be performed by the "Hybrid Monte Carlo" method. This approach allows the true predictive distribution for a test case given a set of training cases to be approximated arbitrarily closely, in contrast to previous approaches which approximate the posterior weight distribution by a Gaussian. In this work, the Hybrid Monte Carlo method is implemented in conjunction with simulated annealing, in order to speed relaxation to a good region of parameter space. The method has been applied to a test problem, demonstrating that it can produce good predictions, as well as an indication of the uncertainty of these predictions. Appropriate weight scaling factors are found automatically. By applying known techniques for calculation of "free energy" differences, it should also be possible to compare the merits of different network architectures. The work described here should also be applicable to a wide variety of statistical models other than neural networks.
1-hop neighbor's text information: Consistency of Posterior Distributions for Neural Networks: In this paper we show that the posterior distribution for feedforward neural networks is asymptotically consistent. This paper extends earlier results on universal approximation properties of neural networks to the Bayesian setting. The proof of consistency embeds the problem in a density estimation problem, then uses bounds on the bracketing entropy to show that the posterior is consistent over Hellinger neighborhoods. It then relates this result back to the regression setting. We show consistency in both the setting of the number of hidden nodes growing with the sample size, and in the case where the number of hidden nodes is treated as a parameter. Thus we provide a theoretical justification for using neural networks for nonparametric regression in a Bayesian framework.
Target text information: Bayesian Methods for Adaptive Models. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
| 4 | Theory | cora | 1,369 | test |