Dataset columns:
  content: string (lengths 633 to 9.91k)
  label: string class (7 values)
  category: string class (7 values)
  dataset: string class (1 value)
  node_id: int64 (range 0 to 2.71k)
  split: string class (3 values)
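The rows below follow this schema, one record per node. As a quick way to work with a dump like this, here is a minimal loading-and-inspection sketch in Python; it assumes the records have been exported to a local JSON Lines file with exactly these field names (the file name is a placeholder), which is not something the table itself specifies.

```python
import json
from collections import Counter

# Minimal loader for records with the schema above (content, label, category,
# dataset, node_id, split). Assumes a local JSON Lines export, one record per
# line; the file name below is a placeholder, not part of the dataset.
def load_rows(path="cora_node_classification.jsonl"):
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                rows.append(json.loads(line))
    return rows

if __name__ == "__main__":
    rows = load_rows()
    # Quick sanity checks against the schema: 7 label/category values,
    # a single dataset value (cora), and 3 split values.
    print("labels:  ", Counter(r["label"] for r in rows))
    print("splits:  ", Counter(r["split"] for r in rows))
    print("datasets:", Counter(r["dataset"] for r in rows))
    # Peek at one prompt and its gold category.
    print(rows[0]["content"][:200], "->", rows[0]["category"])
```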
1-hop neighbor's text information: A reduced multipipeline machine description that preserves scheduling constraints. : High performance compilers increasingly rely on accurate modeling of the machine resources to efficiently exploit the instruction level parallelism of an application. In this paper, we propose a reduced machine description that results in faster detection of resource contentions while preserving the scheduling constraints present in the original machine description. The proposed approach reduces a machine description in an automated, error-free, and efficient fashion. Moreover, it fully supports schedulers that backtrack and process operations in arbitrary order. Reduced descriptions for the DEC Alpha 21064, MIPS R3000/R3010, and Cydra 5 result in 4 to 7 times faster detection of resource contentions and require 22 to 90% of the memory storage used by the original machine descriptions. Target text information: Efficient instruction scheduling using finite state automata. : Modern compilers employ sophisticated instruction scheduling techniques to shorten the number of cycles taken to execute the instruction stream. In addition to correctness, the instruction scheduler must also ensure that hardware resources are not oversubscribed in any cycle. For a contemporary processor implementation with multiple pipelines and complex resource usage restrictions, this is not an easy task. The complexity involved in reasoning about such resource hazards is one of the primary factors that constrain the instruction scheduler from performing many aggressive transformations. For example, the ability to do code motion or instruction replacement in the middle of an already scheduled block would be a very powerful transformation if it could be performed efficiently. We extend a technique for detecting pipeline resource hazards based on finite state automata, to support the efficient implementation of such transformations that are essential for aggressive instruction scheduling beyond basic blocks. Although similar code transformations can be supported by other schemes such as reservation tables, our scheme is superior in terms of space and time. A global instruction scheduler that used these techniques was implemented in the KSR compiler. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 0 | category: Rule Learning | dataset: cora | node_id: 2168 | split: test
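Each record pairs a prompt like the one above with its gold label, so a dump like this can be treated directly as a 7-way text-classification benchmark. The sketch below shows one simple, non-graph baseline: TF-IDF features over the content field plus logistic regression, trained on the train split and scored on the test split. It assumes rows loaded as dicts (as in the earlier sketch) and uses scikit-learn, which is an assumption of this illustration rather than anything the dataset prescribes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def bag_of_words_baseline(rows):
    # Respect the dataset's own split column: fit on 'train', score on 'test'.
    train = [r for r in rows if r["split"] == "train"]
    test = [r for r in rows if r["split"] == "test"]

    vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
    X_train = vectorizer.fit_transform([r["content"] for r in train])
    X_test = vectorizer.transform([r["content"] for r in test])

    # 'label' is stored as a class value ("0".."6"); sklearn accepts it as-is.
    classifier = LogisticRegression(max_iter=1000)
    classifier.fit(X_train, [r["label"] for r in train])

    predictions = classifier.predict(X_test)
    return accuracy_score([r["label"] for r in test], predictions)
```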
1-hop neighbor's text information: "Adding Learning to the Cellular development of Neural Networks: Evolution and the Baldwin Effect," : This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms. Direct Encoding encodes the weights for an a priori fixed neural network architecture. Cellular Encoding encodes both weights and the architecture of the neural network. In previous studies, Direct Encoding and Cellular Encoding have been used to create neural networks for balancing 1 and 2 poles attached to a cart on a fixed track. The poles are balanced by a controller that pushes the cart to the left or the right. In some cases velocity information about the pole and cart is provided as an input; in other cases the network must learn to balance a single pole without velocity information. A careful study of the behavior of these systems suggests that it is possible to balance a single pole with velocity information as an input and without learning to compute the velocity. A new fitness function is introduced that forces the neural network to compute the velocity. By using this new fitness function and tuning the syntactic constraints used with cellular encoding, we achieve a tenfold speedup over our previous study and solve a more difficult problem: balancing two poles when no information about the velocity is provided as input. 1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. : 1-hop neighbor's text information: "Evolution of a time-optimal fly-to controller circuit using genetic programming," : Genetic programming is an automatic programming technique that evolves computer programs to solve, or approximately solve, problems. This paper presents two examples in which genetic programming creates a computer program for controlling a robot so that the robot moves to a specified destination point in minimal time. In the first approach, genetic programming evolves a computer program composed of ordinary a r i t h m e t i c o p e r a t i o n s a n d Target text information: "Automated WYSIWYG Design of both the topology and component val 223 ues of electrical circuits using genetic programming," : Genetic programming was used to evolve both the topology and sizing (numerical values) for each component of a low-distortion, low I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 1237 | split: val
1-hop neighbor's text information: Polychotomous regression. : Technical Report No. 319 April 7, 1997 University of Washington Department of Statistics Seattle, Washington 98195-4322 Abstract Kooperberg, Bose, and Stone (1997) introduced polyclass, a methodology that uses adaptively selected linear splines and their tensor products to model conditional class probabilities. The authors attempted to develop a methodology that would work well on small and moderate size problems and would scale up to large problems. However, the version of polyclass that was developed for large problems was impractical in that it required two months of CPU time to apply it to a large data set. A modification to this methodology involving the use of the stochastic gradient (on-line) method in fitting polyclass models to given sets of basis functions is developed here that makes the methodology applicable to large data sets. In particular, it is successfully applied to a phoneme recognition problem involving 45 phonemes, 81 features, 150,000 cases in the training sample, 1000 basis functions, and 44,000 unknown parameters. Comparisons with neural networks are made both on the original problem and on a three-vowel subproblem. Target text information: Trees and Splines in Survival Analysis: Technical Report No. 275 Revised March 30, 1995 University of Washington Department of Statistics Seattle, Washington 98195 Abstract During the past few years several nonparametric alternatives to the Cox proportional hazards model have appeared in the literature. These methods extend techniques that are well known from regression analysis to the analysis of censored survival data. In this paper we discuss methods based on (partition) trees and (polynomial) splines, analyze two datasets using both Survival Trees[1] and HARE[2], and compare the strengths and weaknesses of the two methods. One of the strengths of HARE is that its model fitting procedure has an implicit check for proportionality of the underlying hazards model. It also provides an explicit model for the conditional hazards function, which makes it very convenient to obtain graphical summaries. On the other hand, the tree-based methods automatically partition a dataset into groups of cases that are similar in survival history. Results obtained by survival trees and HARE are often complementary. Trees and splines in survival analysis should provide the data analyst with two useful tools when analyzing survival data. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 416 | split: test
1-hop neighbor's text information: Abduction as belief revision. : We propose a model of abduction based on the revision of the epistemic state of an agent. Explanations must be sufficient to induce belief in the sentence to be explained (for instance, some observation), or ensure its consistency with other beliefs, in a manner that adequately accounts for factual and hypothetical sentences. Our model will generate explanations that nonmonotonically predict an observation, thus generalizing most current accounts, which require some deductive relationship between explanation and observation. It also provides a natural preference ordering on explanations, defined in terms of normality or plausibility. To illustrate the generality of our approach, we reconstruct two of the key paradigms for model-based diagnosis, abductive and consistency-based diagnosis, within our framework. This reconstruction provides an alternative semantics for both and extends these systems to accommodate our predictive explanations and semantic preferences on explanations. It also illustrates how more general information can be incorporated in a principled manner. * Some parts of this paper appeared in preliminary form as Abduction as Belief Revision: A Model of Preferred Explanations, Proc. of Eleventh National Conf. on Artificial Intelligence (AAAI-93), Washington, DC, pp.642-648 (1993). 1-hop neighbor's text information: Halpern (1997). Defining explanation in probabilistic systems. : As probabilistic systems gain popularity and are coming into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature, one due to Gärdenfors and one due to Pearl, and show that both suffer from significant problems. We propose an approach to defining a notion of better explanation that combines some of the features of both together with more recent work by Pearl and others on causality. Target text information: Explaining Predictions in Bayesian Networks and Influence Diagrams: As Bayesian Networks and Influence Diagrams are being used more and more widely, the importance of an efficient explanation mechanism becomes more apparent. We focus on predictive explanations, the ones designed to explain predictions and recommendations of probabilistic systems. We analyze the issues involved in defining, computing and evaluating such explanations and present an algorithm to compute them. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 6 | category: Probabilistic Methods | dataset: cora | node_id: 472 | split: test
1-hop neighbor's text information: Collective memory search. : 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Strongly typed genetic programming in evolving cooperation strategies. : Target text information: Voting for Schemata: The schema theorem states that implicit parallel search is behind the power of the genetic algorithm. We contend that chromosomes can vote, proportionate to their fitness, for candidate schemata. We maintain a population of binary strings and ternary schemata. The string population not only works on solving its problem domain, but it supplies fitness for the schema population, which indirectly can solve the original problem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 841 | split: train
1-hop neighbor's text information: Perceptual Development and Learning: From Behavioral, Neurophysiological and Morphological Evidence to Computational Models. : An intelligent system has to be capable of adapting to a constantly changing environment. It therefore ought to be capable of learning from its perceptual interactions with its surroundings. This requires a certain amount of plasticity in its structure. Any attempt to model the perceptual capabilities of a living system or, for that matter, to construct a synthetic system of comparable abilities, must therefore account for such plasticity through a variety of developmental and learning mechanisms. This paper examines some results from neuroanatomical, morphological, as well as behavioral studies of the development of visual perception; integrates them into a computational framework; and suggests several interesting experiments with computational models that can yield insights into the development of visual perception. In order to understand the development of information processing structures in the brain, one needs knowledge of changes it undergoes from birth to maturity in the context of a normal environment. However, knowledge of its development in aberrant settings is also extremely useful, because it reveals the extent to which the development is a function of environmental experience (as opposed to genetically determined pre-wiring). Accordingly, we consider development of the visual system under both normal and restricted rearing conditions. The role of experience in the early development of the sensory systems in general, and the visual system in particular, has been widely studied through a variety of experiments involving carefully controlled manipulation of the environment presented to an animal. Extensive reviews of such results can be found in (Mitchell, 1984; Movshon, 1981; Hirsch, 1986; Boothe, 1986; Singer, 1986). Some examples of manipulation of visual experience are total pattern deprivation (e.g., dark rearing), selective deprivation of a certain class of patterns (e.g., vertical lines), monocular deprivation in animals with binocular vision, etc. Extensive studies involving behavioral deficits resulting from total visual pattern deprivation indicate that the deficits arise primarily as a result of impairment of visual information processing in the brain. The results of these experiments suggest specific developmental or learning mechanisms that may be operating at various stages of development, and at different levels in the system. We will discuss some of these mechanisms. This is a working draft. All comments, especially constructive criticism and suggestions for improvement, will be appreciated. I am indebted to Prof. James Dannemiller for introducing me to some of the literature in infant development; to Prof. Leonard Uhr for his helpful comments on an initial draft of the paper; and to numerous researchers whose experimental work has provided the basis for the model outlined in this paper. This research was partially supported by grants from the National Science Foundation and the University of Wisconsin Graduate School. 1-hop neighbor's text information: Some Biases For Efficient Learning of Spatial, Temporal, and Spatio-Temporal Patterns. : This paper introduces and explores some representational biases for efficient learning of spatial, temporal, or spatio-temporal patterns in connectionist networks (CN), massively parallel networks of simple computing elements. 
It examines learning mechanisms that constructively build up network structures that encode information from environmental stimuli at successively higher resolutions as needed for the tasks (e.g., perceptual recognition) that the network has to perform. Some simple examples are presented to illustrate the basic structures and processes used in such networks to ensure the parsimony of learned representations by guiding the system to focus its efforts at the minimal adequate resolution. Several extensions of the basic algorithm for efficient learning using multi-resolution representations of spatial, temporal, or spatio-temporal patterns are discussed. 1-hop neighbor's text information: Experiments with the cascade-correlation algorithm. Microcomputer Applications, : Technical Report # 91-16 July 1991; Revised August 1991 Target text information: Brain-Structured Networks That Perceive and Learn. : This paper specifies the main features of Brain-like, Neuronal, and Connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of such structures. The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g., houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation-discovery (feedback-guided growth of new links and nodes), subject to brain-like constraints (e.g., local receptive fields, global convergence-divergence). The information processing transforms discovered through generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g., letters of the alphabet, cups, apples, bananas) through feedback-guided generation and reweighting. These show large improvements over networks that either lack brain-like structure and/or learn by reweighting of links alone. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 1017 | split: train
1-hop neighbor's text information: Reducing disruption of superior building blocks in genetic algorithms. : 1-hop neighbor's text information: A User Friendly Workbench for Order-Based Genetic Algorithm Research, : Over the years there has been several packages developed that provide a workbench for genetic algorithm (GA) research. Most of these packages use the generational model inspired by GENESIS. A few have adopted the steady-state model used in Genitor. Unfortunately, they have some deficiencies when working with order-based problems such as packing, routing, and scheduling. This paper describes LibGA, which was developed specifically for order-based problems, but which also works easily with other kinds of problems. It offers an easy to use `user-friendly' interface and allows comparisons to be made between both generational and steady-state genetic algorithms for a particular problem. It includes a variety of genetic operators for reproduction, crossover, and mutation. LibGA makes it easy to use these operators in new ways for particular applications or to develop and include new operators. Finally, it offers the unique new feature of a dynamic generation gap. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. Target text information: A parallel island model genetic algorithm for the multiprocessor scheduling problem. : In this paper we compare the performance of a serial and a parallel island model Genetic Algorithm for solving the Multiprocessor Scheduling Problem. We show results using fixed and scaled problems both using and not using migration. We have found that in addition to providing a speedup through the use of parallel processing, the parallel island model GA with migration finds better quality solutions than the serial GA. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2232 | split: test
1-hop neighbor's text information: A hardware mechanism for dynamic reordering of memory references. : 1-hop neighbor's text information: The Expandable Split Window Paradigm for Exploiting Fine-Grain Parallelism, : We propose a new processing paradigm, called the Expandable Split Window (ESW) paradigm, for exploiting fine-grain parallelism. This paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. The basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. This processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von Neumann architecture. We also present an implementation of the Expandable Split Window execution model, and preliminary performance results. Target text information: Task selection for a Multiscalar processor. : The Multiscalar architecture advocates a distributed processor organization and task-level speculation to exploit high degrees of instruction level parallelism (ILP) in sequential programs without impeding improvements in clock speeds. The main goal of this paper is to understand the key implications of the architectural features of distributed processor organization and task-level speculation for compiler task selection from the point of view of performance. We identify the fundamental performance issues to be: control flow speculation, data communication, data dependence speculation, load imbalance, and task overhead. We show that these issues are intimately related to a few key characteristics of tasks: task size, inter-task control flow, and inter-task data dependence. We describe compiler heuristics to select tasks with favorable characteristics. We report experimental results to show that the heuristics are successful in boosting overall performance by establishing larger ILP windows. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 0 | category: Rule Learning | dataset: cora | node_id: 2622 | split: train
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. 1-hop neighbor's text information: On estimation of a probability density function and mode. : To apply the algorithm for classification we assign each class a separate set of codebook Gaussians. Each set is only trained with patterns from a single class. After having trained the codebook Gaussians, each set provides an estimate of the probability function of one class; just as with Parzen window estimation, we take as the estimate of the pattern distribution the average of all Gaussians in the set. Classification of a pattern may now be done by calculating the probability of each class at the respective sample point, and assigning to the pattern the class with the highest probability. Hence the whole codebook plays a role in the classification of patterns. This is not the case with regular classification schemes using codebooks. We have tested the classification scheme on several classification tasks including the two spiral problem. We compared our algorithm to various other classification algorithms and it came out second; the best algorithm for the applications is the Parzen window estimation. However, the computing time and memory for Parzen window estimation are excessive when compared to our algorithm, and hence, in practical situations, our algorithm is to be preferred. We have developed a fast algorithm which combines attractive properties of both Parzen window estimation and vector quantization. The scale parameter is tuned adaptively and, therefore, is not set in an ad hoc manner. It allows a classification strategy in which all the codebook vectors are taken into account. This yields better results than the standard vector quantization techniques. An interesting topic for further research is to use radially non-symmetric Gaussians. Target text information: Interactive Segmentation of Three-dimensional Medical Images (Extended abstract): I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 654 | split: test
1-hop neighbor's text information: A note on learning from multiple-instance examples. : We describe a simple reduction from the problem of PAC-learning from multiple-instance examples to that of PAC-learning with one-sided random classification noise. Thus, all concept classes learnable with one-sided noise, which includes all concepts learnable in the usual 2-sided random noise model plus others such as the parity function, are learnable from multiple-instance examples. We also describe a more efficient (and somewhat technically more involved) reduction to the Statistical-Query model that results in a polynomial-time algorithm for learning axis-parallel rectangles with sample complexity Õ(d²r/ε²), saving roughly a factor of r over the results of Auer et al. (1997). 1-hop neighbor's text information: PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. : We describe a polynomial-time algorithm for learning axis-aligned rectangles in Q^d with respect to product distributions from multiple-instance examples in the PAC model. Here, each example consists of n elements of Q^d together with a label indicating whether any of the n points is in the rectangle to be learned. We assume that there is an unknown product distribution D over Q^d such that all instances are independently drawn according to D. The accuracy of a hypothesis is measured by the probability that it would incorrectly predict whether one of n more points drawn from D was in the rectangle to be learned. Our algorithm achieves accuracy ε with probability 1 - δ in 1-hop neighbor's text information: Generalizing from case studies: A case study. : Most empirical evaluations of machine learning algorithms are case studies: evaluations of multiple algorithms on multiple databases. Authors of case studies implicitly or explicitly hypothesize that the pattern of their results, which often suggests that one algorithm performs significantly better than others, is not limited to the small number of databases investigated, but instead holds for some general class of learning problems. However, these hypotheses are rarely supported with additional evidence, which leaves them suspect. This paper describes an empirical method for generalizing results from case studies and an example application. This method yields rules describing when some algorithms significantly outperform others on some dependent measures. Advantages for generalizing from case studies and limitations of this particular approach are also described. Target text information: Solving the multiple-instance problem with axis-parallel rectangles. : The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple-instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk-odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms. 
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 2410 | split: test
1-hop neighbor's text information: From Design Experiences to Generic Mechanisms: Model-Based Learning in Analogical Design. : Analogical reasoning plays an important role in design. In particular, cross-domain analogies appear to be important in innovative and creative design. However, making cross-domain analogies is hard and often requires abstractions common to the source and target domains. Recent work in case-based design suggests that generic mechanisms are one type of abstractions useful in adapting past designs. However, one important yet unexplored issue is where these generic mechanisms come from. We hypothesize that they are acquired incrementally from design experiences in familiar domains by generalization over patterns of regularity. Three important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. In this paper, we describe how structure-behavior-function models of designs in a familiar domain provide the content, and together with the problem-solving context in which learning occurs, also provide the constraints for learning generic mechanisms from design experiences. In particular, we describe the model-based learning method with a scenario of learning of feedback mechanism. 1-hop neighbor's text information: Model-Based Learning of Structural Indices to Design Cases. : A major issue in case-basedsystems is retrieving the appropriate cases from memory to solve a given problem. This implies that a case should be indexed appropriately when stored in memory. A case-based system, being dynamic in that it stores cases for reuse, needs to learn indices for the new knowledge as the system designers cannot envision that knowledge. Irrespective of the type of indexing (structural or functional), a hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary and learning the right level of generalization. In this paper we show how structure-behavior-function (SBF) models help in learning structural indices to design cases in the domain of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design provides both the vocabulary for structural indexing of design cases and the inductive biases for index generalization. We further discuss how model-based learning can be integrated with similarity-based learning (that uses prior design cases) for learning the level of index generalization. 1-hop neighbor's text information: Innovation in Analogical Design: A Model-Based Approach. : Target text information: Discovery of Physical Principles from Design Experiences. : One method for making analogies is to access and instantiate abstract domain principles, and one method for acquiring knowledge of abstract principles is to discover them from experience. We view generalization over experiences in the absence of any prior knowledge of the target principle as the task of hypothesis formation, a subtask of discovery. Also, we view the use of the hypothesized principles for analogical design as the task of hypothesis testing, another subtask of discovery. In this paper, we focus on discovery of physical principles by generalization over design experiences in the domain of physical devices. 
Some important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. We represent a reasoner's comprehension of specific designs in the form of structure-behavior-function (SBF) models. An SBF model provides a functional and causal explanation of the working of a device. We represent domain principles as device-independent behavior-function (BF) models. We show that (i) the function of a device determines what to generalize from its SBF model, (ii) the SBF model itself suggests how far to generalize, and (iii) the typology of functions indicates what method to use. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 2 | category: Case Based | dataset: cora | node_id: 1283 | split: test
1-hop neighbor's text information: The Structure-Mapping Engine: Algorithms and Examples. : This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a "tool kit" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O(N²). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp 1-63. For more information, please contact [email protected] Target text information: Modeling Analogical Problem Solving in a Production System Architecture: This research is supported by a National Science Foundation Fellowship awarded to Dario Salvucci and Office of Naval Research grant N00014-96-1-0491 awarded to John Anderson. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Office of Naval Research, or the United States government. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 2 | category: Case Based | dataset: cora | node_id: 343 | split: test
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Extended selection mechanisms in genetic algorithms. : 1-hop neighbor's text information: (1991) Global optimization by means of distributed evolution Genetic Algorithms in Engineering and Computer Science Editor J. : Genetic Algorithms (GAs) are powerful heuristic search strategies based upon a simple model of organic evolution. The basic working scheme of GAs as developed by Holland [Hol75] is described within this paper in a formal way, and extensions based upon the second-level learning principle for strategy parameters as introduced in Evolution Strategies (ESs) are proposed. First experimental results concerning this extension of GAs are also reported. Target text information: (1992) Genetic self-learning. : Evolutionary Algorithms are direct random search algorithms which imitate the principles of natural evolution as a method to solve adaptation (learning) tasks in general. As such they have several features in common which can be observed on the genetic and phenotypic level of living species. In this paper the algorithms' capability of adaptation or learning in a wider sense is demonstrated, and it is focused on Genetic Algorithms to illustrate the learning process on the population level (first level learning), and on Evolution Strategies to demonstrate the learning process on the meta-level of strategy parameters (second level learning). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2537 | split: test
1-hop neighbor's text information: On the Virtues of Parameterized Uniform Crossover, : Traditionally, genetic algorithms have relied upon 1 and 2-point crossover operators. Many recent empirical studies, however, have shown the benefits of higher numbers of crossover points. Some of the most intriguing recent work has focused on uniform crossover, which involves on the average L/2 crossover points for strings of length L. Theoretical results suggest that, from the view of hyperplane sampling disruption, uniform crossover has few redeeming features. However, a growing body of experimental evidence suggests otherwise. In this paper, we attempt to reconcile these opposing views of uniform crossover and present a framework for understanding its virtues. 1-hop neighbor's text information: Genetic algorithms for vertex splitting in DAGs. : 1 This paper has been submitted to the 5th International Conference on Genetic Algorithms 2 electronic mail address: [email protected] 3 electronic mail address: [email protected] 1-hop neighbor's text information: A formal analysis of the role of multi-point crossover in genetic algorithms. : On the basis of early theoretical and empirical studies, genetic algorithms have typically used 1 and 2-point crossover operators as the standard mechanisms for implementing recombination. However, there have been a number of recent studies, primarily empirical in nature, which have shown the benefits of crossover operators involving a higher number of crossover points. From a traditional theoretical point of view, the most surprising of these new results relate to uniform crossover, which involves on the average L / 2 crossover points for strings of length L. In this paper we extend the existing theoretical results in an attempt to provide a broader explanatory and predictive theory of the role of multi-point crossover in genetic algorithms. In particular, we extend the traditional disruption analysis to include two general forms of multi-point crossover: n-point crossover and uniform crossover. We also analyze two other aspects of multi-point crossover operators, namely, their recombination potential and exploratory power. The results of this analysis provide a much clearer view of the role of multi-point crossover in genetic algorithms. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested. Target text information: A study of crossover operators in genetic programming. : Holland's analysis of the sources of power of genetic algorithms has served as guidance for the applications of genetic algorithms for more than 15 years. The technique of applying a recombination operator (crossover) to a population of individuals is a key to that power. Nevertheless, there have been a number of contradictory results concerning crossover operators with respect to overall performance. Recently, for example, genetic algorithms were used to design neural network modules and their control circuits. In these studies, a genetic algorithm without crossover outperformed a genetic algorithm with crossover. This report re-examines these studies, and concludes that the results were caused by a small population size. New results are presented that illustrate the effectiveness of crossover when the population size is larger. From a performance view, the results indicate that better neural networks can be evolved in a shorter time if the genetic algorithm uses crossover. 
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 2699 | split: test
1-hop neighbor's text information: Parallel gradient distribution in unconstrained optimization. : A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to use simultaneously a different algorithm, such as a descent, Newton, quasi-Newton or a conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on a portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the k points found by the k processors. For nonconvex, as well as convex, differentiable functions, the best point found by the k processors is taken, or any better point. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary for the nonconvex case, and is a global solution for the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicates a speedup of the order of the number of processors employed. Target text information: Serial and parallel multicategory discrimination. : A parallel algorithm is proposed for a fundamental problem of machine learning, that of multicategory discrimination. The algorithm is based on minimizing an error function associated with a set of highly structured linear inequalities. These inequalities characterize piecewise-linear separation of k sets by the maximum of k affine functions. The error function has a Lipschitz continuous gradient that allows the use of fast serial and parallel unconstrained minimization algorithms. A serial quasi-Newton algorithm is considerably faster than previous linear programming formulations. A parallel gradient distribution algorithm is used to parallelize the error-minimization problem. Preliminary computational results are given for both a DECstation I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 1 | category: Neural Networks | dataset: cora | node_id: 2138 | split: test
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Evolving networks: Using the genetic algorithm with connectionist learning. : 1-hop neighbor's text information: networks that develop their own teaching input. : Backpropagation learning (Rumelhart, Hinton and Williams, 1986) is a useful research tool but it has a number of undesiderable features such as having the experimenter decide from outside what should be learned. We describe a number of simulations of neural networks that internally generate their own teaching input. The networks generate the teaching input by trasforming the network input through connection weights that are evolved using a form of genetic algorithm. What results is an innate (evolved) capacity not to behave efficiently in an environment but to learn to behave efficiently. The analysis of what these networks evolve to learn shows some interesting results. Target text information: Modeling the Evolution of Motivation: In order for learning to improve the adaptiveness of an animal's behavior and thus direct evolution in the way Baldwin suggested, the learning mechanism must incorporate an innate evaluation of how the animal's actions influence its reproductive fitness. For example, many circumstances that damage an animal, or otherwise reduce its fitness are painful and tend to be avoided. We refer to the mechanism by which an animal evaluates the fitness consequences of its actions as a "motivation system," and argue that such a system must evolve along with the behaviors it evaluates. We describe simulations of the evolution of populations of agents instantiating a number of different architectures for generating action and learning, in worlds of differing complexity. We find that in some cases, members of the populations evolve motivation systems that are accurate enough to direct learning so as to increase the fitness of the actions the agents perform. Furthermore, the motivation systems tend to incorporate systematic distortions in their representations of the worlds they inhabit; these distortions can increase the adaptiveness of the behavior generated. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 278 | split: val
1-hop neighbor's text information: Empirical Analysis of the General Utility Problem in Machine Learning, : The overfit problem in inductive learning and the utility problem in speedup learning both describe a common behavior of machine learning methods: the eventual degradation of performance due to increasing amounts of learned knowledge. Plotting the performance of the changing knowledge during execution of a learning method (the performance response) reveals similar curves for several methods. The performance response generally indicates an increase to a single peak followed by a more gradual decrease in performance. The similarity in performance responses suggests a model relating performance to the amount of learned knowledge. This paper provides empirical evidence for the existence of a general model by plotting the performance responses of several learning programs. Formal models of the performance response are also discussed. These models can be used to control the amount of learning and avoid degradation of performance. 1-hop neighbor's text information: A Statistical Approach to Solving the EBL Utility Problem, : Many "learning from experience" systems use information extracted from problem solving experiences to modify a performance element PE, forming a new element PE′ that can solve these and similar problems more efficiently. However, as transformations that improve performance on one set of problems can degrade performance on other sets, the new PE′ is not always better than the original PE; this depends on the distribution of problems. We therefore seek the performance element whose expected performance, over this distribution, is optimal. Unfortunately, the actual distribution, which is needed to determine which element is optimal, is usually not known. Moreover, the task of finding the optimal element, even knowing the distribution, is intractable for most interesting spaces of elements. This paper presents a method, palo, that side-steps these problems by using a set of samples to estimate the unknown distribution, and by using a set of transformations to hill-climb to a local optimum. This process is based on a mathematically rigorous form of utility analysis: in particular, it uses statistical techniques to determine whether the result of a proposed transformation will be better than the original system. We also present an efficient way of implementing this learning system in the context of a general class of performance elements, and include empirical evidence that this approach can work effectively. * Much of this work was performed at the University of Toronto, where it was supported by the Institute for Robotics and Intelligent Systems and by an operating grant from the National Science and Engineering Research Council of Canada. We also gratefully acknowledge receiving many helpful comments from William Cohen, Dave Mitchell, Dale Schuurmans and the anonymous referees. 1-hop neighbor's text information: Utilization Filtering: a method for reducing the inherent harmfulness of deductively learned knowledge, : This paper highlights a phenomenon that causes deductively learned knowledge to be harmful when used for problem solving. The problem occurs when deductive problem solvers encounter a failure branch of the search tree. 
The backtracking mechanism of such problem solvers will force the program to traverse the whole subtree thus visiting many nodes twice - once by using the deductively learned rule and once by using the rules that generated the learned rule in the first place. We suggest an approach called utilization filtering to solve that problem. Learners that use this approach submit to the problem solver a filter function together with the knowledge that was acquired. The function decides for each problem whether to use the learned knowledge and what part of it to use. We have tested the idea in the context of a lemma learning system, where the filter uses the probability of a subgoal failing to decide whether to turn lemma usage off. Experiments show an improvement of performance by a factor of 3. This paper is concerned with a particular type of harmful redundancy that occurs in deductive problem solvers that employ backtracking in their search procedure, and use deductively learned knowledge to accelerate the search. The problem is that in failure branches of the search tree, the backtracking mechanism of the problem solver forces exploration of the whole subtree. Thus, the search procedure will visit many states twice - once by using the deductively learned rule, and once by using the search path that produced the rule in the first place. Target text information: Simple selection of utile control rules in speedup learning. : Many recent approaches to avoiding the utility problem in speedup learning rely on sophisticated utility measures and significant numbers of training data to accurately estimate the utility of control knowledge. Empirical results presented here and elsewhere indicate that a simple selection strategy of retaining all control rules derived from a training problem explanation quickly defines an efficient set of control knowledge from few training problems. This simple selection strategy provides a low-cost alternative to example-intensive approaches for improving the speed of a problem solver. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 2 | category: Case Based | dataset: cora | node_id: 1100 | split: val
1-hop neighbor's text information: Learning probabilistic automata with variable memory length. : We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Finite Suffix Automata. The learning algorithm is motivated by real applications in man-machine interaction such as handwriting and speech recognition. Conventionally used fixed memory Markov and hidden Markov models have either severe practical or theoretical drawbacks. Though general hardness results are known for learning distributions generated by sources with similar structure, we prove that our algorithm can indeed efficiently learn distributions generated by our more restricted sources. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made small with high confidence in polynomial time and sample complexity. We demonstrate the applicability of our algorithm by learning the structure of natural English text and using our hypothesis for the correction of corrupted text. 1-hop neighbor's text information: Using Communication to Reduce Locality in Distributed Multi-Agent Learning. : This paper attempts to bridge the fields of machine learning, robotics, and distributed AI. It discusses the use of communication in reducing the undesirable effects of locality in fully distributed multi-agent systems with multiple agents/robots learning in parallel while interacting with each other. Two key problems, hidden state and credit assignment, are addressed by applying local undirected broadcast communication in a dual role: as sensing and as reinforcement. The methodology is demonstrated on two multi-robot learning experiments. The first describes learning a tightly-coupled coordination task with two robots, the second a loosely-coupled task with four robots learning social rules. Communication is used to share sensory data to overcome hidden state and reinforcement to overcome the credit assignment problem between the agents and to bridge the gap between local and global payoff. 1-hop neighbor's text information: "The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State Spaces," : Parti-game is a new algorithm for learning feasible trajectories to goal regions in high dimensional continuous state-spaces. In high dimensions it is essential that learning does not plan uniformly over a state-space. Parti-game maintains a decision-tree partitioning of state-space and applies techniques from game-theory and computational geometry to efficiently and adaptively concentrate high resolution only on critical areas. The current version of the algorithm is designed to find feasible paths or trajectories to goal regions in high dimensional spaces. Future versions will be designed to find a solution that optimizes a real-valued criterion. Many simulated problems have been tested, ranging from two-dimensional to nine-dimensional state-spaces, including mazes, path planning, non-linear dynamics, and planar snake robots in restricted spaces. In all cases, a good solution is found in less than ten trials and a few minutes. Target text information: Learning to use selective attention and short-term memory in sequential tasks. 
: This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or memory-based) learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates only task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [ Ron et al., 1994 ] , Parti-game [ Moore, 1993 ] , G-algorithm [ Chapman and Kaelbling, 1991 ] , and Variable Resolution Dynamic Programming [ Moore, 1991 ] . It builds on Utile Suffix Memory [ McCallum, 1995c ] , which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with simulated eye movements. The environment has hidden state, time pressure, stochasticity, over 21,000 world states and over 2,500 percepts. From this environment and sensory system, the agent uses a utile distinction test to build a tree that represents depth-three memory where necessary, and has just 143 internal states, far fewer than the 2500^3 states that would have resulted from a fixed-sized history-window approach. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,546
test
1-hop neighbor's text information: Enhancing model-based learning for its application in robot navigation. : Target text information: Towards concept formation grounded on perception and action of a mobile robot. : The recognition of objects and, hence, their descriptions must be grounded in the environment in terms of sensor data. We argue why the concepts used to classify perceived objects, and used to perform actions on these objects, should integrate action-oriented perceptual features and perception-oriented action features. We present a grounded symbolic representation for these concepts. Moreover, the concepts should be learned. We show a logic-oriented approach to learning grounded concepts. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,381
val
1-hop neighbor's text information: Inferential Theory of Learning: Developing Foundations for Multistrategy Learning, in Machine Learning: A Multistrategy Approach, Vol. IV, R.S. : The development of multistrategy learning systems should be based on a clear understanding of the roles and the applicability conditions of different learning strategies. To this end, this chapter introduces the Inferential Theory of Learning that provides a conceptual framework for explaining logical capabilities of learning strategies, i.e., their competence. Viewing learning as a process of modifying the learners knowledge by exploring the learners experience, the theory postulates that any such process can be described as a search in a knowledge space, triggered by the learners experience and guided by learning goals. The search operators are instantiations of knowledge transmutations, which are generic patterns of knowledge change. Transmutations may employ any basic type of inferencededuction, induction or analogy. Several fundamental knowledge transmutations are described in a novel and general way, such as generalization, abstraction, explanation and similization, and their counterparts, specialization, concretion, prediction and dissimilization, respectively. Generalization enlarges the reference set of a description (the set of entities that are being described). Abstraction reduces the amount of the detail about the reference set. Explanation generates premises that explain (or imply) the given properties of the reference set. Similization transfers knowledge from one reference set to a similar reference set. Using concepts of the theory, a multistrategy task-adaptive learning (MTL) methodology is outlined, and illustrated b y an example. MTL dynamically adapts strategies to the learning task, defined by the input information, learners background knowledge, and the learning goal. It aims at synergistically integrating a whole range of inferential learning strategies, such as empirical generalization, constructive induction, deductive generalization, explanation, prediction, abstraction, and similization. 1-hop neighbor's text information: Evolution, Learning, and Instinct: 100 Years of the Baldwin Effect Using Learning to Facilitate the: This paper describes a hybrid methodology that integrates genetic algorithms and decision tree learning in order to evolve useful subsets of discriminatory features for recognizing complex visual concepts. A genetic algorithm (GA) is used to search the space of all possible subsets of a large set of candidate discrimination features. Candidate feature subsets are evaluated by using C4.5, a decision-tree learning algorithm, to produce a decision tree based on the given features using a limited amount of training data. The classification performance of the resulting decision tree on unseen testing data is used as the fitness of the underlying feature subset. Experimental results are presented to show how increasing the amount of learning significantly improves feature set evolution for difficult visual recognition problems involving satellite and facial image data. In addition, we also report on the extent to which other more subtle aspects of the Baldwin effect are exhibited by the system. Target text information: Hybrid Learning Using Genetic Algorithms and Decision Trees for Pattern Classification. 
: This paper introduces a hybrid learning methodology that integrates genetic algorithms (GAs) and decision tree learning (ID3) in order to evolve optimal subsets of discriminatory features for robust pattern classification. A GA is used to search the space of all possible subsets of a large set of candidate discrimination features. For a given feature subset, ID3 is invoked to produce a decision tree. The classification performance of the decision tree on unseen data is used as a measure of fitness for the given feature set, which, in turn, is used by the GA to evolve better feature sets. This GA-ID3 process iterates until a feature subset is found with satisfactory classification performance. Experimental results are presented which illustrate the feasibility of our approach on difficult problems involving recognizing visual concepts in satellite and facial image data. The results also show improved classification performance and reduced description complexity when compared against standard methods for feature selection. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,196
test
1-hop neighbor's text information: The Design and Implementation of a Case based Planning Framework within a Partial Order Planner. : 1-hop neighbor's text information: Derivation replay for partial-order planning. : Derivation replay was first proposed by Carbonell as a method of transferring guidance from a previous problem-solving episode to a new one. Subsequent implementations have used state-space planning as the underlying methodology. This paper is motivated by the acknowledged superiority of partial-order (PO) planners in plan generation, and is an attempt to bring derivation replay into the realm of partial-order planning. Here we develop DerSNLP, a framework for doing replay in SNLP, a partial-order plan-space planner, and analyze its relative effectiveness. We will argue that the decoupling of planning (derivation) order and the execution order of plan steps, provided by partial-order planners, enables DerSNLP to exploit the guidance of previous cases in a more efficient and straightforward fashion. We validate our hypothesis through empirical comparisons between DerSNLP and two replay systems based on state-space planners. Target text information: Storing and indexing plan derivations through explanation-based analysis of retrieval failures. : Case-Based Planning (CBP) provides a way of scaling up domain-independent planning to solve large problems in complex domains. It replaces the detailed and lengthy search for a solution with the retrieval and adaptation of previous planning experiences. In general, CBP has been demonstrated to improve performance over generative (from-scratch) planning. However, the performance improvements it provides are dependent on adequate judgements as to problem similarity. In particular, although CBP may substantially reduce planning effort overall, it is subject to a mis-retrieval problem. The success of CBP depends on these retrieval errors being relatively rare. This paper describes the design and implementation of a replay framework for the case-based planner dersnlp+ebl. der-snlp+ebl extends current CBP methodology by incorporating explanation-based learning techniques that allow it to explain and learn from the retrieval failures it encounters. These techniques are used to refine judgements about case similarity in response to feedback when a wrong decision has been made. The same failure analysis is used in building the case library, through the addition of repairing cases. Large problems are split and stored as single goal subproblems. Multi-goal problems are stored only when these smaller cases fail to be merged into a full solution. An empirical evaluation of this approach demonstrates the advantage of learning from experienced retrieval failure. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
18
test
1-hop neighbor's text information: On the convergence properties of the EM algorithm. : In this article we investigate the relationship between the two popular algorithms, the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler by Gaussian approximation is equal to that of the corresponding EM type algorithm. This helps in implementing either of the algorithms as improvement strategies for one algorithm can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also obtain a result that under conditions, the EM algorithm used for finding the maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference which uses proper prior distributions. We illustrate our results in a number of realistic examples all based on the generalized linear mixed models. Target text information: Identifiability, Improper Priors and Gibbs Sampling for Generalized Linear Models: Alan E. Gelfand is a Professor in the Department of Statistics at the University of Connecticut, Storrs, CT 06269. Sujit K. Sahu is a Lecturer at the School of Mathematics, University of Wales, Cardiff, CF2 4YH, UK. The research of the first author was supported in part by NSF grant DMS 9301316 while the second author was supported in part by an EPSRC grant from UK. The authors thank Brad Carlin, Kate Cowles, Gareth Roberts and an anonymous referee for valuable comments. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
613
val
1-hop neighbor's text information: Specialization under social conditions in shared environments. : Specialist and generalist behaviors in populations of artificial neural networks are studied. A genetic algorithm is used to simulate evolution processes, and thereby to develop neural network control systems that exhibit specialist or generalist behaviors according to the fitness formula. With evolvable fitness formulae the evaluation measure is let free to evolve, and we obtain a co-evolution of the expressed behavior and the individual evolvable fitness formula. The use of evolvable fitness formulae lets us work in a dynamic fitness landscape, opposed to most work, that traditionally applies to static fitness landscapes, only. The role of competition in specialization is studied by letting the individuals live under social conditions in the same, shared environment and directly compete with each other. We find, that competition can act to provide population diversification in populations of organisms with individual evolvable fitness formulae. 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. Target text information: Specialization in Populations of Artificial Neural Networks: Specialization in populations of artificial neural networks is studied. Organisms with both fixed and evolvable fitness formulae are placed in isolated and shared environments, and the emerged behaviors are compared. An evolvable fitness formula specifies, that the evaluation measure is let free to evolve, and we obtain co-evolution of the expressed behavior and the individual evolvable fitness formula. In an isolated environment a generalist behavior emerges when organisms have a fixed fitness formula, and a specialist behavior emerges when organisms have individual evolvable fitness formulae. A population diversification analysis shows, that almost all organisms in a population in an isolated environment converge towards the same behavioral strategy, while we find, that competition can act to provide population diversification in populations of organisms in a shared environment. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
643
val
1-hop neighbor's text information: Genetic Encoding Strategies for Neural Networks: The application of genetic algorithms to neural network optimization (GANN) has produced an active field of research. This paper proposes a classification of the encoding strategies and it also gives a critical analysis of the current state of development. The idea of evolving artificial neural networks (NN) by genetic algorithms (GA) is based on a powerful metaphor: the evolution of the human brain. This mechanism has developed the highest form of intelligence known from scratch. The metaphor has inspired a great deal of research activities that can be traced to the late 1980s (for instance [15]). An increasing amount of research reports, journal papers and theses have been published on the topic, generating a continuously growing field. Researchers have developed a variety of different techniques to encode neural networks for the GA, with increasing complexity. This young field is driven mostly by small, independent research groups that scarcely cooperate with each other. This paper will attempt to analyse and to structure the already performed work, and to point out the shortcomings of the approaches. 1-hop neighbor's text information: Soft Computing: the Convergence of Emerging Reasoning Technologies: The term Soft Computing (SC) represents the combination of emerging problem-solving technologies such as Fuzzy Logic (FL), Probabilistic Reasoning (PR), Neural Networks (NNs), and Genetic Algorithms (GAs). Each of these technologies provide us with complementary reasoning and searching methods to solve complex, real-world problems. After a brief description of each of these technologies, we will analyze some of their most useful combinations, such as the use of FL to control GAs and NNs parameters; the application of GAs to evolve NNs (topologies or weights) or to tune FL controllers; and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms. 1-hop neighbor's text information: A hypothesis-driven constructive induction approach to expanding neural networks. : With most machine learning methods, if the given knowledge representation space is inadequate then the learning process will fail. This is also true with methods using neural networks as the form of the representation space. To overcome this limitation, an automatic construction method for a neural network is proposed. This paper describes the BP-HCI method for a hypothesis-driven constructive induction in a neural network trained by the backpropagation algorithm. The method searches for a better representation space by analyzing the hypotheses generated in each step of an iterative learning process. The method was applied to ten problems, which include, in particular, exclusive-or, MONK2, parity-6BIT and inverse parity-6BIT problems. All problems were successfully solved with the same initial set of parameters; the extension of representation space was no more than necessary extension for each problem. Target text information: "Genetic Evolution of the Topology and Weight Distribution of Neural Networks", : Genetic programming is a methodology for program development, consisting of a special form of genetic algorithm capable of handling parse trees representing programs, that has been successfully applied to a variety of problems. In this paper a new approach to the construction of neural networks based on genetic programming is presented. 
A linear chromosome is combined to a graph representation of the network and new operators are introduced, which allow the evolution of the architecture and the weights simultaneously without the need of local weight optimization. This paper describes the approach, the operators and reports results of the application of the model to several binary classification problems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,451
test
1-hop neighbor's text information: Induction and recapitulation of deep musical structure. : We describe recent extensions to our framework for the automatic generation of music-making programs. We have previously used genetic programming techniques to produce music-making programs that satisfy user-provided critical criteria. In this paper we describe new work on the use of connectionist techniques to automatically induce musical structure from a corpus. We show how the resulting neural networks can be used as critics that drive our genetic programming system. We argue that this framework can potentially support the induction and recapitulation of deep structural features of music. We present some initial results produced using neural and hybrid symbolic/neural critics, and we discuss directions for future work. Target text information: "Evolving Control Structures with Automatically Defined Macros," : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,561
test
1-hop neighbor's text information: "Coevolving High Level Representations," : 1-hop neighbor's text information: "An Evolutionary Algorithm That Constructs Recurrent Neural Networks", : Standard methods for inducing both the structure and weight values of recurrent neural networks fit an assumed class of architectures to every task. This simplification is necessary because the interactions between network structure and function are not well understood. Evolutionary computation, which includes genetic algorithms and evolutionary programming, is a population-based search method that has shown promise in such complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. This algorithms empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods. 1-hop neighbor's text information: "A Survey of Evolutionary Strategies," : Target text information: "Evolutionary Module Acquisition," : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,487
test
1-hop neighbor's text information: Improving RBF Networks by the Feature Selection Approach EUBAFES: The curse of dimensionality is one of the severest problems concerning the application of RBF networks. The number of RBF nodes and therefore the number of training examples needed grows exponentially with the intrinsic dimensionality of the input space. One way to address this problem is the application of feature selection as a data preprocessing step. In this paper we propose a two-step approach for the determination of an optimal feature subset: First, all possible feature-subsets are reduced to those with best discrimination properties by the application of the fast and robust filter technique EUBAFES. Secondly we use a wrapper approach to judge, which of the pre-selected feature subsets leads to RBF networks with least complexity and best classification accuracy. Experiments are undertaken to show the improvement for RBF networks by our feature selection approach. Target text information: Feature Selection by Means of a Feature Weighting Approach. : Selecting a set of features which is optimal for a given classification task is one of the central problems in machine learning. We address the problem using the flexible and robust filter technique EUBAFES. EUBAFES is based on a feature weighting approach which computes binary feature weights and therefore a solution in the feature selection sense and also gives detailed information about feature relevance by continuous weights. Moreover the user gets not only one but several potentially optimal feature subsets which is important for filter-based feature selection algorithms since it gives the flexibility to use even complex classifiers by the application of a combined filter/wrapper approach. We applied EUBAFES on a number of artificial and real world data sets and used radial basis function networks to examine the impact of the feature subsets to classifier accuracy and complexity. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,423
test
1-hop neighbor's text information: Bumptrees for Efficient Function, Constraint, and Classification Learning, : A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. 1-hop neighbor's text information: A hybrid nearest-neighbor and nearest-hyperrectangle algorithm. : Algorithms based on Nested Generalized Exemplar (NGE) theory (Salzberg, 1991) classify new data points by computing their distance to the nearest "generalized exemplar" (i.e., either a point or an axis-parallel rectangle). They combine the distance-based character of nearest neighbor (NN) classifiers with the axis-parallel rectangle representation employed in many rule-learning systems. An implementation of NGE was compared to the k-nearest neighbor (kNN) algorithm in 11 domains and found to be significantly inferior to kNN in 9 of them. Several modifications of NGE were studied to understand the cause of its poor performance. These show that its performance can be substantially improved by preventing NGE from creating overlapping rectangles, while still allowing complete nesting of rectangles. Performance can be further improved by modifying the distance metric to allow weights on each of the features (Salzberg, 1991). Best results were obtained in this study when the weights were computed using mutual information between the features and the output class. The best version of NGE developed is a batch algorithm (BNGE-FWMI) that has no user-tunable parameters. BNGE-FWMI's performance is comparable to the first-nearest neighbor algorithm (also incorporating feature weights). However, the k-nearest neighbor algorithm is still significantly superior to BNGE-FWMI in 7 of the 11 domains, and inferior to it in only 2. We conclude that, even with our improvements, the NGE approach is very sensitive to the shape of the decision boundaries in classification problems. In domains where the decision boundaries are axis-parallel, the NGE approach can produce excellent generalization with interpretable hypotheses. In all domains tested, NGE algorithms require much less memory to store generalized exemplars than is required by NN algorithms. 1-hop neighbor's text information: L0 - The First Four Years. Abstract: A summary of the progress and plans of: Target text information: Best-first model merging for dynamic learning and recognition. : Best-first model merging is a general technique for dynamically choosing the structure of a neural or related architecture while avoiding overfitting. It is applicable to both learning and recognition tasks and often generalizes significantly better than fixed structures. We demonstrate the approach applied to the tasks of choosing radial basis functions for function learning, choosing local affine models for curve and constraint surface modelling, and choosing the structure of a balltree or bumptree to maximize efficiency of access. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,273
val
1-hop neighbor's text information: TRAINREC: A System for Training Feedforward and Simple Recurrent Networks Efficiently and Correctly. : Target text information: Distributed Patterns as Hierarchical Structures, : Recursive Auto-Associative Memory (RAAM) structures show promise as a general representation vehicle that uses distributed patterns. However, training is often difficult, which explains, at least in part, why only relatively small networks have been studied. We show a technique for transforming any collection of hierarchical structures into a set of training patterns for a sequential RAAM which can be effectively trained using a simple (Elman-style) recurrent network. Training produces a set of distributed patterns corresponding to the structures. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,807
val
1-hop neighbor's text information: A fast Kohonen net implementation for spert-ii. : We present an implementation of Kohonen Self-Organizing Feature Maps for the Spert-II vector microprocessor system. The implementation supports arbitrary neural map topologies and arbitrary neighborhood functions. For small networks, as used in real-world tasks, a single Spert-II board is measured to run Kohonen net classification at up to 208 million connections per second (MCPS). On a speech coding benchmark task, Spert-II performs on-line Kohonen net training at over 100 million connection updates per second (MCUPS). This represents almost a factor of 10 improvement compared to previously reported implementations. The asymptotic peak speed of the system is 213 MCPS and 213 MCUPS. 1-hop neighbor's text information: Soft Computing: the Convergence of Emerging Reasoning Technologies: The term Soft Computing (SC) represents the combination of emerging problem-solving technologies such as Fuzzy Logic (FL), Probabilistic Reasoning (PR), Neural Networks (NNs), and Genetic Algorithms (GAs). Each of these technologies provide us with complementary reasoning and searching methods to solve complex, real-world problems. After a brief description of each of these technologies, we will analyze some of their most useful combinations, such as the use of FL to control GAs and NNs parameters; the application of GAs to evolve NNs (topologies or weights) or to tune FL controllers; and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms. 1-hop neighbor's text information: Power system security margin prediction using radial basis function networks. : This paper presents and evaluates two algorithms for incrementally constructing Radial Basis Function Networks, a class of neural networks which looks more suitable for adaptive control applications than the more popular backpropagation networks. The first algorithm has been derived from a previous method developed by Fritzke, while the second one has been inspired by the CART algorithm developed by Breiman for generating regression trees. Both algorithms proved to work well on a number of tests and exhibit comparable performances. An evaluation on the standard case study of the Mackey-Glass temporal series is reported. Target text information: Self-organized formation of topologically correct feature maps. : [2] D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning Internal Representations by Error Propagation", in D. E. Rumelhart and J. L. McClelland (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Vol. 1), MIT Press (1986). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,270
test
1-hop neighbor's text information: Combining FOIL and EBG to speed up logic programs. : This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time. 1-hop neighbor's text information: Combining Top-down and Bottom-up Techniques in Inductive Logic Programming, : This paper describes a new method for inducing logic programs from examples which attempts to integrate the best aspects of existing ILP methods into a single coherent framework. In particular, it combines a bottom-up method similar to Golem with a top-down method similar to Foil. It also includes a method for predicate invention similar to Champ and an elegant solution to the "noisy oracle" problem which allows the system to learn recursive programs without requiring a complete set of positive examples. Systematic experimental comparisons to both Golem and Foil on a range of problems are used to clearly demonstrate the advantages of the approach. 1-hop neighbor's text information: Computational Learning in Humans and Machines: In this paper we review research on machine learning and its relation to computational models of human learning. We focus initially on concept induction, examining five main approaches to this problem, then consider the more complex issue of learning sequential behaviors. After this, we compare the rhetoric that sometimes appears in the machine learning and psychological literature with the growing evidence that different theoretical paradigms typically produce similar results. In response, we suggest that concrete computational models, which currently dominate the field, may be less useful than simulations that operate at a more abstract level. We illustrate this point with an abstract simulation that explains a challenging phenomenon in the area of category learning, and we conclude with some general observations about such abstract models. Target text information: Learning semantic grammars with constructive inductive logic programming. : Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and outperform previous approaches based on connectionist techniques. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,027
test
1-hop neighbor's text information: A survey of intron research in genetics. : A brief survey of biological research on non-coding DNA is presented here. There has been growing interest in the effects of non-coding segments in evolutionary algorithms (EAs). To better understand and conduct research on non-coding segments and EAs, it is important to understand the biological background of such work. This paper begins with a review of basic genetics and terminology, describes the different types of non-coding DNA, and then surveys recent intron research. 1-hop neighbor's text information: A Comparison of Random Search versus Genetic Programming as Engines for Collective Adaptation: We have integrated the distributed search of genetic programming (GP) based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. Since the pure GP approach does not scale well with problem complexity, a natural question is which of the two components is actually contributing to the search process. We investigate a collective memory search which utilizes a random search engine and find that it significantly outperforms the GP based search engine. We examine the solution space and show that as problem complexity and search space grow, a collective adaptive system will perform better than a collective memory search employing random search as an engine. 1-hop neighbor's text information: Entailment for specification refinement. : Specification refinement is part of formal program derivation, a method by which software is directly constructed from a provably correct specification. Because program derivation is an intensive manual exercise used for critical software systems, an automated approach would allow it to be viable for many other types of software systems. The goal of this research is to determine if genetic programming (GP) can be used to automate the specification refinement process. The initial steps toward this goal are to show that a well-known proof logic for program derivation can be encoded such that a GP-based system can infer sentences in the logic for proof of a particular sentence. The results are promising and indicate that GP can be useful in aiding pro gram derivation. Target text information: Duplication of coding segments in genetic programming. : Research into the utility of non-coding segments, or introns, in genetic-based encodings has shown that they expedite the evolution of solutions in domains by protecting building blocks against destructive crossover. We consider a genetic programming system where non-coding segments can be removed, and the resultant chromosomes returned into the population. This parsimonious repair leads to premature convergence, since as we remove the naturally occurring non-coding segments, we strip away their protective backup feature. We then duplicate the coding segments in the repaired chromosomes, and place the modified chromosomes into the population. The duplication method significantly improves the learning rate in the domain we have considered. We also show that this method can be applied to other domains. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
889
test
1-hop neighbor's text information: ICSIM: An Object Oriented Simulation Environment for Structured Connectionist Nets. Class Project Report, Physics 250: ICSIM is a simulator for structured connectionism under development at ICSI. Structured connectionism is characterized by the need for flexibility, efficiency and support for the design and reuse of modular substructure. We take the position that a fast object-oriented language like Sather [5] is an appropriate implementation medium to achieve these goals. The core of ICSIM consists of a hierarchy of classes that correspond to simulation entities. New connectionist models are realized by combining and specializing pre-existing classes. Whenever possible, auxiliary functionality has been separated out into functional modules in order to keep the basic hierarchy as clean and simple as possible. 1-hop neighbor's text information: CLONES: A Connectionist Layered Object-oriented NEtwork Simulator. : CLONES is an object-oriented library for constructing, training and utilizing layered connectionist networks. The CLONES library contains all the object classes needed to write a simulator with a small amount of added source code (examples are included). The size of experimental ANN programs is greatly reduced by using an object-oriented library; at the same time these programs are easier to read, write and evolve. The library includes database, network behavior and training procedures that can be customized by the user. It is designed to run efficiently on data parallel computers (such as the RAP [6] and SPERT [1]) as well as uniprocessor workstations. While efficiency and portability to parallel computers are the primary goals, there are several secondary design goals: 3. allow heterogeneous algorithms and training procedures to be interconnected and trained together. Within these constraints we attempt to maximize the variety of artificial neural network algorithms that can be supported. Target text information: An object-oriented connectionist simulator. : ICSIM is a connectionist net simulator being developed at ICSI and written in Sather. It is object-oriented to meet the requirements for flexibility and reuse of homogeneous and structured connectionist nets and to allow the user to encapsulate efficient customized implementations perhaps running on dedicated hardware. Nets are composed by combining off-the-shelf library classes and if necessary by specializing some of their behaviour. General user interface classes allow a uniform or customized graphic presentation of the nets being modeled. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,475
test
1-hop neighbor's text information: Unsupervised learning by convex and conic coding. : Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic combination of basis vectors to the input. The learning algorithms produce basis vectors that minimize the reconstruction error of the encoders. The convex algorithm develops locally linear models of the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with vector quantization and principal component analysis. The neural network implementations involve feedback connections that project a reconstruction back to the input layer. 1-hop neighbor's text information: Learning generative models with the up-propagation algorithm. : Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits. In his doctrine of unconscious inference, Helmholtz argued that perceptions are formed by the interaction of bottom-up sensory data with top-down expectations. According to one interpretation of this doctrine, perception is a procedure of sequential hypothesis testing. We propose a new algorithm, called up-propagation, that realizes this interpretation in layered neural networks. It uses top-down connections to generate hypotheses, and bottom-up connections to revise them. It is important to understand the difference between up-propagation and its ancestor, the backpropagation algorithm [1]. Backpropagation is a learning algorithm for recognition models. As shown in Figure 1a, bottom-up connections recognize patterns, while top-down connections propagate an error signal that is used to learn the recognition model. In contrast, up-propagation is an algorithm for inverting and learning generative models, as shown in Figure 1b. Top-down connections generate patterns from a set of hidden variables. Sensory input is processed by inverting the generative model, recovering hidden variables that could have generated the sensory data. This operation is called either pattern recognition or pattern analysis, depending on the meaning of the hidden variables. Inversion of the generative model is done iteratively, through a negative feedback loop driven by an error signal from the bottom-up connections. The error signal is also used for learning the connections. 1-hop neighbor's text information: Generative models for discovering sparse distributed representations. : We describe a hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. 
Target text information: Pattern analysis and synthesis in attractor neural networks. : The representation of hidden variable models by attractor neural networks is studied. Memories are stored in a dynamical attractor that is a continuous manifold of fixed points, as illustrated by linear and nonlinear networks with hidden neurons. Pattern analysis and synthesis are forms of pattern completion by recall of a stored memory. Analysis and synthesis in the linear network are performed by bottom-up and top-down connections. In the nonlinear network, the analysis computation additionally requires rectification nonlinearity and inner product inhibition between hidden neurons. One popular approach to sensory processing is based on generative models, which assume that sensory input patterns are synthesized from some underlying hidden variables. For example, the sounds of speech can be synthesized from a sequence of phonemes, and images of a face can be synthesized from pose and lighting variables. Hidden variables are useful because they constitute a simpler representation of the variables that are visible in the sensory input. Using a generative model for sensory processing requires a method of pattern analysis. Given a sensory input pattern, analysis is the recovery of the hidden variables from which it was synthesized. In other words, analysis and synthesis are inverses of each other. There are a number of approaches to pattern analysis. In analysis-by-synthesis, the synthetic model is embedded inside a negative feedback loop[1]. Another approach is to construct a separate analysis model[2]. This paper explores a third approach, in which visible-hidden pairs are embedded as attractive fixed points, or attractors, in the state space of a recurrent neural network. The attractors can be regarded as memories stored in the network, and analysis and synthesis as forms of pattern completion by recall of a memory. The approach is illustrated with linear and nonlinear network architectures. In both networks, the synthetic model is linear, as in principal I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,766
train
1-hop neighbor's text information: Explaining "explaining away". : Explaining away is a common pattern of reasoning in which the confirmation of one cause of an observed or believed event reduces the need to invoke alternative causes. The opposite of explaining away also can occur, in which the confirmation of one cause increases belief in another. We provide a general qualitative probabilistic analysis of intercausal reasoning, and identify the property of the interaction among the causes, product synergy, that determines which form of reasoning is appropriate. Product synergy extends the qualitative probabilistic network (QPN) formalism to support qualitative intercausal inference about the directions of change in probabilistic belief. The intercausal relation also justifies Occam's razor, facilitating pruning in search for likely diagnoses. 0 Portions of this paper originally appeared in Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning [16]. y Supported by the National Science Foundation under grant IRI-8807061 to Carnegie Mellon and by the Rockwell International Science Center. Target text information: Inference in Cognitive Maps: Cognitive mapping is a qualitative decision modeling technique developed over twenty years ago by political scientists, which continues to see occasional use in social science and decision-aiding applications. In this paper, I show how cognitive maps can be viewed in the context of more recent formalisms for qualitative decision modeling, and how the latter provide a firm semantic foundation that can facilitate the development of more powerful inference procedures as well as extensions in expressiveness for models of this sort. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
680
test
1-hop neighbor's text information: Reinforcement learning with replacing eligibility traces. : The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the replacing trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional trace. Both kinds of trace assign credit to prior events according to how recently they occurred, but only the conventional trace gives greater credit to repeated events. Our analysis is for conventional and replace-trace versions of the offline TD(1) algorithm applied to undiscounted absorbing Markov chains. First, we show that these methods converge under repeated presentations of the training set to the same predictions as two well known Monte Carlo methods. We then analyze the relative efficiency of the two Monte Carlo methods. We show that the method corresponding to conventional TD is biased, whereas the method corresponding to replace-trace TD is unbiased. In addition, we show that the method corresponding to replacing traces is closely related to the maximum likelihood solution for these tasks, and that its mean squared error is always lower in the long run. Computational results confirm these analyses and show that they are applicable more generally. In particular, we show that replacing traces significantly improve performance and reduce parameter sensitivity on the "Mountain-Car" task, a full reinforcement-learning problem with a continuous state space, when using a feature-based function approximator. Target text information: : Empirical Comparison of Gradient Descent and Exponentiated Gradient Descent in Supervised and Reinforcement Learning, Technical Report 96-70. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
801
test
1-hop neighbor's text information: Context-sensitive feature selection for lazy learners. : 1-hop neighbor's text information: Efficient feature selection in conceptual clustering. : Feature selection has proven to be a valuable technique in supervised learning for improving predictive accuracy while reducing the number of attributes considered in a task. We investigate the potential for similar benefits in an unsupervised learning task, conceptual clustering. The issues raised in feature selection by the absence of class labels are discussed and an implementation of a sequential feature selection algorithm based on an existing conceptual clustering system is described. Additionally, we present a second implementation which employs a technique for improving the efficiency of the search for an optimal description and compare the performance of both algorithms. Target text information: Dynamically adjusting concepts to accommodate changing contexts. : In concept learning, objects in a domain are grouped together based on similarity as determined by the attributes used to describe them. Existing concept learners require that this set of attributes be known in advance and presented in entirety before learning begins. Additionally, most systems do not possess mechanisms for altering the attribute set after concepts have been learned. Consequently, a veridical attribute set relevant to the task for which the concepts are to be used must be supplied at the onset of learning, and in turn, the usefulness of the concepts is limited to the task for which the attributes were originally selected. In order to efficiently accommodate changing contexts, a concept learner must be able to alter the set of descriptors without discarding its prior knowledge of the domain. We introduce the notion of attribute-incrementation, the dynamic modification of the attribute set used to describe instances in a problem domain. We have implemented the capability in a concept learning system that has been evaluated along several dimensions using an existing concept formation system for comparison. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2,150
train
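The sequential feature selection mentioned in the record above can be sketched generically: greedily grow the attribute subset, re-scoring the description after each addition. The scoring function, the toy data, and all names below are assumptions for illustration; they are not the conceptual-clustering quality measure used in the cited systems.

    import numpy as np

    def forward_select(X, score_fn, max_features=None):
        # Greedy sequential forward selection: repeatedly add the single attribute that
        # most improves the quality score of the currently selected subset.
        n_features = X.shape[1]
        selected, best_score = [], -np.inf
        limit = max_features or n_features
        while len(selected) < limit:
            candidates = [j for j in range(n_features) if j not in selected]
            score, j = max((score_fn(X[:, selected + [j]]), j) for j in candidates)
            if score <= best_score:               # stop as soon as no attribute improves the score
                break
            selected.append(j)
            best_score = score
        return selected, best_score

    def toy_score(X_subset):
        # Crude clustering-quality stand-in: negative within-group variance after splitting
        # the rows in half along the first selected attribute.
        order = np.argsort(X_subset[:, 0])
        half = len(order) // 2
        groups = (X_subset[order[:half]], X_subset[order[half:]])
        return -sum(g.var(axis=0).sum() for g in groups)

    rng = np.random.default_rng(0)
    informative = np.vstack([rng.normal(0, 1, (20, 1)), rng.normal(5, 1, (20, 1))])
    X = np.hstack([informative, rng.normal(0, 1, (40, 2))])   # one informative attribute, two noise attributes
    print(forward_select(X, toy_score))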
1-hop neighbor's text information: Graphical Models in Applied Multivariate Statistics. : 1-hop neighbor's text information: and Sylvia Richardson (1995). Model selection for generalized linear models via GLIB, with application to epidemiology. In Bayesian Biostatistics (D.A. : 1 This is the first draft of a chapter for Bayesian Biostatistics, edited by Donald A. Berry and Darlene K. Strangl. Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195, USA. Sylvia Richardson is Directeur de Recherche, INSERM Unite 170, 16 avenue Paul Vaillant Couturier, 94807 Villejuif CEDEX, France. Raftery's research was supported by ONR contract no. N-00014-91-J-1074, by the Ministere de la Recherche et de l'Espace, Paris, by the Universite de Paris VI, and by INRIA, Rocquencourt, France. Raftery thanks the latter two institutions, Paul Deheuvels and Gilles Celeux for hearty hospitality during his Paris sabbatical in which part of this chapter was written. The authors are grateful to Christine Montfort for excellent research assistance and to Mariette Gerber, Michel Chavance and David Madigan for helpful discussions. 1-hop neighbor's text information: Bayes factors and model uncertainty. : Technical Report no. 255 Department of Statistics, University of Washington August 1993; Revised March 1994 Target text information: Bayesian graphical models for discrete data. : z York's research was supported by a NSF graduate fellowship. The authors are grateful to Julian Besag, David Bradshaw, Jeff Bradshaw, James Carlsen, David Draper, Ivar Heuch, Robert Kass, Augustine Kong, Steffen Lauritzen, Adrian Raftery, and James Zidek for helpful comments and discussions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,621
test
1-hop neighbor's text information: How to evolve autonomous robots: : 1-hop neighbor's text information: Investigating the role of diploidy in simulated populations of evolving individuals: In most work applying genetic algorithms to populations of neural networks there is no real distinction between genotype and phenotype. In nature both the information contained in the genotype and the mapping of the genetic information into the phenotype are usually much more complex. The genotypes of many organisms exhibit diploidy, i.e., they include two copies of each gene: if the two copies are not identical in their sequences and therefore have a functional difference in their products (usually proteins), the expressed phenotypic feature is termed the dominant one, the other one recessive (not expressed). In this paper we review the literature on the use of diploidy and dominance operators in genetic algorithms; we present the new results we obtained with our own simulations in changing environments; finally, we discuss some results of our simulations that parallel biological findings. Target text information: Two is Better than One: a Diploid Genotype for Neural Networks. : In nature the genotype of many organisms exhibits diploidy, i.e., it includes two copies of every gene. In this paper we describe the results of simulations comparing the behavior of haploid and diploid populations of ecological neural networks living in both fixed and changing environments. We show that diploid genotypes create more variability in fitness in the population than haploid genotypes and buffer better environmental change; as a consequence, if one wants to obtain good results for both average and peak fitness in a single population one should choose a diploid population with an appropriate mutation rate. Some results of our simulations parallel biological findings. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,321
val
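As a concrete reading of the diploid-genotype idea in the record above, the sketch below expresses two binary chromosomes through a fixed dominance table and evolves the population through an abrupt environmental change. The dominance map, the bit-string fitness, the population size, and the timing of the change are all illustrative assumptions; the cited simulations evolve ecological neural networks, not bit strings.

    import random

    DOMINANCE = {('0', '0'): '0', ('1', '1'): '1', ('0', '1'): '1', ('1', '0'): '1'}  # assumed: allele 1 dominates 0

    def express(chrom_a, chrom_b):
        # Map a diploid genotype (two chromosomes) onto a haploid phenotype via the dominance table.
        return ''.join(DOMINANCE[(a, b)] for a, b in zip(chrom_a, chrom_b))

    def mutate(chrom, rate=0.02):
        return ''.join(random.choice('01') if random.random() < rate else c for c in chrom)

    def fitness(phenotype, target):
        # Toy fitness: number of loci that match the current (possibly changing) environment string.
        return sum(p == t for p, t in zip(phenotype, target))

    random.seed(1)
    length, pop_size, target = 20, 30, '1' * 20
    population = [(''.join(random.choice('01') for _ in range(length)),
                   ''.join(random.choice('01') for _ in range(length))) for _ in range(pop_size)]

    for generation in range(40):
        if generation == 20:
            target = '0' * 20                                   # abrupt environmental change
        ranked = sorted(population, key=lambda g: fitness(express(*g), target), reverse=True)
        parents = ranked[:pop_size // 2]
        population = [(mutate(random.choice(parents)[0]),       # chromosome A from one parent,
                       mutate(random.choice(parents)[1]))       # chromosome B possibly from another
                      for _ in range(pop_size)]
    print(max(fitness(express(*g), target) for g in population))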
1-hop neighbor's text information: Estimating the error rate of a prediction rule: improvement on cross-validation. : A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased, but can be highly variable. This article discusses bootstrap estimates of prediction error, which can be thought of as smoothed versions of cross-validation. A particular bootstrap method, the 632+ rule, is shown to substantially outperform cross-validation in a catalog of 24 simulation experiments. Besides providing point estimates, we also consider estimating the variability of an error rate estimate. All of the results here are nonparametric, and apply to any possible prediction rule: however we only study classification problems with 0-1 loss in detail. Our simulations include "smooth" prediction rules like Fisher's Linear Discriminant Function, and unsmooth ones like Nearest Neighbors. Target text information: The covariance inflation criterion for adaptive model selection: We propose a new criterion for model selection in prediction problems. The covariance inflation criterion adjusts the training error by the average covariance of the predictions and responses, when the prediction rule is applied to permuted versions of the dataset. This criterion can be applied to general prediction problems (for example regression or classification), and to general prediction rules (for example stepwise regression, tree-based models and neural nets). As a byproduct we obtain a measure of the effective number of parameters used by an adaptive procedure. We relate the covariance inflation criterion to other model selection procedures and illustrate its use in some regression and classification problems. We also revisit the conditional bootstrap approach to model selection. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
219
test
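The covariance-style penalty described in the target abstract above can be approximated in a few lines: inflate the training error by an average covariance between responses and predictions, computed when the rule is refit on permuted copies of the data. This is only a rough sketch of the idea under stated assumptions; the factor of 2, the permutation scheme, and the least-squares example are choices made here, not the exact criterion defined in the paper.

    import numpy as np

    def covariance_inflation(X, y, fit_predict, n_perm=20, rng=None):
        # Rough sketch of a covariance-style penalty: inflate the training error by the average
        # covariance between (permuted) responses and the predictions of the rule refit on them.
        rng = rng if rng is not None else np.random.default_rng(0)
        train_err = np.mean((y - fit_predict(X, y, X)) ** 2)
        covs = []
        for _ in range(n_perm):
            y_perm = rng.permutation(y)
            preds = fit_predict(X, y_perm, X)
            covs.append(np.mean((y_perm - y_perm.mean()) * (preds - preds.mean())))
        return train_err + 2.0 * np.mean(covs)

    def linear_fit_predict(X_train, y_train, X_test):
        # Ordinary least squares with an intercept, standing in for an arbitrary adaptive prediction rule.
        A = np.c_[np.ones(len(X_train)), X_train]
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        return np.c_[np.ones(len(X_test)), X_test] @ coef

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))
    y = X[:, 0] + 0.5 * rng.normal(size=50)
    print(covariance_inflation(X, y, linear_fit_predict, rng=rng))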
1-hop neighbor's text information: Global self organization of all known protein sequences reveals inherent biological signatures. : A global classification of all currently known protein sequences is performed. Every protein sequence is partitioned into segments of 50 amino acids and a dynamic-programming distance is calculated between each pair of segments. This space of segments is first embedded into Euclidean space with small metric distortion. A novel self-organized cross-validated clustering algorithm is then applied to the embedded space with Euclidean distances. The resulting hierarchical tree of clusters offers a new representation of protein sequences and families, which compares favorably with the most updated classifications based on functional and structural protein data. Motifs and domains such as the Zinc Finger, EF hand, Homeobox, EGF-like and others are automatically correctly identified. A novel representation of protein families is introduced, from which functional biological kinship of protein families can be deduced, as demonstrated for the transporters family. Target text information: A map of the protein space An automatic hierarchical classification of all protein sequences: We investigate the space of all protein sequences. We combine the standard measures of similarity (SW, FASTA, BLAST), to associate with each sequence an exhaustive list of neighboring sequences. These lists induce a (weighted directed) graph whose vertices are the sequences. The weight of an edge connecting two sequences represents their degree of similarity. This graph encodes much of the fundamental properties of the sequence space. We look for clusters of related proteins in this graph. These clusters correspond to strongly connected sets of vertices. Two main ideas underlie our work: i) Interesting homologies among proteins can be deduced by transitivity. ii) Transitivity should be applied restrictively in order to prevent unrelated proteins from clustering together. Our analysis starts from a very conservative classification, based on very significant similarities, that has many classes. Subsequently, classes are merged to include less significant similarities. Merging is performed via a novel two phase algorithm. First, the algorithm identifies groups of possibly related clusters (based on transitivity and strong connectivity) using local considerations, and merges them. Then, a global test is applied to identify nuclei of strong relationships within these groups of clusters, and the classification is refined accordingly. This process takes place at varying thresholds of statistical significance, where at each step the algorithm is applied on the classes of the previous classification, to obtain the next one, at the more permissive threshold. Consequently, a hierarchical organization of all proteins is obtained. The resulting classification splits the space of all protein sequences into well defined groups of proteins. The results show that the automatically induced sets of proteins are closely correlated with natural biological families and super families. The hierarchical organization reveals finer sub-families that make up known families of proteins as well as many interesting relations between protein families. The hierarchical organization proposed may be considered as the first map of the space of all protein sequences. 
An interactive web site including the results of our analysis has been constructed, and is now accessible through http://www.protomap.cs.huji.ac.il I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
601
val
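The transitive, threshold-based clustering sketched in the two abstracts above amounts to single-linkage merging over a similarity graph. The union-find pass below is a simplified stand-in for the two-phase merging algorithm of the target paper, and the sequence names and similarity scores are invented for illustration.

    def connected_clusters(pairs, threshold):
        # Single-linkage, transitive clustering: two sequences end up in the same cluster whenever
        # a chain of pairwise similarities above the threshold connects them (union-find merging).
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]     # path compression
                x = parent[x]
            return x

        for (a, b), score in pairs.items():
            find(a), find(b)                      # register both sequences
            if score >= threshold:
                parent[find(a)] = find(b)         # merge their clusters

        clusters = {}
        for node in parent:
            clusters.setdefault(find(node), set()).add(node)
        return list(clusters.values())

    # Invented similarity scores between five hypothetical sequences (higher means more similar).
    scores = {('s1', 's2'): 0.9, ('s2', 's3'): 0.8, ('s3', 's4'): 0.2, ('s4', 's5'): 0.7}
    for t in (0.95, 0.75, 0.5):                   # relaxing the threshold yields a coarser level of the hierarchy
        print(t, connected_clusters(scores, t))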
1-hop neighbor's text information: Learning boxes in high dimension. : DIMACS Technical Report 97-32 July 1997 1 A preliminary version of this paper appeared in the proceedings of the EuroCOLT '97 conference, published in volume 1208 of Lecture Notes in Artificial Intelligence, pages 3-15. Springer-Verlag, 1997. The journal version will appear in Algoritmica. 2 E-mail: [email protected]. http://dimacs.rutgers.edu/~beimel. Part of this research was done while the author was a Ph.D. student at the Technion. 3 E-mail: [email protected]. http://www.cs.technion.ac.il/~eyalk. This research was supported by Technion V.P.R. Fund 120-872 and by Japan Technion Society Research Fund. 1-hop neighbor's text information: Pac learning intersections of halfspaces with membership queries. : Target text information: Noise-tolerant parallel learning of geometric concepts. : We present an efficient algorithm for PAC-learning a very general class of geometric concepts over R^d for fixed d. More specifically, let T be any set of s halfspaces. Let x = (x_1, ..., x_d) be an arbitrary point in R^d. With each t ∈ T we associate a boolean indicator function I_t(x) which is 1 if and only if x is in the halfspace t. The concept class, C_s^d, that we study consists of all concepts formed by any boolean function over I_{t_1}, ..., I_{t_s} for t_i ∈ T. This concept class is much more general than any geometric concept class known to be PAC-learnable. Our results can be easily extended to efficiently learn any boolean combination of a polynomial number of concepts selected from any concept class C over R^d given that the VC-dimension of C has dependence only on d (and is thus constant for any constant d), and there is a polynomial time algorithm to determine if there is a concept from C consistent with a given set of labeled examples. We also present a statistical query version of our algorithm that can tolerate random classification noise for any noise rate strictly less than 1/2. Finally we present a generalization of the standard ε-net result of Haussler and Welzl [25] and apply it to give an alternative noise-tolerant algorithm for d = 2 based on geometric subdivisions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
2,640
test
1-hop neighbor's text information: Rigorous learning curve bounds from statistical mechanics. : In this paper we introduce and investigate a mathematically rigorous theory of learning curves that is based on ideas from statistical mechanics. The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior (functional form) of learning curves. This behavior can often exhibit dramatic properties such as phase transitions, as well as power law asymptotics not explained by the VC theory. The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes. We illustrate our results with many concrete examples of learning curve bounds derived from our theory. Target text information: Characterizing rational versus exponential learning curves. : We consider the standard problem of learning a concept from random examples. Here a learning curve can be defined to be the expected error of a learner's hypotheses as a function of training sample size. Haussler, Littlestone and Warmuth have shown that, in the distribution free setting, the smallest expected error a learner can achieve in the worst case over a concept class C converges rationally to zero error (i.e., Θ(1/t) for training sample size t). However, recently Cohn and Tesauro have demonstrated how exponential convergence can often be observed in experimental settings (i.e., average error decreasing as e^{-Θ(t)}). By addressing a simple non-uniformity in the original analysis, this paper shows how the dichotomy between rational and exponential worst case learning curves can be recovered in the distribution free theory. These results support the experimental findings of Cohn and Tesauro: for finite concept classes, any consistent learner achieves exponential convergence, even in the worst case; but for continuous concept classes, no learner can exhibit sub-rational convergence for every target concept and domain distribution. A precise boundary between rational and exponential convergence is drawn for simple concept chains. Here we show that somewhere dense chains always force rational convergence in the worst case, but exponential convergence can always be achieved for nowhere dense chains. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,389
test
1-hop neighbor's text information: Trading spaces: computation, representation and the limits of learning. : * Research on this paper was partly supported by a Senior Research Leave fellowship granted by the Joint Council (SERC/MRC/ESRC) Cognitive Science Human Computer Interaction Initiative to one of the authors (Clark). Thanks to the Initiative for that support. † The order of names is arbitrary. 1-hop neighbor's text information: Separability is a learner's best friend. : Geometric separability is a generalisation of linear separability, familiar to many from Minsky and Papert's analysis of the Perceptron learning method. The concept forms a novel dimension along which to conceptualise learning methods. The present paper shows how geometric separability can be defined and demonstrates that it accurately predicts the performance of at least one empirical learning method. 1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. Target text information: There is No Free Lunch but the Starter is Cheap: Generalisation from First Principles: According to Wolpert's no-free-lunch (NFL) theorems [1, 2], generalisation in the absence of domain knowledge is necessarily a zero-sum enterprise. Good generalisation performance in one situation is always offset by bad performance in another. Wolpert notes that the theorems do not demonstrate that effective generalisation is a logical impossibility but merely that a learner's bias (or assumption set) is of key importance. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
303
test
1-hop neighbor's text information: A DISCUSSION ON SOME DESIGN PRINCIPLES FOR EFFICIENT CROSSOVER OPERATORS FOR GRAPH COLORING PROBLEMS: A year ago, a new metaheuristic for graph coloring problems was introduced by Costa, Hertz and Dubuis. They have shown, with computer experiments, some clear indication of the benefits of this approach. Graph coloring has many applications especially in the areas of scheduling, assignments and timetabling. The metaheuristic can be classified as a memetic algorithm since it is based on a population search in which periods of local optimization are interspersed with phases in which new configurations are created from earlier well-developed configurations or local minima of the previous iterative improvement process. The new population is created using crossover operators as in genetic algorithms. In this paper we discuss how a methodology inspired by Competitive Analysis may be relevant to the problem of designing better crossover operators. RESUMO (translated from Portuguese): In the past year a new metaheuristic for the graph coloring problem was presented by Costa, Hertz and Dubuis. They showed, with computational experiments, some clear indications of the benefits of this new technique. Graph coloring has many applications, especially in the areas of task scheduling, location and timetabling. The metaheuristic can be classified as a memetic algorithm since it is based on a population search whose periods of local optimization are interspersed with phases in which new configurations are created from good configurations or local minima of previous iterations. The new population is created using crossover operations as in genetic algorithms. In this paper we present how a methodology based on Competitive Analysis can be relevant to constructing crossover operations. 1-hop neighbor's text information: How good are genetic algorithms at finding large cliques: an experimental study, : This paper investigates the power of genetic algorithms at solving the MAX-CLIQUE problem. We measure the performance of a standard genetic algorithm on an elementary set of problem instances consisting of embedded cliques in random graphs. We indicate the need for improvement, and introduce a new genetic algorithm, the multi-phase annealed GA, which exhibits superior performance on the same problem set. As we scale up the problem size and test on "hard" benchmark instances, we notice a degraded performance in the algorithm caused by premature convergence to local minima. To alleviate this problem, a sequence of modifications is implemented ranging from changes in input representation to systematic local search. The most recent version, called union GA, incorporates the features of union cross-over, greedy replacement, and diversity enhancement. It shows a marked speed-up in the number of iterations required to find a given solution, as well as some improvement in the clique size found. We discuss issues related to the SIMD implementation of the genetic algorithms on a Thinking Machines CM-5, which was necessitated by the intrinsically high time complexity (O(n^3)) of the serial algorithm for computing one iteration.
Our preliminary conclusions are: (1) a genetic algorithm needs to be heavily customized to work "well" for the clique problem; (2) a GA is computationally very expensive, and its use is only recommended if it is known to find larger cliques than other algorithms; (3) although our customization effort is bringing forth continued improvements, there is no clear evidence, at this time, that a GA will have better success in circumventing local minima. 1-hop neighbor's text information: An evolutionary tabu search algorithm and the NHL scheduling problem, : We present in this paper a new evolutionary procedure for solving general optimization problems that combines efficiently the mechanisms of genetic algorithms and tabu search. In order to explore the solution space properly interaction phases are interspersed with periods of optimization in the algorithm. An adaptation of this search principle to the National Hockey League (NHL) problem is discussed. The hybrid method developed in this paper is well suited for Open Shop Scheduling problems (OSSP). The results obtained appear to be quite satisfactory. Target text information: Embedding of a sequential procedure within an evolutionary algorithm for coloring problems in graphs. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,045
val
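The decoder-based approach mentioned in this record (an order-based algorithm that maps a vertex permutation to a coloring) is easy to sketch. Below, a greedy first-fit decoder is paired with a (1+1)-style swap mutation over orderings; the toy graph, the search loop, and all function names are assumptions for illustration, not the hybrid crossover studied in the target paper.

    import random

    def greedy_color(order, adjacency):
        # Decoder used by order-based coloring algorithms: color vertices greedily in the given
        # order, always assigning the smallest color not used by an already-colored neighbor.
        colors = {}
        for v in order:
            used = {colors[u] for u in adjacency[v] if u in colors}
            c = 0
            while c in used:
                c += 1
            colors[v] = c
        return colors

    def n_colors(order, adjacency):
        return max(greedy_color(order, adjacency).values()) + 1

    # Toy graph: a 5-cycle with one chord, so three colors are needed.
    adjacency = {0: [1, 2, 4], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [0, 3]}

    random.seed(0)
    best = list(adjacency)
    for _ in range(200):                          # crude (1+1)-style search over vertex orderings
        candidate = best[:]
        i, j = random.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]   # swap mutation on the permutation
        if n_colors(candidate, adjacency) <= n_colors(best, adjacency):
            best = candidate
    print(n_colors(best, adjacency), greedy_color(best, adjacency))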
1-hop neighbor's text information: Roles of macro-actions in accelerating reinforcement learning. : We analyze the use of built-in policies, or macro-actions, as a form of domain knowledge that can improve the speed and scaling of reinforcement learning algorithms. Such macro-actions are often used in robotics, and macro-operators are also well-known as an aid to state-space search in AI systems. The macro-actions we consider are closed-loop policies with termination conditions. The macro-actions can be chosen at the same level as primitive actions. Macro-actions commit the learning agent to act in a particular, purposeful way for a sustained period of time. Overall, macro-actions may either accelerate or retard learning, depending on the appropriateness of the macro-actions to the particular task. We analyze their effect in a simple example, breaking the acceleration effect into two parts: 1) the effect of the macro-action in changing exploratory behavior, independent of learning, and 2) the effect of the macro-action on learning, independent of its effect on behavior. In our example, both effects are significant, but the latter appears to be larger. Finally, we provide a more complex gridworld illustration of how appropriately chosen macro-actions can accelerate overall learning. 1-hop neighbor's text information: Multi-Time models for reinforcement learning. In ICML'97 Workshop: The Role of Models in Reinforcement Learning. : Reinforcement learning can be used not only to predict rewards, but also to predict states, i.e. to learn a model of the world's dynamics. Models can be defined at different levels of temporal abstraction. Multi-time models are models that focus on predicting what will happen, rather than when a certain event will take place. Based on multi-time models, we can define abstract actions, which enable planning (presumably in a more efficient way) at various levels of abstraction. 1-hop neighbor's text information: TD models: modeling the world at a mixture of time scales. : Temporal-difference (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem, that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985). Target text information: Multi-time models for temporally abstract planning. : Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning.
Current model-based reinforcement learning is based on one-step models that cannot represent common-sense higher-level actions, such as going to lunch, grasping an object, or flying to Denver. This paper generalizes prior work on temporally abstract models [Sutton, 1995] and extends it from the prediction setting to include actions, control, and planning. We introduce a more general form of temporally abstract model, the multi-time model, and establish its suitability for planning and learning by virtue of its relationship to the Bellman equations. This paper summarizes the theoretical framework of multi-time models and illustrates their potential advantages in a The need for hierarchical and abstract planning is a fundamental problem in AI (see, e.g., Sacerdoti, 1977; Laird et al., 1986; Korf, 1985; Kaelbling, 1993; Dayan & Hinton, 1993). Model-based reinforcement learning offers a possible solution to the problem of integrating planning with real-time learning and decision-making (Peng & Williams, 1993, Moore & Atkeson, 1993; Sutton and Barto, 1998). However, current model-based reinforcement learning is based on one-step models that cannot represent common-sense, higher-level actions. Modeling such actions requires the ability to handle different, interrelated levels of temporal abstraction. A new approach to modeling at multiple time scales was introduced by Sutton (1995) based on prior work by Singh , Dayan , and Sutton and Pinette . This approach enables models of the environment at different temporal scales to be intermixed, producing temporally abstract models. However, that work was concerned only with predicting the environment. This paper summarizes an extension of the approach including actions and control of the environment [Precup & Sutton, 1997]. In particular, we generalize the usual notion of a gridworld planning task. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,481
test
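The macro-action idea in this record can be illustrated with a semi-Markov (multi-step) Q-learning backup: a temporally extended action runs for k primitive steps, and its accumulated discounted reward is backed up with a gamma^k bootstrap. The one-dimensional corridor, the single hand-coded "run right to the goal" macro, and all constants below are illustrative assumptions, not the gridworld tasks of the cited papers.

    import numpy as np

    N, GOAL = 10, 9
    GAMMA, ALPHA = 0.95, 0.2

    def step(state, action):                      # action: -1 = left, +1 = right
        nxt = min(max(state + action, 0), N - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        return nxt, reward, nxt == GOAL

    def run_macro_right(state):
        # Hand-coded macro-action: keep moving right until the goal (its termination condition).
        total, k, done = 0.0, 0, False
        while not done:
            state, r, done = step(state, +1)
            total += (GAMMA ** k) * r
            k += 1
        return state, total, k, done

    rng = np.random.default_rng(0)
    Q = np.zeros((N, 3))                          # actions: 0 = left, 1 = right, 2 = the macro

    for episode in range(200):
        s, done = 0, False
        while not done:
            a = int(rng.integers(3)) if rng.random() < 0.2 else int(np.argmax(Q[s]))
            if a == 2:
                s2, reward, k, done = run_macro_right(s)          # temporally extended action
            else:
                s2, reward, done = step(s, -1 if a == 0 else +1)
                k = 1
            target = reward + (0.0 if done else (GAMMA ** k) * np.max(Q[s2]))
            Q[s, a] += ALPHA * (target - Q[s, a])                  # multi-step (SMDP-style) backup
            s = s2
    print(np.argmax(Q, axis=1))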
1-hop neighbor's text information: "Using DNA to solve NP-Complete Problems", : A strategy for using Genetic Algorithms (GAs) to solve NP-complete problems is presented. The key aspect of the approach taken is to exploit the observation that, although all NP-complete problems are equally difficult in a general computational sense, some have much better GA representations than others, leading to much more successful use of GAs on some NP-complete problems than on others. Since any NP-complete problem can be mapped into any other one in polynomial time, the strategy described here consists of identifying a canonical NP-complete problem on which GAs work well, and solving other NP-complete problems indirectly by mapping them onto the canonical problem. Initial empirical results are presented which support the claim that the Boolean Satisfiability Problem (SAT) is a GA-effective canonical problem, and that other NP-complete problems with poor GA representations can be solved efficiently by mapping them first onto SAT problems. 1-hop neighbor's text information: Graph coloring with adaptive evolutionary algorithms. : This technical report summarizes our results on solving graph coloring problems with Genetic Algorithms (GA). After testing many different options we conclude that the best one is a (1+1) order-based GA using an adaptation mechanism that periodically changes the fitness function, thus guiding the GA through the search space. Except from the decoder (fitness function) this GA is general, using no domain specific knowledge. We compare this GA to a powerful traditional graph coloring technique, DSatur, on a wide range of problems with different size, topology and edge density. The results show that the GA is superior to DSatur on the hardest problem instances and it scales up better with the problem size. The GA exhibits a linear time complexity for one measure and indicates a polynomial time complexity for another one. This report is also available at http://www.wi.leidenuniv.nl/TechRep/tr96-11.html 1-hop neighbor's text information: (1995) Genetic algorithms with multi-parent recombination. : In this paper we investigate genetic algorithms where more than two parents are involved in the recombination operation. In particular, we introduce gene scanning as a reproduction mechanism that generalizes classical crossovers, such as n-point crossover or uniform crossover, and is applicable to an arbitrary number (two or more) of parents. We performed extensive tests for optimizing numerical functions, the TSP and graph coloring to observe the effect of different numbers of parents. The experiments show that 2-parent recombination is outperformed when using more parents on the classical DeJong functions. For the other problems the results are not conclusive, in some cases 2 parents are optimal, while in some others more parents are better. Target text information: Solving 3-SAT by GAs adapting constraint weights. : Handling NP complete problems with GAs is a great challenge. In particular the presence of constraints makes finding solutions hard for a GA. In this paper we present a problem independent constraint handling mechanism, Stepwise Adaptation of Weights (SAW), and apply it for solving the 3-SAT problem. Our experiments prove that the SAW mechanism substantially increases GA performance. Furthermore, we compare our SAW-ing GA with the best heuristic technique we could trace, WGSAT, and conclude that the GA is superior to the heuristic method. 
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,307
test
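A compact reading of the Stepwise Adaptation of Weights mechanism from the target abstract above: a (1+1) evolutionary loop optimizes a weighted count of satisfied clauses, and every few steps the weights of the clauses that the current best assignment still violates are increased. The tiny instance, the adaptation period, and the mutation rate below are illustrative assumptions; the paper's exact parameter settings and its comparison to WGSAT are not reproduced here.

    import random

    def sat_flags(assignment, clauses):
        # A clause is a tuple of literals: positive integer i means variable i, negative means its negation.
        return [any((lit > 0) == assignment[abs(lit) - 1] for lit in clause) for clause in clauses]

    def weighted_fitness(assignment, clauses, weights):
        return sum(w for sat, w in zip(sat_flags(assignment, clauses), weights) if sat)

    def saw_solve(clauses, n_vars, steps=5000, adapt_every=250, seed=0):
        # Stepwise Adaptation of Weights: every adapt_every steps, bump the weights of the clauses
        # that the current best assignment still violates, reshaping the fitness landscape.
        random.seed(seed)
        flip_p = 1.0 / n_vars
        weights = [1] * len(clauses)
        best = [random.random() < 0.5 for _ in range(n_vars)]
        for t in range(steps):
            child = [b ^ (random.random() < flip_p) for b in best]     # (1+1)-style bit-flip mutation
            if weighted_fitness(child, clauses, weights) >= weighted_fitness(best, clauses, weights):
                best = child
            if all(sat_flags(best, clauses)):
                return best, t
            if (t + 1) % adapt_every == 0:
                for i, sat in enumerate(sat_flags(best, clauses)):
                    if not sat:
                        weights[i] += 1                                # the SAW update
        return None, steps

    # Tiny satisfiable 3-SAT instance over four variables.
    clauses = [(1, 2, -3), (-1, 3, 4), (2, -4, 3), (-2, -3, 4), (1, -2, -4)]
    print(saw_solve(clauses, n_vars=4))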
1-hop neighbor's text information: Robo-shepherd: Learning complex robotic behaviors. : This paper reports on recent results using genetic algorithms to learn decision rules for complex robot behaviors. The method involves evaluating hypothetical rule sets on a simulator and applying simulated evolution to evolve more effective rules. The main contributions of this paper are (1) the task learned is a complex behavior involving multiple mobile robots, and (2) the learned rules are verified through experiments on operational mobile robots. The case study involves a shepherding task in which one mobile robot attempts to guide another robot to a specified area. 1-hop neighbor's text information: Improving tactical plans with genetic algorithms. : 1-hop neighbor's text information: Cooperative Bayesian and Case-Based Reasoning for Solving Multiagent Planning Tasks: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner. Target text information: "Learning robot behaviors using genetic algorithms," : Genetic Algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the actual robot. The approach to learning behaviors for robots described here reflects a particular methodology for learning via a simulation model. The motivation is that making mistakes on real systems may be costly or dangerous. In addition, time constraints might limit the number of experiences during learning in the real world, while in many cases, the simulation model can be made to run faster than real time. Since learning may require experimenting with behaviors that might occasionally produce unacceptable results if applied to the real world, or might require too much time in the real environment, we assume that hypothetical behaviors will be evaluated in a simulation model (the off-line system). As illustrated in Figure 1, the current best behavior can be placed in the real, on-line system, while learning continues in the off-line system [1]. The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. The expectation is that behaviors learned in these simulations will be useful in real-world environments. Previous studies have illustrated that knowledge learned under simulation is robust and might be applicable to the real world if the simulation is more general (i.e. has more noise, more varied conditions, etc.) than the real world environment [2]. Where this is not possible, it is important to identify the differences between the simulation and the world and note the effect upon the learning process. The research reported here continues to examine this hypothesis.
The next section very briefly explains the learning algorithm (and gives pointers to where more extensive documentation can be found). After that, the actual robot is described. Then we describe the simulation of the robot. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
961
test
1-hop neighbor's text information: "Exploration and model building in mobile robot domains", : I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for knowledge transfer across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low confidence regions. An efficient dynamic programming method is employed in background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real-time. It has been operating successfully in an office building environment for periods up to hours. 1-hop neighbor's text information: Lazy Acquisition of Place Knowledge: In this paper we define the task of place learning and describe one approach to this problem. Our framework represents distinct places as evidence grids, a probabilistic description of occupancy. Place recognition relies on nearest neighbor classification, augmented by a registration process to correct for translational differences between the two grids. The learning mechanism is lazy in that it involves the simple storage of inferred evidence grids. Experimental studies with physical and simulated robots suggest that this approach improves place recognition with experience, that it can handle significant sensor noise, that it benefits from improved quality in stored cases, and that it scales well to environments with many distinct places. Additional studies suggest that using historical information about the robot's path through the environment can actually reduce recognition accuracy. Previous researchers have studied evidence grids and place learning, but they have not combined these two powerful concepts, nor have they used systematic experimentation to evaluate their methods' abilities. Target text information: Lazy acquisition of place knowledge. : In this paper we define the task of place learning and describe one approach to this problem. The framework represents distinct places using evidence grids, a probabilistic description of occupancy. Place recognition relies on case-based classification, augmented by a registration process to correct for translations. The learning mechanism is also similar to that in case-based systems, involving the simple storage of inferred evidence grids. Experimental studies with both physical and simulated robots suggest that this approach improves place recognition with experience, that it can handle significant sensor noise, and that it scales well to increasing numbers of places. Previous researchers have studied evidence grids and place learning, but they have not combined these two powerful concepts, nor have they used the experimental methods of machine learning to evaluate their methods' abilities. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,200
test
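The case-based place recognition described above (store evidence grids, register a new grid against each stored one, return the closest match) can be sketched directly. The grid sizes, the brute-force shift search standing in for registration (np.roll wraps around, which a real registration step would not), and the random "hallway" and "office" grids below are all assumptions for illustration.

    import numpy as np

    def registration_distance(grid_a, grid_b, max_shift=2):
        # Compare two occupancy grids under small translations and keep the best match; np.roll
        # wraps around at the borders, which a real registration step would not do.
        best = np.inf
        for dx in range(-max_shift, max_shift + 1):
            for dy in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(grid_b, dx, axis=0), dy, axis=1)
                best = min(best, float(np.mean((grid_a - shifted) ** 2)))
        return best

    def recognize_place(query_grid, stored_cases):
        # Lazy, case-based place recognition: return the label of the nearest stored evidence grid.
        return min(stored_cases, key=lambda label: registration_distance(query_grid, stored_cases[label]))

    rng = np.random.default_rng(0)
    hallway = (rng.random((12, 12)) < 0.2).astype(float)
    office = (rng.random((12, 12)) < 0.6).astype(float)
    stored = {'hallway': hallway, 'office': office}

    noisy_query = np.roll(hallway, 1, axis=1) + rng.normal(0, 0.1, hallway.shape)   # shifted, noisy revisit
    print(recognize_place(noisy_query, stored))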
1-hop neighbor's text information: A Decision-theoretic Generalization of On-line Learning and an Application to Boosting. : We consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth [10] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R^n 1-hop neighbor's text information: Experiments with a New Boosting Algorithm. : In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem. Target text information: Pruning Adaptive Boosting ICML-97 Final Draft: The boosting algorithm AdaBoost, developed by Freund and Schapire, has exhibited outstanding performance on several benchmark problems when using C4.5 as the "weak" algorithm to be "boosted." Like other ensemble learning approaches, AdaBoost constructs a composite hypothesis by voting many individual hypotheses. In practice, the large amount of memory required to store these hypotheses can make ensemble methods hard to deploy in applications. This paper shows that by selecting a subset of the hypotheses, it is possible to obtain nearly the same levels of performance as the entire set. The results also provide some insight into the behavior of AdaBoost. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
344
val
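The record above turns on AdaBoost and on voting only a subset of its hypotheses. The sketch below is textbook AdaBoost with decision stumps, and "pruning" is shown in the crudest possible way, by truncating the ensemble to a prefix; the synthetic data, the round counts, and this prefix rule are assumptions, not the subset-selection procedure evaluated in the target paper.

    import numpy as np

    def train_stump(X, y, w):
        # Weighted decision stump: the (feature, threshold, sign) with the lowest weighted error.
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] <= thr, sign, -sign)
                    err = float(np.sum(w[pred != y]))
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        return best

    def stump_predict(X, j, thr, sign):
        return np.where(X[:, j] <= thr, sign, -sign)

    def adaboost(X, y, rounds=25):
        w = np.full(len(y), 1.0 / len(y))
        ensemble = []
        for _ in range(rounds):
            err, j, thr, sign = train_stump(X, y, w)
            err = max(err, 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)          # weight of this hypothesis in the vote
            pred = stump_predict(X, j, thr, sign)
            w *= np.exp(-alpha * y * pred)                 # multiplicative example-weight update
            w /= w.sum()
            ensemble.append((alpha, j, thr, sign))
        return ensemble

    def vote(ensemble, X):
        score = sum(alpha * stump_predict(X, j, thr, sign) for alpha, j, thr, sign in ensemble)
        return np.sign(score)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    ensemble = adaboost(X, y)
    for k in (5, 10, 25):                                  # a truncated ("pruned") ensemble versus the full one
        print(k, round(float(np.mean(vote(ensemble[:k], X) == y)), 3))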
1-hop neighbor's text information: Learning semantic grammars with constructive inductive logic programming. : Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and out-perform previous approaches based on connectionist techniques. Target text information: Combining Top-down and Bottom-up Techniques in Inductive Logic Programming, : This paper describes a new method for inducing logic programs from examples which attempts to integrate the best aspects of existing ILP methods into a single coherent framework. In particular, it combines a bottom-up method similar to Golem with a top-down method similar to Foil. It also includes a method for predicate invention similar to Champ and an elegant solution to the "noisy oracle" problem which allows the system to learn recursive programs without requiring a complete set of positive examples. Systematic experimental comparisons to both Golem and Foil on a range of problems are used to clearly demonstrate the advantages of the approach. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,735
test
1-hop neighbor's text information: Concept learning and flexible weighting. : We previously introduced an exemplar model, named GCM-ISW, that exploits a highly flexible weighting scheme. Our simulations showed that it records faster learning rates and higher asymptotic accuracies on several artificial categorization tasks than models with more limited abilities to warp input spaces. This paper extends our previous work; it describes experimental results that suggest human subjects also invoke such highly flexible schemes. In particular, our model provides significantly better fits than models with less flexibility, and we hypothesize that humans selectively weight attributes depending on an item's location in the input space. We need more flexible models of concept learning. Many theories of human concept learning posit that concepts are represented by prototypes (Reed, 1972) or exemplars (Medin & Schaffer, 1978). Prototype models represent concepts by the "best example" or "central tendency" of the concept. 1 A new item belongs in a category C if it is relatively similar to C's prototype. Prototype models are relatively inflexible; they discard a great deal of information that people use during concept learning (e.g., the number of exemplars in a concept (Homa & Cultice, 1984), the variability of features (Fried & Holyoak, 1984), correlations between features (Medin et al., 1982), and the particular exemplars used (Whittlesea, 1987)). Target text information: Improving minority class prediction using case-specific feature weights. : This paper addresses the problem of handling skewed class distributions within the case-based learning (CBL) framework. We first present as a baseline an information-gain-weighted CBL algorithm and apply it to three data sets from natural language processing (NLP) with skewed class distributions. Although overall performance of the baseline CBL algorithm is good, we show that the algorithm exhibits poor performance on minority class instances. We then present two CBL algorithms designed to improve the performance of minority class predictions. Each variation creates test-case-specific feature weights by first observing the path taken by the test case in a decision tree created for the learning task, and then using path-specific information gain values to create an appropriate weight vector for use during case retrieval. When applied to the NLP data sets, the algorithms are shown to significantly increase the accuracy of minority class predictions while maintaining or improving overall classification accuracy. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,398
test
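The baseline described in the target abstract above, an information-gain-weighted case-based learner, can be sketched as weighted nearest-neighbour retrieval. The toy binary data, the 1-NN choice, and the use of one global weight vector (rather than the paper's case-specific, decision-tree-path-based weights) are illustrative assumptions.

    import numpy as np
    from collections import Counter

    def entropy(labels):
        counts = np.array(list(Counter(labels.tolist()).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(feature, labels):
        # Gain of one discrete feature with respect to the class labels.
        total = entropy(labels)
        cond = sum((feature == v).mean() * entropy(labels[feature == v]) for v in set(feature.tolist()))
        return total - cond

    def weighted_nn_predict(query, X, y, weights):
        # 1-nearest-neighbour retrieval with per-feature weights (here: one global weight vector).
        dists = np.sqrt((((X - query) ** 2) * weights).sum(axis=1))
        return y[np.argmin(dists)]

    # Toy discrete data: feature 0 determines the class, feature 1 is pure noise.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(30, 2))
    y = X[:, 0].copy()
    weights = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    weights = weights / (weights.sum() + 1e-12)
    print(weights, weighted_nn_predict(np.array([1, 0]), X, y, weights))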
1-hop neighbor's text information: A formal analysis of the role of multi-point crossover in genetic algorithms. : On the basis of early theoretical and empirical studies, genetic algorithms have typically used 1 and 2-point crossover operators as the standard mechanisms for implementing recombination. However, there have been a number of recent studies, primarily empirical in nature, which have shown the benefits of crossover operators involving a higher number of crossover points. From a traditional theoretical point of view, the most surprising of these new results relate to uniform crossover, which involves on the average L / 2 crossover points for strings of length L. In this paper we extend the existing theoretical results in an attempt to provide a broader explanatory and predictive theory of the role of multi-point crossover in genetic algorithms. In particular, we extend the traditional disruption analysis to include two general forms of multi-point crossover: n-point crossover and uniform crossover. We also analyze two other aspects of multi-point crossover operators, namely, their recombination potential and exploratory power. The results of this analysis provide a much clearer view of the role of multi-point crossover in genetic algorithms. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested. 1-hop neighbor's text information: Crossover or Mutation?: Genetic algorithms rely on two genetic operators: crossover and mutation. Although there exists a large body of conventional wisdom concerning the roles of crossover and mutation, these roles have not been captured in a theoretical fashion. For example, it has never been theoretically shown that mutation is in some sense "less powerful" than crossover or vice versa. This paper provides some answers to these questions by theoretically demonstrating that there are some important characteristics of each operator that are not captured by the other. 1-hop neighbor's text information: User's Guide to the PGAPack Parallel Genetic Algorithm Library Version 0.2. : Target text information: On the Virtues of Parameterized Uniform Crossover, : Traditionally, genetic algorithms have relied upon 1 and 2-point crossover operators. Many recent empirical studies, however, have shown the benefits of higher numbers of crossover points. Some of the most intriguing recent work has focused on uniform crossover, which involves on the average L/2 crossover points for strings of length L. Theoretical results suggest that, from the view of hyperplane sampling disruption, uniform crossover has few redeeming features. However, a growing body of experimental evidence suggests otherwise. In this paper, we attempt to reconcile these opposing views of uniform crossover and present a framework for understanding its virtues. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,693
test
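Parameterized uniform crossover, the operator at the centre of the target abstract above, is a one-loop sketch: each gene is exchanged between the two parents independently with some probability, so a value of 0.5 recovers the classical uniform operator and smaller values give a less disruptive one. The all-zero and all-one parents and the particular probabilities below are only a demonstration.

    import random

    def parameterized_uniform_crossover(parent_a, parent_b, p_swap=0.5):
        # Each gene is exchanged between the parents independently with probability p_swap;
        # p_swap = 0.5 is classical uniform crossover, smaller values are less disruptive.
        child_a, child_b = list(parent_a), list(parent_b)
        for i in range(len(child_a)):
            if random.random() < p_swap:
                child_a[i], child_b[i] = child_b[i], child_a[i]
        return ''.join(child_a), ''.join(child_b)

    random.seed(0)
    a, b = '0' * 16, '1' * 16
    for p in (0.1, 0.5):
        kid_a, kid_b = parameterized_uniform_crossover(a, b, p_swap=p)
        print(p, kid_a, kid_b, 'genes exchanged:', kid_a.count('1'))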
1-hop neighbor's text information: Growing Cell Structures A Self-Organizing Network for Unsupervised and Supervised Learning, : We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process which also includes occasional removal of units. The second variant of the model is a supervised learning method which results from the combination of the abovementioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible in contrast to earlier approaches to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks which generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented which are better than any results previously published. * submitted for publication 1-hop neighbor's text information: Pruning recurrent neural networks for improved generalization performance. : Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic which significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that if rules are extracted from networks trained to recognize these strings, that rules extracted after pruning are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state triple parity grammar. Further simulations indicate that this pruning method can give generalization performance superior to that obtained by training with weight decay. 1-hop neighbor's text information: Biological Metaphor and the design of modular artificial neural networks. : Target text information: Evolving Artificial Neural Networks using the Baldwin Effect: This paper describes how through simple means a genetic search towards optimal neural network architectures can be improved, both in the convergence speed as in the quality of the final result. This result can be theoretically explained with the Baldwin effect, which is implemented here not just by the learning process of the network alone, but also by changing the network architecture as part of the learning procedure. This can be seen as a combination of two different techniques, both helping and improving on simple genetic search. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
647
test
1-hop neighbor's text information: "Learning and evolution in neural networks," : 1-hop neighbor's text information: "Adding Learning to the Cellular development of Neural Networks: Evolution and the Baldwin Effect," : This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms. Direct Encoding encodes the weights for an a priori fixed neural network architecture. Cellular Encoding encodes both weights and the architecture of the neural network. In previous studies, Direct Encoding and Cellular Encoding have been used to create neural networks for balancing 1 and 2 poles attached to a cart on a fixed track. The poles are balanced by a controller that pushes the cart to the left or the right. In some cases velocity information about the pole and cart is provided as an input; in other cases the network must learn to balance a single pole without velocity information. A careful study of the behavior of these systems suggests that it is possible to balance a single pole with velocity information as an input and without learning to compute the velocity. A new fitness function is introduced that forces the neural network to compute the velocity. By using this new fitness function and tuning the syntactic constraints used with cellular encoding, we achieve a tenfold speedup over our previous study and solve a more difficult problem: balancing two poles when no information about the velocity is provided as input. 1-hop neighbor's text information: Adaptive global optimization with local search. : Target text information: "The Role of Development in Genetic Algorithms." : Technical Report Number CS94-394 Computer Science and Engineering, U.C.S.D. Abstract The developmental mechanisms transforming genotypic to phenotypic forms are typically omitted in formulations of genetic algorithms (GAs) in which these two representational spaces are identical. We argue that a careful analysis of developmental mechanisms is useful when understanding the success of several standard GA techniques, and can clarify the relationships between more recently proposed enhancements. We provide a framework which distinguishes between two developmental mechanisms | learning and maturation | while also showing several common effects on GA search. This framework is used to analyze how maturation and local search can change the dynamics of the GA. We observe that in some contexts, maturation and local search can be incorporated into the fitness evaluation, but illustrate reasons for considering them seperately. Further, we identify contexts in which maturation and local search can be distinguished from the fitness evaluation. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,381
test
1-hop neighbor's text information: Learning Switching Concepts: We consider learning in situations where the function used to classify examples may switch back and forth between a small number of different concepts during the course of learning. We examine several models for such situations: oblivious models in which switches are made independent of the selection of examples, and more adversarial models in which a single adversary controls both the concept switches and example selection. We show relationships between the more benign models and the p-concepts of Kearns and Schapire, and present polynomial-time algorithms for learning switches between two k-DNF formulas. For the most adversarial model, we present a model of success patterned after the popular competitive analysis used in studying on-line algorithms. We describe a randomized query algorithm for such adversarial switches between two monotone disjunctions that is "1-competitive" in that the total number of mistakes plus queries is with high probability bounded by the number of switches plus some fixed polynomial in n (the number of variables). We also use notions described here to provide sufficient conditions under which learning a p-concept class "with a decision rule" implies being able to learn the class "with a model of probability." 1-hop neighbor's text information: Boosting a Weak Learning Algorithm by Majority. : We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper "The strength of weak learnability", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances. 1-hop neighbor's text information: Long. Prediction, learning, uniform convergence, and scale-sensitive dimensions, : We present a new general-purpose algorithm for learning classes of [0; 1]-valued functions in a generalization of the prediction model, and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler and to Benedek and Itai, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. 
For each ε > 0, we establish weaker sufficient and stronger necessary conditions for a class of [0; 1]-valued functions to be agnostically learnable to within ε, and to be an ε-uniform Glivenko-Cantelli class. Target text information: Efficient distribution-free learning of probabilistic concepts. : In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior; thus, the same input may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. We adopt from the Valiant model of learning [27] the demands that learning algorithms be efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. In addition to giving many efficient algorithms for learning natural classes of p-concepts, we study and develop in detail an underlying theory of learning p-concepts. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
927
test
1-hop neighbor's text information: NP-Completeness of Minimum Rule Sets: Rule induction systems seek to generate rule sets which are optimal in the complexity of the rule set. This paper develops a formal proof of the NP-Completeness of the problem of generating the simplest rule set (MIN RS) which accurately predicts examples in the training set for a particular type of generalization algorithm and complexity measure. The proof is then informally extended to cover a broader spectrum of complexity measures and learning algorithms. Target text information: Martinez (1993). The Design and Evaluation of a Rule Induction Algorithm. : This paper appeared in Proceedings of the 6th Australian Joint Conference on Artificial Intelligence, Melbourne, Australia, 17 Nov. 1993, pp. 348-355. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,550
test
1-hop neighbor's text information: Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms. : The term "bias" is widely used|and with different meanings|in the fields of machine learning and statistics. This paper clarifies the uses of this term and shows how to measure and visualize the statistical bias and variance of learning algorithms. Statistical bias and variance can be applied to diagnose problems with machine learning bias, and the paper shows four examples of this. Finally, the paper discusses methods of reducing bias and variance. Methods based on voting can reduce variance, and the paper compares Breiman's bagging method and our own tree randomization method for voting decision trees. Both methods uniformly improve performance on data sets from the Irvine repository. Tree randomization yields perfect performance on the Letter Recognition task. A weighted nearest neighbor algorithm based on the infinite bootstrap is also introduced. In general, decision tree algorithms have moderate-to-high variance, so an important implication of this work is that variance|rather than appropriate or inappropriate machine learning bias|is an important cause of poor performance for decision tree algorithms. 1-hop neighbor's text information: Data-oriented methods for grapheme-to-phoneme conversion. : It is traditionally assumed that various sources of linguistic knowledge and their interaction should be formalised in order to be able to convert words into their phonemic representations with reasonable accuracy. We show that using supervised learning techniques, based on a corpus of transcribed words, the same and even better performance can be achieved, without explicit modeling of linguistic knowledge. In this paper we present two instances of this approach. A first model implements a variant of instance-based learning, in which a weighed similarity metric and a database of prototypical exemplars are used to predict new mappings. In the second model, grapheme-to-phoneme mappings are looked up in a compressed text-to-speech lexicon (table lookup) enriched with default mappings. We compare performance and accuracy of these approaches to a connectionist (backpropagation) approach and to the linguistic knowledge based approach. 1-hop neighbor's text information: Learning complex boolean functions : Algorithms and applications. : The most commonly used neural network models are not well suited to direct digital implementations because each node needs to perform a large number of operations between floating point values. Fortunately, the ability to learn from examples and to generalize is not restricted to networks of this type. Indeed, networks where each node implements a simple Boolean function (Boolean networks) can be designed in such a way as to exhibit similar properties. Two algorithms that generate Boolean networks from examples are presented. The results show that these algorithms generalize very well in a class of problems that accept compact Boolean network descriptions. The techniques described are general and can be applied to tasks that are not known to have that characteristic. Two examples of applications are presented: image reconstruction and hand-written character recognition. Target text information: Error-correcting output codes: A general method for improving multiclass inductive learning programs. 
: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying large collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to this problem include (a) direct application of multiclass algorithms such as the decision-tree algorithms ID3 and CART, (b) application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and (c) application of binary concept learning algorithms with distributed output codes such as those employed by Sejnowski and Rosenberg in the NETtalk system. This paper compares these three approaches to a new technique in which BCH error-correcting codes are employed as a distributed output representation. We show that these output representations improve the performance of ID3 on the NETtalk task and of backpropagation on an isolated-letter speech-recognition task. These results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multi-class problems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,711
test
1-hop neighbor's text information: Bumptrees for Efficient Function, Constraint, and Classification Learning, : A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. 1-hop neighbor's text information: Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation, : Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, lack of a distance metric between various models reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models, and concentrating the computational effort at differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems. 1-hop neighbor's text information: Prototype and feature selection by sampling and random mutation hill climbing algorithms. : With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term prototypes refers to the reference instances used in a nearest neighbor computation the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes. Target text information: : Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. 
Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors, or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same flexibility as a conventional linear search, but at greatly reduced computational cost. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
783
test
1-hop neighbor's text information: Using qualitative relationships for bounding probability distributions. : We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well. 1-hop neighbor's text information: Incremental tradeoff resolution in qualitative probabilistic networks. : Qualitative probabilistic reasoning in a Bayesian network often reveals tradeoffs: relationships that are ambiguous due to competing qualitative influences. We present two techniques that combine qualitative and numeric probabilistic reasoning to resolve such tradeoffs, inferring the qualitative relationship between nodes in a Bayesian network. The first approach incrementally marginalizes nodes that contribute to the ambiguous qualitative relationships. The second approach evaluates approximate Bayesian networks for bounds of probability distributions, and uses these bounds to determine the qualitative relationships in question. This approach is also incremental in that the algorithm refines the state spaces of random variables for tighter bounds until the qualitative relationships are resolved. Both approaches provide systematic methods for tradeoff resolution at potentially lower computational cost than application of purely numeric methods. 1-hop neighbor's text information: Sonderforschungsbereich 314 Künstliche Intelligenz Wissensbasierte Systeme KI-Labor am Lehrstuhl für Informatik IV Numerical: Target text information: State-space abstraction for anytime evaluation of probabilistic networks. : One important factor determining the computational complexity of evaluating a probabilistic network is the cardinality of the state spaces of the nodes. By varying the granularity of the state spaces, one can trade off accuracy in the result for computational efficiency. We present an anytime procedure for approximate evaluation of probabilistic networks based on this idea. On application to some simple networks, the procedure exhibits a smooth improvement in approximation quality as computation time increases. This suggests that state-space abstraction is one more useful control parameter for designing real-time probabilistic reasoners. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,236
test
1-hop neighbor's text information: (1998) "Sequential importance sampling for nonparametric Bayes models: The next generation", : There are two generations of Gibbs sampling methods for semi-parametric models involving the Dirichlet process. The first generation suffered from a severe drawback; namely that the locations of the clusters, or groups of parameters, could essentially become fixed, moving only rarely. Two strategies that have been proposed to create the second generation of Gibbs samplers are integration and appending a second stage to the Gibbs sampler wherein the cluster locations are moved. We show that these same strategies are easily implemented for the sequential importance sampler, and that the first strategy dramatically improves results. As in the case of Gibbs sampling, these strategies are applicable to a much wider class of models. They are shown to provide more uniform importance sampling weights and lead to additional Rao-Blackwellization of estimators. Steve MacEachern is Associate Professor, Department of Statistics, Ohio State University, Merlise Clyde is Assistant Professor, Institute of Statistics and Decision Sciences, Duke University, and Jun Liu is Assistant Professor, Department of Statistics, Stanford University. The work of the second author was supported in part by the National Science Foundation grants DMS-9305699 and DMS-9626135, and that of the last author by the National Science Foundation grants DMS-9406044, DMS-9501570, and the Terman Fellowship. 1-hop neighbor's text information: "Sampling from Multimodal Distributions Using Tempered Transitions," : Technical Report No. 9421, Department of Statistics, University of Toronto Abstract. I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently-developed method of "simulated tempering", the "tempered transition" method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that unfortunately is cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are "deceptive". Target text information: Importance Sampling: Technical Report No. 9805, Department of Statistics, University of Toronto Abstract. Simulated annealing | moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions | has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. 
The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
807
test
1-hop neighbor's text information: Learning sequential tasks by incrementally adding higher orders, : An incremental, higher-order, non-recurrent network combines two properties found to be useful for learning sequential tasks: higher-order connections and incremental introduction of new units. The network adds higher orders when needed by adding new units that dynamically modify connection weights. Since the new units modify the weights at the next time-step with information from the previous step, temporal tasks can be learned without the use of feedback, thereby greatly simplifying training. Furthermore, a theoretically unlimited number of units can be added to reach into the arbitrarily distant past. Experiments with the Reber grammar have demonstrated speedups of two orders of magnitude over recurrent networks. 1-hop neighbor's text information: Induction of multiscale temporal structure. : Learning structure in temporally-extended sequences is a difficult computational problem because only a fraction of the relevant information is available at any instant. Although variants of back propagation can in principle be used to find structure in sequences, in practice they are not sufficiently powerful to discover arbitrary contingencies, especially those spanning long temporal intervals or involving high order statistics. For example, in designing a connectionist network for music composition, we have encountered the problem that the net is able to learn musical structure that occurs locally in time (e.g., relations among notes within a musical phrase) but not structure that occurs over longer time periods (e.g., relations among phrases). To address this problem, we require a means of constructing a reduced description of the sequence that makes global aspects more explicit or more readily detectable. I propose to achieve this using hidden units that operate with different time constants. Simulation experiments indicate that slower time-scale hidden units are able to pick up global structure, structure that simply cannot be learned by standard back propagation. Many patterns in the world are intrinsically temporal, e.g., speech, music, the unfolding of events. Recurrent neural net architectures have been devised to accommodate time-varying sequences. For example, the architecture shown in Figure 1 can map a sequence of inputs to a sequence of outputs. Learning structure in temporally-extended sequences is a difficult computational problem because the input pattern may not contain all the task-relevant information at any instant. Target text information: Ring. Sequence learning with incremental higher-order neural networks. : An incremental, higher-order, non-recurrent neural-network combines two properties found to be useful for sequence learning in neural-networks: higher-order connections and the incremental introduction of new units. The incremental, higher-order neural-network adds higher orders when needed by adding new units that dynamically modify connection weights. The new units modify the weights at the next time-step with information from the previous step. Since a theoretically unlimited number of units can be added to the network, information from the arbitrarily distant past can be brought to bear on each prediction. Temporal tasks can thereby be learned without the use of feedback, in contrast to recurrent neural-networks. Because there are no recurrent connections, training is simple and fast. 
Experiments have demonstrated speedups of two orders of magnitude over recurrent networks. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,786
test
1-hop neighbor's text information: Inversion in time. : Inversion of multilayer synchronous networks is a method which tries to answer questions like "What kind of input will give a desired output?" or "Is it possible to get a desired output (under special input/output constraints)?". We will describe two methods of inverting a connectionist network. Firstly, we extend inversion via backpropagation (Linden/Kindermann [4], Williams [11]) to recurrent (Elman [1], Jordan [3], Mozer [5], Williams/Zipser [10]), time-delayed (Waibel et al. [9]) and discrete versions of continuous networks (Pineda [7], Pearlmutter [6]). The result of inversion is an input vector. The corresponding output vector is equal to the target vector except for a small remainder. The knowledge of those attractors may help to understand the function and the generalization qualities of connectionist systems of this kind. Secondly, we introduce a new inversion method for proving the non-existence of an input combination under special constraints, e.g. in a subspace of the input space. This method works by iterative exclusion of invalid activation values. It might be a helpful way to judge the properties of a trained network. We conclude with simulation results of three different tasks: XOR, Morse signal decoding and handwritten digit recognition. Target text information: ADAPTIVE LOOK-AHEAD PLANNING problem of finding good initial plans is solved by the use of: We present a new adaptive connectionist planning method. By interaction with an environment a world model is progressively constructed using the backpropagation learning algorithm. The planner constructs a look-ahead plan by iteratively using this model to predict future reinforcements. Future reinforcement is maximized to derive suboptimal plans, thus determining good actions directly from the knowledge of the model network (strategic level). This is done by gradient descent in action space. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
192
test
1-hop neighbor's text information: Toward optimal feature selection. : In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computation-ally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively han dles datasets with large numbers of features. 1-hop neighbor's text information: Efficient Locally Weighted Polynomial Regression. : Locally weighted polynomial regression (LWPR) is a popular instance-based algorithm for learning continuous non-linear mappings. For more than two or three inputs and for more than a few thousand dat-apoints the computational expense of predictions is daunting. We discuss drawbacks with previous approaches to dealing with this problem, and present a new algorithm based on a multiresolution search of a quickly-constructible augmented kd-tree. Without needing to rebuild the tree, we can make fast predictions with arbitrary local weighting functions, arbitrary kernel widths and arbitrary queries. The paper begins with a new, faster, algorithm for exact LWPR predictions. Next we introduce an approximation that achieves up to a two-orders-of-magnitude speedup with negligible accuracy losses. Increasing a certain approximation parameter achieves greater speedups still, but with a correspondingly larger accuracy degradation. This is nevertheless useful during operations such as the early stages of model selection and locating optima of a fitted surface. We also show how the approximations can permit real-time query-specific optimization of the kernel width. We conclude with a brief discussion of potential extensions for tractable instance-based learning on datasets that are too large to fit in a com puter's main memory. 1-hop neighbor's text information: Prototype and feature selection by sampling and random mutation hill climbing algorithms. : With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term prototypes refers to the reference instances used in a nearest neighbor computation the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. 
Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes. Target text information: On the Greediness of Feature Selection Algorithms: Based on our analysis and experiments using real-world datasets, we find that the greediness of forward feature selection algorithms does not severely corrupt the accuracy of function approximation using the selected input features, but improves the efficiency significantly. Hence, we propose three greedier algorithms in order to further enhance the efficiency of the feature selection processing. We provide empirical results for linear regression, locally weighted regression and k-nearest-neighbor models. We also propose to use these algorithms to develop an offline Chinese and Japanese handwriting recognition system with automatically configured local models. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
174
train
1-hop neighbor's text information: Rationality and its Roles in Reasoning (extended version), : The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems. 1-hop neighbor's text information: Impediments to Universal Preference-Based Default Theories: Research on nonmonotonic and default reasoning has identified several important criteria for preferring alternative default inferences. The theories of reasoning based on each of these criteria may uniformly be viewed as theories of rational inference, in which the reasoner selects maximally preferred states of belief. Though researchers have noted some cases of apparent conflict between the preferences supported by different theories, it has been hoped that these special theories of reasoning may be combined into a universal logic of nonmonotonic reasoning. We show that the different categories of preferences conflict more than has been realized, and adapt formal results from social choice theory to prove that every universal theory of default reasoning will violate at least one reasonable principle of rational reasoning. Our results can be interpreted as demonstrating that, within the preferential framework, we cannot expect much improvement on the rigid lexicographic priority mechanisms that have been proposed for conflict resolution. 1-hop neighbor's text information: On Decision-Theoretic Foundations for Defaults: In recent years, considerable effort has gone into understanding default reasoning. Most of this effort concentrated on the question of entailment, i.e., what conclusions are warranted by a knowledge-base of defaults. Surprisingly, few works formally examine the general role of defaults. We argue that an examination of this role is necessary in order to understand defaults, and suggest a concrete role for defaults: Defaults simplify our decision-making process, allowing us to make fast, approximately optimal decisions by ignoring certain possible states. In order to formalize this approach, we examine decision making in the framework of decision theory. We use probability and utility to measure the impact of possible states on the decision-making process. We accept a default if it ignores states with small impact according to our measure. We motivate our choice of measures and show that the resulting formalization of defaults satisfies desired properties of defaults, namely cumulative reasoning. Finally, we compare our approach with Poole's decision-theoretic defaults, and show how both can be combined to form an attractive framework for reasoning about decisions. We make numerous assumptions each day: the car will start, the road will not be blocked, there will be heavy traffic at 5pm, etc. Many of these assumptions are defeasible; we are willing to retract them given sufficient evidence. Humans naturally state defaults and draw conclusions from default information. Hence, defaults seem to play an important part in common-sense reasoning. To use such statements, however, we need a formal understanding of what defaults represent and what conclusions they admit. The problem of default entailment (roughly, what conclusions we should draw from a knowledge-base of defaults) has attracted a great deal of attention. 
Many researchers attempt to find "context-free" patterns of default reasoning (e.g., [Kraus et al., 1990]). As this research shows, much can be done in this approach. We claim, however, that the utility of this approach is limited; to gain a better understanding of defaults, we need to understand in what situations we should be willing to state a default. Our main thesis is that an investigation of defaults should elaborate their role in the behavior of the reasoning agent. This role should allow us to examine when a default is appropriate in terms of its implications on the agent's overall performance. In this paper, we suggest a particular role for defaults and show how this role allows us to provide a semantics for defaults. Of course, we do not claim that this is the only role defaults can play. In many applications, the end result of reasoning is a choice of actions. Usually, this choice is not optimal; there is too much uncertainty about the state of the world and the effects of actions to allow for an examination of all possibilities. We suggest that one role of defaults lies in simplifying our decision-making process by stating assumptions that reduce the space of examined possibilities. More precisely, we suggest that a default φ → ψ is a license to ignore ¬ψ situations when our knowledge amounts to φ. One particular suggestion that can be understood in this light is ε-semantics [Pearl, 1989]. In ε-semantics, we accept a default φ → ψ if, given the knowledge φ, the probability of ¬ψ is very small. This small probability of the ¬ψ states gives us a license to ignore them. Although probability plays an important part in our decisions, we claim that we should also examine the utility of our actions. For example, while most people think that it is highly unlikely that they will die next year, they also believe that they should not accept this as a default assumption in the context of a decision as to whether or not to buy life insurance. In this context, the stakes are too high to ignore this outcome, even though it is unlikely. We suggest that the license to ignore a set should be given based on its impact on our decision. To paraphrase this view, we should accept Bird → Fly if assuming that the bird flies cannot get us into too much trouble. To formalize our intuitions we examine decision-making in the framework of decision theory [Luce and Raiffa, 1957]. Decision theory represents a decision problem using several components: a set of possible states, a probability measure over these sets, and a utility function that assigns to each action and state a numerical value. Classical decision theory then uses the expected utility of an action as a measure of its "goodness". Target text information: Constructive belief and rational representation. : It is commonplace in artificial intelligence to divide an agent's explicit beliefs into two parts: the beliefs explicitly represented or manifest in memory, and the implicitly represented or constructive beliefs that are repeatedly reconstructed when needed rather than memorized. Many theories of knowledge view the relation between manifest and constructive beliefs as a logical relation, with the manifest beliefs representing the constructive beliefs through a logic of belief. This view, however, limits the ability of a theory to treat incomplete or inconsistent sets of beliefs in useful ways. We argue that a more illuminating view is that belief is the result of rational representation. 
In this theory, the agent obtains its constructive beliefs by using its manifest beliefs and preferences to rationally (in the sense of decision theory) choose the most useful conclusions indicated by the manifest beliefs. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,928
val
1-hop neighbor's text information: "Learning sequential decision rules using simulation models and competition," : The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical decision rules from a simple flight simulator. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Several experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested. 1-hop neighbor's text information: "Learning robot behaviors using genetic algorithms," : Genetic Algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the The approach to learning behaviors for robots described here reflects a particular methodology for learning via a simulation model. The motivation is that making mistakes on real systems may be costly or dangerous. In addition, time constraints might limit the number of experiences during learning in the real world, while in many cases, the simulation model can be made to run faster than real time. Since learning may require experimenting with behaviors that might occasionally produce unacceptable results if applied to the real world, or might require too much time in the real environment, we assume that hypothetical behaviors will be evaluated in a simulation model (the off-line system). As illustrated in Figure 1, the current best behavior can be placed in the real, on-line system, while learning continues in the off-line system [1]. The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. The expectation is that behaviors learned in these simulations will be useful in real-world environments. Previous studies have illustrated that knowledge learned under simulation is robust and might be applicable to the real world if the simulation is more general (i.e. has more noise, more varied conditions, etc.) than the real world environment [2]. Where this is not possible, it is important to identify the differences between the simulation and the world and note the effect upon the learning process. The research reported here continues to examine this hypothesis. The next section very briefly explains the learning algorithm (and gives pointers to where more extensive documentation can be found). After that, the actual robot is described. Then we describe the simulation of the robot. The task _______________ actual robot. 1-hop neighbor's text information: "Genetic and Non-Genetic Operators in Alecsys," : It is well known that standard learning classifier systems, when applied to many different domains, exhibit a number of problems: payoff oscillation, difficult to regulate interplay between the reward system and the background genetic algorithm (GA), rule chains instability, default hierarchies instability, are only a few. ALECSYS is a parallel version of a standard learning classifier system (CS), and as such suffers of these same problems. In this paper we propose some innovative solutions to some of these problems. We introduce the following original features. Mutespec, a new genetic operator used to specialize potentially useful classifiers. 
Energy, a quantity introduced to measure global convergence in order to apply the genetic algorithm only when the system is close to a steady state. Dynamical adjustment of the classifiers set cardinality, in order to speed up the performance phase of the algorithm. We present simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source. Target text information: Robo-shepherd: Learning complex robotic behaviors. : This paper reports on recent results using genetic algorithms to learn decision rules for complex robot behaviors. The method involves evaluating hypothetical rule sets on a simulator and applying simulated evolution to evolve more effective rules. The main contributions of this paper are (1) the task learned is a complex behavior involving multiple mobile robots, and (2) the learned rules are verified through experiments on operational mobile robots. The case study involves a shepherding task in which one mobile robot attempts to guide another robot to a specified area. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,804
test
1-hop neighbor's text information: Veloso (1994). Planning and Learning by Analogical Reasoning. : Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found creating a mixed-initiative planning system fall into three categories: planning paradigms differ in human and machine planning; visualization of the plan and planning process is a complex, but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning, it supports visualization of both plan and the planning rationale, and it addresses the variance in the experience of the user by allowing the user to control the presentation of information. 1-hop neighbor's text information: Supporting combined human and machine planning: The Prodigy 4.0 User Interface Version 2.0 (Tech. : Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found creating a mixed-initiative planning system fall into three categories: planning paradigms differ in human and machine planning; visualization of the plan and planning process is a complex, but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning, it supports visualization of both plan and the planning rationale, and it addresses the variance in the experience of the user by allowing the user to control the presentation of information. * This research is sponsored as part of the DARPA/RL Knowledge Based Planning and Scheduling Initiative under grant number F30602-95-1-0018. 
A short version of this document appeared as Cox, M. T., & Veloso, M. M. (1997). Supporting combined human and machine planning: An interface for planning by analogical reasoning. In D. B. Leake & E. Plaza (Eds.), Case-Based Reasoning Research and Development: Second International Conference on Case-Based Reasoning (pp. 531-540). Berlin: Springer-Verlag. Target text information: Rationale-supported mixed-initiative case-based planning. : This paper introduces our work on mixed-initiative, rationale-supported planning. The work centers on the principled reuse and modification of past plans by exploiting their justification structure. The goal is to record as much as possible of the rationale underlying each planning decision in a mixed-initiative framework where human and machine planners interact. This rationale is used to determine which past plans are relevant to a new situation, to focus user's modification and replanning on different relevant steps when external circumstances dictate, and to ensure consistency in multi-user distributed scenarios. We build upon our previous work in Prodigy/Analogy, which incorporates algorithms to capture and reuse the rationale of an automated planner during its plan generation. To support a mixed-initiative environment, we have developed user interactive capabilities in the Prodigy planning and learning system. We are also working towards the integration of the rationale-supported plan reuse in Prodigy/Analogy with the plan retrieval and modification tools of ForMAT. Finally, we have focused on the user's input into the process of plan reuse, in particular when conditional planning is needed. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2,526
val
1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990. 1-hop neighbor's text information: Incremental induction of decision trees. : Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. Target text information: Geometric comparison of classifications and rule sets. : We present a technique for evaluating classifications by geometric comparison of rule sets. Rules are represented as objects in an n-dimensional hyperspace. The similarity of classes is computed from the overlap of the geometric class descriptions. The system produces a correlation matrix that indicates the degree of similarity between each pair of classes. The technique can be applied to classifications generated by different algorithms, with different numbers of classes and different attribute sets. Experimental results from a case study in a medical domain are included. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
2,142
test
1-hop neighbor's text information: Early stopping | but when? In Orr and Muller [1]. : Validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped before convergence to avoid the overfitting ("early stopping"). The exact criterion used for validation-based early stopping, however, is usually chosen in an ad-hoc fashion or training is stopped interactively. This trick describes how to select a stopping criterion in a systematic fashion; it is a trick for either speeding learning procedures or improving generalization, whichever is more important in the particular situation. An empirical investigation on multi-layer perceptrons shows that there exists a tradeoff between training time and generalization: From the given mix of 1296 training runs using different 12 problems and 24 different network architectures I conclude slower stopping criteria allow for small improvements in generalization (here: about 4% on average), but cost much more training time (here: about factor 4 longer on average). Target text information: Fast pruning using principal components. : We present a new algorithm for eliminating excess parameters and improving network generalization after supervised training. The method, "Principal Components Pruning (PCP)", is based on principal component analysis of the node activations of successive layers of the network. It is simple, cheap to implement, and effective. It requires no network retraining, and does not involve calculating the full Hessian of the cost function. Only the weight and the node activity correlation matrices for each layer of nodes are required. We demonstrate the efficacy of the method on a regression problem using polynomial basis functions, and on an economic time series prediction problem using a two-layer, feedforward network. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,503
train
1-hop neighbor's text information: Selecting input variables using mutual information and nonparametric density estimation. : In learning problems where a connectionist network is trained with a finite sized training set, better generalization performance is often obtained when unneeded weights in the network are eliminated. One source of unneeded weights comes from the inclusion of input variables that provide little information about the output variables. We propose a method for identifying and eliminating these input variables. The method first determines the relationship between input and output variables using nonparametric density estimation and then measures the relevance of input variables using the information theoretic concept of mutual information. We present results from our method on a simple toy problem and a nonlinear time series. 1-hop neighbor's text information: Predicting probability distributions: A connectionist approach. : Most traditional prediction techniques deliver the mean of the probability distribution (a single point). For multimodal processes, instead of predicting the mean of the probability distribution, it is important to predict the full distribution. This article presents a new connectionist method to predict the conditional probability distribution in response to an input. The main idea is to transform the problem from a regression to a classification problem. The conditional probability distribution network can perform both direct predictions and iterated predictions, a task which is specific for time series problems. We compare our method to fuzzy logic and discuss important differences, and also demonstrate the architecture on two time series. The first is the benchmark laser series used in the Santa Fe competition, a deterministic chaotic system. The second is a time series from a Markov process which exhibits structure on two time scales. The network produces multimodal predictions for this series. We compare the predictions of the network with a nearest-neighbor predictor and find that the conditional probability network is more than twice as likely a model. 1-hop neighbor's text information: Predictions with confidence intervals (local error bars). : We present a new method for obtaining local error bars, i.e., estimates of the confidence in the predicted value that depend on the input. We approach this problem of nonlinear regression in a maximum likelihood framework. We demonstrate our technique first on computer generated data with locally varying, normally distributed target noise. We then apply it to the laser data from the Santa Fe Time Series Competition. Finally, we extend the technique to estimate error bars for iterated predictions, and apply it to the exact competition task where it gives the best performance to date. Target text information: The Observer-Observation Dilemma in Neuro-Forecasting: Reliable Models From Unreliable Data Through CLEARNING: This paper introduces the idea of clearning, of simultaneously cleaning data and learning the underlying structure. The cleaning step can be viewed as top-down processing (the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). After discussing the statistical foundation of the proposed method from a maximum likelihood perspective, we apply clearning to a notoriously hard problem where benchmark performances are very well known: the prediction of foreign exchange rates. 
On the difficult 1993-1994 test period, clearning in conjunction with pruning yields an annualized return between 35 and 40% (out-of-sample), significantly better than an otherwise identical network trained without cleaning. The network was started with 69 inputs and 15 hidden units and ended up with only 39 non-zero weights between inputs and hidden units. The resulting ultra-sparse final architectures obtained with clearning and pruning are immune against overfitting, even on very noisy problems since the cleaned data allow for a simpler model. Apart from the very competitive performance, clearning gives insight into the data: we show how to estimate the overall signal-to-noise ratio of each input variable, and we show that error estimates for each pattern can be used to detect and remove outliers, and to replace missing or corrupted data by cleaned values. Clearning can be used in any nonlinear regression or classification problem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
668
val
1-hop neighbor's text information: Learning to Act using Real-Time Dynamic Programming. : The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526). 1-hop neighbor's text information: Learning from an automated training agent. : A learning agent employing reinforcement learning is hindered because it only receives the critic's sparse and weakly informative training information. We present an approach in which an automated training agent may also provide occasional instruction to the learner in the form of actions for the learner to perform. The learner has access to both the critic's feedback and the trainer's instruction. In the experiments, we vary the level of the trainer's interaction with the learner, from allowing the trainer to instruct the learner at almost every time step, to not allowing the trainer to respond at all. We also vary a parameter that controls how the learner incorporates the trainer's actions. The results show significant reductions in the average number of training trials necessary to learn to perform the task. Target text information: An Introspection Approach to Querying a Trainer: Technical Report 96-13 January 22, 1996 Abstract This paper introduces the Introspection Approach, a method by which a learning agent employing reinforcement learning can decide when to ask a training agent for instruction. When using our approach, we find that the same number of trainer's responses produced significantly faster learners than by having the learner ask for aid randomly. Guidance received via our approach is more informative than random guidance. Thus, we can reduce the interaction that the training agent has with the learning agent without reducing the speed with which the learner develops its policy. In fact, by being intelligent about when the learner asks for help, we can even increase the learning speed for the same level of trainer interaction. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
128
test
1-hop neighbor's text information: Proben1: A set of neural network benchmark problems and benchmarking rules. : Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison. 1-hop neighbor's text information: A study of experimental evaluations of neural network learning algorithms: Current research practice. : 113 articles about neural network learning algorithms published in 1993 and 1994 are examined for the amount of experimental evaluation they contain. Every third of them does employ not even a single realistic or real learning problem. Only 6% of all articles present results for more than one problem using real world data. Furthermore, one third of all articles does not present any quantitative comparison with a previously known algorithm. These results indicate that the quality of research in the area of neural network learning algorithms needs improvement. The publication standards should be raised and easily accessible collections of example problems be built. Contents Target text information: Comparing Adaptive and Non-Adaptive Connection Pruning With Pure Early Stopping: Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization, as is shown in this empirical study. However, an open problem in the pruning methods known today (OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. Results of statistical significance tests comparing autoprune, lprune, and static networks with early stopping are given, based on extensive experimentation with 14 different problems. The results indicate that training with pruning is often significantly better and rarely significantly worse than training with early stopping without pruning. Furthermore, lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
340
val
1-hop neighbor's text information: Learning stochastic feedforward networks. : Connectionist learning procedures are presented for "sigmoid" and "noisy-OR" varieties of stochastic feedforward network. These networks are in the same class as the "belief networks" used in expert systems. They represent a probability distribution over a set of visible variables using hidden variables to express correlations. Conditional probability distributions can be exhibited by stochastic simulation for use in tasks such as classification. Learning from empirical data is done via a gradient-ascent method analogous to that used in Boltzmann machines, but due to the feedforward nature of the connections, the negative phase of Boltzmann machine learning is unnecessary. Experimental results show that, as a result, learning in a sigmoid feedforward network can be faster than in a Boltzmann machine. These networks have other advantages over Boltzmann machines in pattern classification and decision making applications, and provide a link between work on connectionist learning and work on the representation of expert knowledge. 1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models 1-hop neighbor's text information: Mapping Bayesian networks to Boltzmann machines. : We study the task of finding a maximal a posteriori (MAP) instantiation of Bayesian network variables, given a partial value assignment as an initial constraint. This problem is known to be NP-hard, so we concentrate on a stochastic approximation algorithm, simulated annealing. This stochastic algorithm can be realized as a sequential process on the set of Bayesian network variables, where only one variable is allowed to change at a time. Consequently, the method can become impractically slow as the number of variables increases. We present a method for mapping a given Bayesian network to a massively parallel Boltzmann machine neural network architecture, in the sense that instead of using the normal sequential simulated annealing algorithm, we can use a massively parallel stochastic process on the Boltzmann machine architecture. The neural network updating process provably converges to a state which solves a given MAP task. Target text information: Unsupervised learning of distributions on binary vectors using two-layer networks. : We present a distribution model for binary vectors, called the influence combination model and show how this model can be used as the basis for unsupervised learning algorithms for feature selection. 
The model is closely related to the Harmonium model defined by Smolensky [RM86][Ch.6]. In the first part of the paper we analyze properties of this distribution representation scheme. We show that arbitrary distributions of binary vectors can be approximated by the combination model. We show how the weight vectors in the model can be interpreted as high order correlation patterns among the input bits. We compare the combination model with the mixture model and with principal component analysis. In the second part of the paper we present two algorithms for learning the combination model from examples. The first algorithm is based on gradient ascent. Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine. The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of projection pursuit density estimation. In the third part of the paper we give experimental results for these learning methods on synthetic data and on natural data of handwritten digit images. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,972
test
1-hop neighbor's text information: Importance Sampling: Technical Report No. 9805, Department of Statistics, University of Toronto Abstract. Simulated annealing | moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions | has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers. 1-hop neighbor's text information: Importance Sampling: Technical Report No. 9805, Department of Statistics, University of Toronto Abstract. Simulated annealing | moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions | has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers. Target text information: (1998) "Sequential importance sampling for nonparametric Bayes models: The next generation", : There are two generations of Gibbs sampling methods for semi-parametric models involving the Dirichlet process. The first generation suffered from a severe drawback; namely that the locations of the clusters, or groups of parameters, could essentially become fixed, moving only rarely. 
Two strategies that have been proposed to create the second generation of Gibbs samplers are integration and appending a second stage to the Gibbs sampler wherein the cluster locations are moved. We show that these same strategies are easily implemented for the sequential importance sampler, and that the first strategy dramatically improves results. As in the case of Gibbs sampling, these strategies are applicable to a much wider class of models. They are shown to provide more uniform importance sampling weights and lead to additional Rao-Blackwellization of estimators. Steve MacEachern is Associate Professor, Department of Statistics, Ohio State University, Merlise Clyde is Assistant Professor, Institute of Statistics and Decision Sciences, Duke University, and Jun Liu is Assistant Professor, Department of Statistics, Stanford University. The work of the second author was supported in part by the National Science Foundation grants DMS-9305699 and DMS-9626135, and that of the last author by the National Science Foundation grants DMS-9406044, DMS-9501570, and the Terman Fellowship. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,451
test
1-hop neighbor's text information: "Constructing deterministic finite-state automata in recurrent neural networks", : Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, i.e. the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a "programmed" neural network with O(n) neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature. 1-hop neighbor's text information: Turing computability with neural nets. : This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of less than 10 5 synchronously evolving processors, interconnected linearly. High-order connections are not required. 1-hop neighbor's text information: Vapnik-Chervonenkis dimension of neural nets. The Handbook of Brain Theory and Neural Networks (M. : Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedforward networks. However, recurrent networks are also widely used in learning applications, in particular when time is a relevant parameter. This paper provides lower and upper bounds for the VC dimension of such networks. Several types of activation functions are discussed, including threshold, polynomial, piecewise-polynomial and sigmoidal functions. The bounds depend on two independent parameters: the number w of weights in the network, and the length k of the input sequence. In contrast, for feedforward networks, VC dimension bounds can be expressed as a function of w only. An important difference between recurrent and feedforward nets is that a fixed recurrent net can receive inputs of arbitrary length. Therefore we are particularly interested in the case k w. Ignoring multiplicative constants, the main results say roughly the following: For architectures with activation = any fixed nonlinear polynomial, the VC dimension is wk. For architectures with activation = any fixed piecewise polynomial, the VC dimension is between wk and w 2 k. For architectures with activation = H (threshold nets), the VC dimension is between w log(k=w) and minfwk log wk; w 2 +w log wkg. Forthe standard sigmoid (x) = 1=(1 + e x ), the VC dimension is between wk and w 4 k 2 . Target text information: "On the effect of analog noise on discrete-time analog computations", : We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. 
We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,488
test
1-hop neighbor's text information: Applying Case Retrieval Nets to diagnostic tasks in technical domains. : This paper presents Object-directed Case Retrieval Nets, a memory model developed for an application of Case-Based Reasoning to the task of technical diagnosis. The key idea is to store cases, i.e. observed symptoms and diagnoses, in a network and to enhance this network with an object model encoding knowledge about the devices in the application domain. Target text information: `Case Retrieval Nets and cognitive modelling', : Implementation, and Results I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,179
val
1-hop neighbor's text information: Learning by Refining Algorithm Sketches, : In this paper we suggest a mechanism that improves significantly the performance of a top-down inductive logic programming (ILP) learning system. This improvement is achieved at the cost of giving to the system extra information that is not difficult to formulate. This information appears in the form of an algorithm sketch: an incomplete and somewhat vague representation of the computation related to a particular example. We describe which sketches are admissible, give details of the learning algorithm that exploits the information contained in the sketch. The experiments carried out with the implemented system (SKIL) have demonstrated the usefulness of the method and its potential in future applications. 1-hop neighbor's text information: Learning logical definitions from relations. : 1-hop neighbor's text information: Architecture for Iterative Learning of Recursive Definitions, : In this paper we are concerned with the problem of inducing recursive Horn clauses from small sets of training examples. The method of iterative bootstrap induction is presented. In the first step, the system generates simple clauses, which can be regarded as properties of the required definition. Properties represent generalizations of the positive examples, simulating the effect of having larger number of examples. Properties are used subsequently to induce the required recursive definitions. This paper describes the method together with a series of experiments. The results support the thesis that iterative bootstrap induction is indeed an effective technique that could be of general use in ILP. Target text information: Integrity Constraints in ILP using a Monte Carlo approach: Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely inductive program synthesis where relatively small and sparse example sets are a more realistic scenario. Integrity constraints are first order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy theorem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety of useful integrity constraints and uses a Monte Carlo strategy. It looks for inconsistencies by random generation of queries to the program. This method allows the use of integrity constraints instead of (or together with) negative examples. As a consequence programs to induce can be specified more rapidly by the user and the ILP system tends to obtain more accurate definitions. Average running times are not greatly affected by the use of integrity constraints compared to ground negative examples. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
307
test
1-hop neighbor's text information: 3-D vision and figure-ground separation by visual cortex. : 1-hop neighbor's text information: (1995) Brightness perception, illusory contours, and corticogeniculate feedback. : fl Partially supported by the Advanced Research Projects Agency (AFOSR 90-0083). y Partially supported by the Air Force Office of Scientific Research (AFOSR F49620-92-J-0499), the Advanced Research Projects Agency (ONR N00014-92-J-4015), and the Office of Naval Research (ONR N00014-91-J-4100). z Partially funded by the Air Force Office of Scientific Research (AFOSR F49620-92-J-0334) and the Office of Naval Research (ONR N00014-91-J-4100 and ONR N00014-94-1-0597). Target text information: Cortical Synchronization and Perceptual Framing: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
637
test
1-hop neighbor's text information: Refining conversational case libraries. : Conversational case-based reasoning (CBR) shells (e.g., Inference's CBR Express) are commercially successful tools for supporting the development of help desk and related applications. In contrast to rule-based expert systems, they capture knowledge as cases rather than more problematic rules, and they can be incrementally extended. However, rather than eliminate the knowledge engineering bottleneck, they refocus it on case engineering, the task of carefully authoring cases according to library design guidelines to ensure good performance. Designing complex libraries according to these guidelines is difficult; software is needed to assist users with case authoring. We describe an approach for revising case libraries according to design guidelines, its implementation in Clire, and empirical results showing that, under some conditions, this approach can improve conversational CBR performance. Target text information: Correcting for Length Biasing in Conversational Case Scoring: Inference's conversational case-based reasoning (CCBR) approach, embedded in the CBR Content Navigator line of products, is susceptible to a bias in its case scoring algorithm. In particular, shorter cases tend to be given higher scores, assuming all other factors are held constant. This report summarizes our investigation for mediating this bias. We introduce an approach for eliminating this bias and evaluate how it affects retrieval performance for six case libraries. We also suggest explanations for these results, and note the limitations of our study. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
383
test
1-hop neighbor's text information: Bayesian graphical models for discrete data. : York's research was supported by an NSF graduate fellowship. The authors are grateful to Julian Besag, David Bradshaw, Jeff Bradshaw, James Carlsen, David Draper, Ivar Heuch, Robert Kass, Augustine Kong, Steffen Lauritzen, Adrian Raftery, and James Zidek for helpful comments and discussions. 1-hop neighbor's text information: Decomposable Graphical Gaussian Model Determination. : We propose a methodology for Bayesian model determination in decomposable graphical Gaussian models. To achieve this aim we consider a hyper inverse Wishart prior distribution on the concentration matrix for each given graph. To ensure compatibility across models, such prior distributions are obtained by marginalisation from the prior conditional on the complete graph. We explore alternative structures for the hyperparameters of the latter, and their consequences for the model. Model determination is carried out by implementing a reversible jump MCMC sampler. In particular, the dimension-changing move we propose involves adding or dropping an edge from the graph. We characterise the set of moves which preserve the decomposability of the graph, giving a fast algorithm for maintaining the junction tree representation of the graph at each sweep. As state variable, we propose to use the incomplete variance-covariance matrix, containing only the elements for which the corresponding element of the inverse is nonzero. This allows all computations to be performed locally, at the clique level, which is a clear advantage for the analysis of large and complex data-sets. Finally, the statistical and computational performance of the procedure is illustrated by means of both artificial and real multidimensional data-sets. 1-hop neighbor's text information: Model selection and accounting for model uncertainty in graphical models using Occam's window. : We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P-values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism which averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximising predictive ability. However, this has not been used in practice because computing the posterior model probabilities is hard and the number of models is very large (often greater than 10^11). We argue that the standard Bayesian formalism is unsatisfactory and we propose an alternative Bayesian approach that, we contend, takes full account of the true model uncertainty by averaging over a much smaller set of models. An efficient search algorithm is developed for finding these models. We consider two classes of graphical models that arise in expert systems: the recursive causal models and the decomposable models. David Madigan is Assistant Professor of Statistics and Adrian E. 
Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. Madigan's research was partially supported by the Graduate School Research Fund, University of Washington and by the NSF. Raftery's research was supported by ONR Contract no. N-00014-91-J-1074. The authors are grateful to Gregory Cooper, Leo Goodman, Shelby Haberman, David Hinkley, Graham Upton, Jon Wellner, Nanny Wermuth, Jeremy York, Walter Zucchini and two anonymous referees for helpful comments and discussions, and to Michael R. Butler for providing the data for the scrotal swellings example. Target text information: Markov Chain Monte Carlo Model Determination for Hierarchical and Graphical Models. : The Bayesian approach to comparing models involves calculating the posterior probability of each plausible model. For high-dimensional contingency tables, the set of plausible models is very large. We focus attention on reversible jump Markov chain Monte Carlo (Green, 1995) and develop strategies for calculating posterior probabilities of hierarchical, graphical or decomposable log-linear models. Even for tables of moderate size, these sets of models may be very large. The choice of suitable prior distributions for model parameters is also discussed in detail, and two examples are presented. For the first example, a 2 x 3 x 4 table, the model probabilities calculated using our reversible jump approach are compared with model probabilities calculated exactly or by using an alternative approximation. The second example is a 2^6 contingency table for which exact methods are infeasible, due to the large number of possible models. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,001
test
1-hop neighbor's text information: "Centering neural network gradient factors", : Technical Report IDSIA-19-97 Abstract. It has long been known that neural networks can learn faster when their input and hidden unit activities are centered about zero; recently we have extended this approach to also encompass the centering of error signals [2]. Here we generalize this notion to all factors involved in the network's gradient, leading us to propose centering the slope of hidden unit activation functions as well. Slope centering removes the linear component of backpropagated error; this improves credit assignment in networks with shortcut connections. Benchmark results show that this can speed up learning significantly without adversely affecting the trained network's generalization ability. 1-hop neighbor's text information: Plasticity-Mediated Competitive Learning: Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited to sparse representations, while decorrelation and factorization schemes that support distributed representations are computation-ally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski, 1993); the same approach could be used to improve other learning algorithms. 1-hop neighbor's text information: Empirical Entropy Manipulation for Real-World Problems: No finite sample is sufficient to determine the density, and therefore the entropy, of a signal directly. Some assumption about either the functional form of the density or about its smoothness is necessary. Both amount to a prior over the space of possible density functions. By far the most common approach is to assume that the density has a parametric form. By contrast we derive a differential learning rule called EMMA that optimizes entropy by way of kernel density estimation. Entropy and its derivative can then be calculated by sampling from this density estimate. The resulting parameter update rule is surprisingly simple and efficient. We will describe two real-world applications that can be solved efficiently and reliably using EMMA. In the first application EMMA is used to align 3D models to complex natural images. In the second application EMMA is used to detect and correct corruption in magnetic resonance images (MRI). Both applications are beyond the scope of existing parametric entropy models. Target text information: Unsupervised discrimination of clustered data via optimization of binary information gain. : We present the information-theoretic derivation of a learning algorithm that clusters unlabelled data with linear discriminants. In contrast to methods that try to preserve information about the input patterns, we maximize the information gained from observing the output of robust binary discriminators implemented with sigmoid nodes. We derive a local weight adaptation rule via gradient ascent in this objective, demonstrate its dynamics on some simple data sets, relate our approach to previous work and suggest directions in which it may be extended. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,511
test
1-hop neighbor's text information: Defining Relative Likelihood in Partially-Ordered Preferential Structures: Starting with a likelihood or preference order on worlds, we extend it to a likelihood ordering on sets of worlds in a natural way, and examine the resulting logic. Lewis earlier considered such a notion of relative likelihood in the context of studying counterfactuals, but he assumed a total preference order on worlds. Complications arise when examining partial orders that are not present for total orders. There are subtleties involving the exact approach to lifting the order on worlds to an order on sets of worlds. In addition, the axiomatization of the logic of relative likelihood in the case of partial orders gives insight into the connection between relative likelihood and default reasoning. 1-hop neighbor's text information: A qualitative Markov assumption and its implications for belief change. : The study of belief change has been an active area in philosophy and AI. In recent years, two special cases of belief change, belief revision and belief update, have been studied in detail. Roughly speaking, revision treats a surprising observation as a sign that previous beliefs were wrong, while update treats a surprising observation as an indication that the world has changed. In general, we would expect that an agent making an observation may both want to revise some earlier beliefs and assume that some change has occurred in the world. We define a novel approach to belief change that allows us to do this, by applying ideas from probability theory in a qualitative setting. The key idea is to use a qualitative Markov assumption, which says that state transitions are independent. We show that a recent approach to modeling qualitative uncertainty using plausibility measures allows us to make such a qualitative Markov assumption in a relatively straightforward way, and show how the Markov assumption can be used to provide an attractive belief-change model. 1-hop neighbor's text information: Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. : We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted from the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions. Target text information: Plausibility measures and default reasoning. : In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and κ-rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties (for Kraus, Lehmann, and Magidor). While this was viewed as a surprise, we show here that it is almost inevitable. We do this by giving yet another semantics for defaults that uses plausibility measures, a new approach to modeling uncertainty that generalizes other approaches, such as probability measures, belief functions, and possibility measures. 
We show that all the earlier approaches to default reasoning can be embedded in the framework of plausibility. We then provide a necessary and sufficient condition on plausibilities for the KLM properties to be sound, and an additional condition necessary and sufficient for the KLM properties to be complete. These conditions are easily seen to hold for all the earlier approaches, thus explaining why they are characterized by the KLM properties. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,057
test
1-hop neighbor's text information: A Statistical Approach to Solving the EBL Utility Problem, : Many "learning from experience" systems use information extracted from problem solving experiences to modify a performance element PE, forming a new element PE' that can solve these and similar problems more efficiently. However, as transformations that improve performance on one set of problems can degrade performance on other sets, the new PE' is not always better than the original PE; this depends on the distribution of problems. We therefore seek the performance element whose expected performance, over this distribution, is optimal. Unfortunately, the actual distribution, which is needed to determine which element is optimal, is usually not known. Moreover, the task of finding the optimal element, even knowing the distribution, is intractable for most interesting spaces of elements. This paper presents a method, palo, that side-steps these problems by using a set of samples to estimate the unknown distribution, and by using a set of transformations to hill-climb to a local optimum. This process is based on a mathematically rigorous form of utility analysis: in particular, it uses statistical techniques to determine whether the result of a proposed transformation will be better than the original system. We also present an efficient way of implementing this learning system in the context of a general class of performance elements, and include empirical evidence that this approach can work effectively. Much of this work was performed at the University of Toronto, where it was supported by the Institute for Robotics and Intelligent Systems and by an operating grant from the National Science and Engineering Research Council of Canada. We also gratefully acknowledge receiving many helpful comments from William Cohen, Dave Mitchell, Dale Schuurmans and the anonymous referees. 1-hop neighbor's text information: Scaling Up. : Partially observable Markov decision processes (pomdp's) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of pomdp's is motivated by a need to address realistic problems, existing techniques for finding optimal behavior do not appear to scale well and have been unable to find satisfactory policies for problems with more than a dozen states. After a brief review of pomdp's, this paper discusses several simple solution methods and shows that all are capable of finding near-optimal policies for a selection of extremely small pomdp's taken from the learning literature. In contrast, we show that none are able to solve a slightly larger and noisier problem based on robot navigation. We find that a combination of two novel approaches performs well on these problems and suggest methods for scaling to even larger and more complicated domains. Target text information: A Formal Framework for Speedup Learning from Problems and Solutions: Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive in that they are accompanied with learning algorithms. 
Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
177
test
1-hop neighbor's text information: Towards improving case adaptability with a genetic algorithm. : Case combination is a difficult problem in Case Based Reasoning, as sub-cases often exhibit conflicts when merged together. In our previous work we formalized case combination by representing each case as a constraint satisfaction problem, and used the minimum conflicts algorithm to systematically synthesize the global solution. However, we also found instances of the problem in which the minimum conflicts algorithm does not perform case combination efficiently. In this paper we describe those situations in which initially retrieved cases are not easily adaptable, and propose a method by which to improve case adaptability with a genetic algorithm. We introduce a fitness function that maintains as much retrieved case information as possible, while also perturbing a sub-solution to allow subsequent case combination to proceed more efficiently. 1-hop neighbor's text information: Dynamic constraint satisfaction using case-based reasoning techniques. : The Dynamic Constraint Satisfaction Problem (DCSP) formalism has been gaining attention as a valuable and often necessary extension of the static CSP framework. Dynamic Constraint Satisfaction enables CSP techniques to be applied more extensively, since it can be applied in domains where the set of constraints and variables involved in the problem evolves with time. At the same time, the Case-Based Reasoning (CBR) community has been working on techniques by which to reuse existing solutions when solving new problems. We have observed that dynamic constraint satisfaction matches very closely the case-based reasoning process of case adaptation. These observations emerged from our previous work on combining CBR and CSP to achieve a constraint-based adaptation. This paper summarizes our previous results, describes the similarity of the challenges facing both DCSP and case adaptation, and shows how CSP and CBR can together begin to address these chal lenges. Target text information: A CBR Integration From Inception to Productization: Our case-based reasoning (CBR) integration with the constraint satisfaction problem (CSP) formalism has undergone several transformations on its journey from initial research idea to product-intent design. Both unexpected research results as well as interesting insights into the real-world applicability of the integrated methodology emerged as the integration was explored from alternative viewpoints. In this paper, the alternative viewpoints and the results that were enabled by these viewpoints are described. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
426
test
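The abstracts in the record above formalize case combination as a constraint satisfaction problem solved with the minimum-conflicts algorithm. The following is a minimal, generic sketch of a min-conflicts repair loop, offered only to make the cited procedure concrete; the variable/constraint encoding of cases and the conflicts() helper are illustrative assumptions, not the papers' actual formulation.

import random

def min_conflicts(variables, domains, conflicts, max_steps=1000, rng=random):
    """Generic min-conflicts repair. `conflicts(var, value, assignment)` returns
    how many constraints `var = value` violates under the current assignment."""
    # Start from a complete assignment (in case combination this would be the merged sub-cases).
    assignment = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                      # all constraints satisfied
        var = rng.choice(conflicted)               # pick one conflicted variable
        # Reassign it to the value that minimises the number of conflicts it causes.
        assignment[var] = min(domains[var], key=lambda val: conflicts(var, val, assignment))
    return None                                    # no consistent combination found in time

In a case-combination setting the initial assignment would be seeded from the retrieved sub-cases rather than drawn at random, so that repair perturbs as little retrieved information as possible.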
1-hop neighbor's text information: An unsupervised neural network for real-time, low-level control of a mobile robot: noise resistance, stability, and hardware implementation. : We have recently introduced a neural network mobile robot controller (NETMORC) that autonomously learns the forward and inverse odometry of a differential drive robot through an unsupervised learning-by-doing cycle. After an initial learning phase, the controller can move the robot to an arbitrary stationary or moving target while compensating for noise and other forms of disturbance, such as wheel slippage or changes in the robot's plant. In addition, the forward odometric map allows the robot to reach targets in the absence of sensory feedback. The controller is also able to adapt in response to long-term changes in the robot's plant, such as a change in the radius of the wheels. In this article we review the NETMORC architecture and describe its simplified algorithmic implementation, we present new, quantitative results on NETMORC's performance and adaptability under noise-free and noisy conditions, we compare NETMORC's performance on a trajectory-following task with the performance of an alternative controller, and we describe preliminary results on the hardware implementation of NETMORC with the mobile robot ROBUTER. Target text information: Neural competitive maps for reactive and adaptive navigation: We have recently introduced a neural network for reactive obstacle avoidance based on a model of classical and operant conditioning. In this article we describe the success of this model when implemented on two real autonomous robots. Our results show the promise of self-organizing neural networks in the domain of intelligent robotics. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
494
test
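The record above describes NETMORC, which learns the forward and inverse odometry of a differential-drive robot. For reference, the analytic forward-odometry update that such a network has to approximate can be written in a few lines; the sketch below is the standard kinematic model, not the neural controller itself, and the parameter names are illustrative.

import math

def forward_odometry(x, y, theta, v_left, v_right, wheel_base, dt):
    """One Euler step of differential-drive kinematics: wheel speeds -> pose change."""
    v = (v_right + v_left) / 2.0                 # linear velocity of the robot centre
    omega = (v_right - v_left) / wheel_base      # angular velocity
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

A learning controller in the spirit of the record would fit this mapping (and its inverse) from motor commands and observed displacements instead of using the analytic form, which also lets it adapt to changes such as a different wheel radius.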
1-hop neighbor's text information: Robot shaping: Developing autonomous agents through learning. : Learning plays a vital role in the development of situated agents. In this paper, we explore the use of reinforcement learning to "shape" a robot to perform a predefined target behavior. We connect both simulated and real robots to A LECSYS, a parallel implementation of a learning classifier system with an extended genetic algorithm. After classifying different kinds of Animat-like behaviors, we explore the effects on learning of different types of agent's architecture (monolithic, flat and hierarchical) and of training strategies. In particular, hierarchical architecture requires the agent to learn how to coordinate basic learned responses. We show that the best results are achieved when both the agent's architecture and the training strategy match the structure of the behavior pattern to be learned. We report the results of a number of experiments carried out both in simulated and in real environments, and show that the results of simulations carry smoothly to real robots. While most of our experiments deal with simple reactive behavior, in one of them we demonstrate the use of a simple and general memory mechanism. As a whole, our experimental activity demonstrates that classifier systems with genetic algorithms can be practically employed to develop autonomous agents. 1-hop neighbor's text information: Strategy Learning with Multilayer Connectionist Representations. : Results are presented that demonstrate the learning and fine-tuning of search strategies using connectionist mechanisms. Previous studies of strategy learning within the symbolic, production-rule formalism have not addressed fine-tuning behavior. Here a two-layer connectionist system is presented that develops its search from a weak to a task-specific strategy and fine-tunes its performance. The system is applied to a simulated, real-time, balance-control task. We compare the performance of one-layer and two-layer networks, showing that the ability of the two-layer network to discover new features and thus enhance the original representation is critical to solving the balancing task. 1-hop neighbor's text information: A Framework for Combining Symbolic and Neural Learning. In: Artificial Intelligence and Neural Networks: Steps Toward Principled Integration. Honavar, : Technical Report 1123, Computer Sciences Department, University of Wisconsin - Madison, Nov. 1992 ABSTRACT This article describes an approach to combining symbolic and connectionist approaches to machine learning. A three-stage framework is presented and the research of several groups is reviewed with respect to this framework. The first stage involves the insertion of symbolic knowledge into neural networks, the second addresses the refinement of this prior knowledge in its neural representation, while the third concerns the extraction of the refined symbolic knowledge. Experimental results and open research issues are discussed. A shorter version of this paper will appear in Machine Learning. Target text information: Coordinating Reactive Behaviors keywords: reactive systems, planning and learning: Combining reactivity with planning has been proposed as a means of compensating for potentially slow response times of planners while still making progress toward long term goals. The demands of rapid response and the complexity of many environments make it difficult to decompose, tune and coordinate reactive behaviors while ensuring consistency. 
Neural networks can address the tuning problem, but are less useful for decomposition and coordination. We hypothesize that interacting reactions can be decomposed into separate behaviors resident in separate networks and that the interaction can be coordinated through the tuning mechanism and a higher level controller. To explore these issues, we have implemented a neural network architecture as the reactive component of a two layer control system for a simulated race car. By varying the architecture, we test whether decomposing reactivity into separate behaviors leads to superior overall performance, coordination and learning convergence. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
183
val
1-hop neighbor's text information: Bayesian Induction of Features in Temporal Domains: Most concept induction algorithms process concept instances described in terms of properties that remain constant over time. In temporal domains, instances are best described in terms of properties whose values vary with time. Data engineering is called upon in temporal domains to transform the raw data into an appropriate form for concept induction. I investigate a method for inducing features suitable for classifying finite, univariate, time series that are governed by unknown deterministic processes contaminated by noise. In a supervised setting, I induce piecewise polynomials of appropriate complexity to characterize the data in each class, using Bayesian model induction principles. In this study, I evaluate the proposed method empirically in a semi-deterministic domain: the waveform classification problem, originally presented in the CART book. I compared the classification accuracy of the proposed algorithm to the accuracy attained by C4.5 under various noise levels. Feature induction improved the classification accuracy in noisy situations, but degraded it when there was no noise. The results demonstrate the value of the proposed method in the presence of noise, and reveal a weakness shared by all classifiers using generative rather than discriminative models: sensitivity to model inaccuracies. Target text information: Learning to classify sensor data. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,001
test
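The neighbor abstract in the record above induces piecewise polynomials of appropriate complexity to characterize noisy time series before classification. A minimal, non-Bayesian stand-in for that idea is to fit a fixed-degree polynomial per segment by least squares and use the coefficients as features; the segment count and degree below are assumptions, and no Bayesian model selection is performed.

import numpy as np

def piecewise_poly_features(series, n_segments=4, degree=2):
    """Fit one low-order polynomial per segment (least squares) and
    concatenate the coefficients into a feature vector for a classifier."""
    series = np.asarray(series, dtype=float)
    segments = np.array_split(series, n_segments)   # segments assumed longer than degree+1
    features = []
    for seg in segments:
        t = np.linspace(0.0, 1.0, len(seg))          # local time axis for the segment
        coeffs = np.polyfit(t, seg, degree)           # least-squares polynomial fit
        features.extend(coeffs)
    return np.array(features)

The method in the abstract would additionally choose the segmentation and polynomial complexity from the data using Bayesian model induction; this sketch fixes both in advance.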
1-hop neighbor's text information: "A Class of Algorithms for Identification in H∞", preprint. : 1-hop neighbor's text information: Identification in H∞ with Nonuniformly Spaced Frequency Response Measurements: In this paper, the problem of "system identification in H∞" is investigated in the case when the given frequency response data is not necessarily on a uniformly spaced grid of frequencies. A large class of robustly convergent identification algorithms are derived. A particular algorithm is further examined and explicit worst case error bounds (in the H∞ norm) are derived for both discrete-time and continuous-time systems. Examples are provided to illustrate the application of the algorithms. 1-hop neighbor's text information: Worst-Case Identification of Nonlinear Fading Memory Systems: In this paper, the problem of asymptotic identification for fading memory systems in the presence of bounded noise is studied. For any experiment, the worst-case error is characterized in terms of the diameter of the worst-case uncertainty set. Optimal inputs that minimize the radius of uncertainty are studied and characterized. Finally, a convergent algorithm that does not require knowledge of the noise upper bound is furnished. The algorithm is based on interpolating data with spline functions, which are shown to be well suited for identification in the presence of bounded noise; more so than other basis functions such as polynomials. Target text information: Optimal and Robust Identification Under Bounded Disturbances", : This paper investigates the intrinsic limitation of worst-case identification of LTI systems using data corrupted by bounded disturbances, when the unknown plant is known to belong to a given model set. This is done by analyzing the optimal worst-case asymptotic error achievable by performing experiments using any bounded inputs and estimating the plant using any identification algorithm. First, it is shown that under some topological conditions on the model set, there is an identification algorithm which is asymptotically optimal for any input. Characterization of the optimal asymptotic error as a function of the inputs is also obtained. These results hold for any error metric and disturbance norm. Second, these general results are applied to three specific identification problems: identification of stable systems in the ℓ1 norm, identification of stable rational systems in the H∞ norm, and identification of unstable rational systems in the gap metric. For each of these problems, the general characterization of optimal asymptotic error is used to find near-optimal inputs to minimize the error. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,099
val
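The identification papers in the record above build algorithms with worst-case (H∞) error bounds from possibly nonuniformly spaced frequency-response samples. The snippet below shows only a naive least-squares fit of a finite impulse response model to such samples, as a concrete illustration of the data and the fitting step; it carries none of the robustness guarantees discussed in the abstracts, and all names are illustrative.

import numpy as np

def fit_fir_from_frequency_response(freqs, responses, n_taps):
    """Least-squares FIR fit: choose taps h minimising
    sum_k |H(w_k) - sum_n h[n] * exp(-j * w_k * n)|^2 over the measured frequencies."""
    freqs = np.asarray(freqs, dtype=float)            # frequencies in radians/sample
    responses = np.asarray(responses, dtype=complex)  # measured frequency-response values
    # Each row of E evaluates the FIR transfer function at one measured frequency.
    E = np.exp(-1j * np.outer(freqs, np.arange(n_taps)))
    h, *_ = np.linalg.lstsq(E, responses, rcond=None)
    return h   # complex tap estimates; take h.real for a real impulse response if appropriate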
1-hop neighbor's text information: "Case-based Reactive Navigation: A case-based method for on-line selection and adaptation of reactive control parameters in autonomous robotic systems", : This article presents a new line of research investigating on-line learning mechanisms for autonomous intelligent agents. We discuss a case-based method for dynamic selection and modification of behavior assemblages for a navigational system. The case-based reasoning module is designed as an addition to a traditional reactive control system, and provides more flexible performance in novel environments without extensive high-level reasoning that would otherwise slow the system down. The method is implemented in the ACBARR (A Case-BAsed Reactive Robotic) system, and evaluated through empirical simulation of the system on several different environments, including "box canyon" environments known to be problematic for reactive control systems in general. Technical Report GIT-CC-92/57, College of Computing, Georgia Institute of Technology, Atlanta, Georgia, 1992. 1-hop neighbor's text information: "Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation," : This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system's environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line learning, resulting in an improved library of cases that capture environmental regularities necessary to perform on-line adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations. 1-hop neighbor's text information: Continuous case-based reasoning. : Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as continuous sensori-motor interaction with the environment, and continuous adaptation and learning during the performance task. We introduce a new method for continuous case-based reasoning, and discuss how it can be applied to the dynamic selection, modification, and acquisition of robot behaviors in autonomous navigation systems. We conclude with a general discussion of case-based reasoning issues addressed by this work. Target text information: Knowledge Compilation and Speedup Learning in Continuous Task Domains: Many techniques for speedup learning and knowledge compilation focus on the learning and optimization of macro-operators or control rules in task domains that can be characterized using a problem-space search paradigm. 
However, such a characterization does not fit well the class of task domains in which the problem solver is required to perform in a continuous manner. For example, in many robotic domains, the problem solver is required to monitor real-valued perceptual inputs and vary its motor control parameters in a continuous, on-line manner to successfully accomplish its task. In such domains, discrete symbolic states and operators are difficult to define. To improve its performance in continuous problem domains, a problem solver must learn, modify, and use continuous operators that continuously map input sensory information to appropriate control outputs. Additionally, the problem solver must learn the contexts in which those continuous operators are applicable. We propose a learning method that can compile sensorimotor experiences into continuous operators, which can then be used to improve performance of the problem solver. The method speeds up the task performance as well as results in improvements in the quality of the resulting solutions. The method is implemented in a robotic navigation system, which is evaluated through extensive experimentation. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
184
test
1-hop neighbor's text information: Solving the multiple-instance problem with axis-parallel rectangles. : The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple-instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk-odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms. 1-hop neighbor's text information: Top-down induction of logical decision trees, : Top-down induction of decision trees (TDIDT) is a very popular machine learning technique. Up till now, it has mainly been used for propositional learning, but seldom for relational learning or inductive logic programming. The main contribution of this paper is the introduction of logical decision trees, which make it possible to use TDIDT in inductive logic programming. An implementation of this top-down induction of logical decision trees, the 1-hop neighbor's text information: Applying ILP to diterpene structure elucidation from 13C NMR spectra. : We present a novel application of ILP to the problem of diterpene structure elucidation from 13C NMR spectra. Diterpenes are organic compounds of low molecular weight that are based on a skeleton of 20 carbon atoms. They are of significant chemical and commercial interest because of their use as lead compounds in the search for new pharmaceutical effectors. The structure elucidation of diterpenes based on 13C NMR spectra is usually done manually by human experts with specialized background knowledge on peak patterns and chemical structures. In the process, each of the 20 skeletal atoms is assigned an atom number that corresponds to its proper place in the skeleton and the diterpene is classified into one of the possible skeleton types. We address the problem of learning classification rules from a database of peak patterns for diterpenes with known structure. Recently, propositional learning was successfully applied to learn classification rules from spectra with assigned atom numbers. As the assignment of atom numbers is a difficult process in itself (and possibly indistinguishable from the classification process), we apply ILP, i.e., relational learning, to the problem of classifying spectra without assigned atom numbers. Target text information: Lookahead and discretization in ILP. : We present and evaluate two methods for improving the performance of ILP systems. One of them is discretization of numerical attributes, based on Fayyad and Irani's text [9], but adapted and extended in such a way that it can cope with some aspects of discretization that only occur in relational learning problems (when indeterminate literals occur). The second technique is lookahead. It is a well-known problem in ILP that a learner cannot always assess the quality of a refinement without knowing which refinements will be enabled afterwards, i.e. without looking ahead in the refinement lattice. 
We present a simple method for specifying when lookahead is to be used, and what kind of lookahead is interesting. Both the discretization and lookahead techniques are evaluated experimentally. The results show that both techniques improve the quality of the induced theory, while computational costs are acceptable. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,724
test
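The target abstract in the record above adapts Fayyad and Irani's entropy-based discretization to ILP. The sketch below shows only the core step shared by that family of methods, choosing a binary cut point that minimizes weighted class entropy; the recursive splitting and MDL stopping criterion of the original method, and all of the relational extensions, are omitted, and the helper names are my own.

import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a non-empty list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def best_cut_point(values, labels):
    """Return the numeric threshold minimising the weighted class entropy of the binary split."""
    pairs = sorted(zip(values, labels))
    best, best_score = None, float("inf")
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                                   # no cut between equal values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if score < best_score:
            best_score = score
            best = (pairs[i - 1][0] + pairs[i][0]) / 2.0
    return best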
1-hop neighbor's text information: Probabilistic Reasoning under Ignorance: 1-hop neighbor's text information: Anytime Influence Diagrams: 1-hop neighbor's text information: Belief maintenance in bayesian networks. : Target text information: Belief maintenance with probabilistic logic. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,194
test
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Eiben and C.A. Schippers. Multi-parent's niche: n-ary crossovers on NK-landscapes. : Using the multi-parent diagonal and scanning crossover in GAs, reproduction operators obtain an adjustable arity. Hereby sexuality becomes a graded feature instead of a Boolean one. Our main objective is to relate the performance of GAs to the extent of sexuality used for reproduction on less arbitrary functions than those reported in the current literature. We investigate GA behaviour on Kauffman's NK-landscapes that allow for systematic characterization and user control of ruggedness of the fitness landscape. We test GAs with a varying extent of sexuality, ranging from asexual to 'very sexual'. Our tests were performed on two types of NK-landscapes: landscapes with random and landscapes with nearest neighbour epistasis. For both landscape types we selected landscapes from a range of ruggednesses. The results confirm the superiority of (very) sexual recombination on mildly epistatic problems. 1-hop neighbor's text information: Orgy in the computer: Multi-parent reproduction in genetic algorithms. : In this paper we investigate the phenomenon of multi-parent reproduction, i.e. we study recombination mechanisms where an arbitrary n > 1 number of parents participate in creating children. In particular, we discuss scanning crossover that generalizes the standard uniform crossover and diagonal crossover that generalizes 1-point crossover, and study the effects of different number of parents on the GA behavior. We conduct experiments on tough function optimization problems and observe that by multi-parent operators the performance of GAs can be enhanced significantly. We also give a theoretical foundation by showing how these operators work on distributions. Target text information: (1995) Genetic algorithms with multi-parent recombination. : In this paper we investigate genetic algorithms where more than two parents are involved in the recombination operation. In particular, we introduce gene scanning as a reproduction mechanism that generalizes classical crossovers, such as n-point crossover or uniform crossover, and is applicable to an arbitrary number (two or more) of parents. We performed extensive tests for optimizing numerical functions, the TSP and graph coloring to observe the effect of different numbers of parents. The experiments show that 2-parent recombination is outperformed when using more parents on the classical DeJong functions. For the other problems the results are not conclusive, in some cases 2 parents are optimal, while in some others more parents are better. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,926
test
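Gene scanning, as described in the record above, generalizes uniform crossover to an arbitrary number of parents: each gene of the child is taken from one of the n parents. A minimal sketch of uniform scanning crossover follows; occurrence-based and fitness-based scanning, and the diagonal crossover that generalizes 1-point crossover, are not shown.

import random

def uniform_scanning_crossover(parents, rng=random):
    """Build one child; gene i is copied from position i of a uniformly chosen parent."""
    length = len(parents[0])
    assert all(len(p) == length for p in parents), "parents must share chromosome length"
    return [rng.choice(parents)[i] for i in range(length)]

# Example: three parents (n > 2) contributing to a single child.
child = uniform_scanning_crossover([[0] * 6, [1] * 6, [2] * 6])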
1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. : 1-hop neighbor's text information: Discovering complex Othello strategies through evolutionary neural networks. : An approach to develop new game playing strategies based on artificial evolution of neural networks is presented. Evolution was directed to discover strategies in Othello against a random-moving opponent and later against an alpha-beta search program. The networks discovered first a standard positional strategy, and subsequently a mobility strategy, an advanced strategy rarely seen outside of tournaments. The latter discovery demonstrates how evolutionary neural networks can develop novel solutions by turning an initial disadvantage into an advantage in a changed environment. 1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. Target text information: Using genetic programming to evolve board evaluation functions. : In this paper, we employ the genetic programming paradigm to enable a computer to learn to play strategies for the ancient Egyptian boardgame Senet by evolving board evaluation functions. Formulating the problem in terms of board evaluation functions made it feasible to evaluate the fitness of game playing strategies by using tournament-style fitness evaluation. The game has elements of both strategy and chance. Our approach learns strategies which enable the computer to play consistently at a reasonably skillful level. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,129
test
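One of the cited neighbors above introduces temporal-difference methods, which assign credit by the difference between temporally successive predictions. The snippet below is the generic tabular TD(0) update for a state-value table; it is background for that cited work, not the genetic-programming approach of the target paper, and the names are illustrative.

def td0_update(values, state, next_state, reward, alpha=0.1, gamma=1.0, terminal=False):
    """Tabular TD(0): move V(state) toward reward + gamma * V(next_state)."""
    target = reward + (0.0 if terminal else gamma * values.get(next_state, 0.0))
    values[state] = values.get(state, 0.0) + alpha * (target - values.get(state, 0.0))
    return values

# Example usage on a dictionary-backed value table:
# values = {}
# td0_update(values, "s0", "s1", reward=0.0)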
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedfor-ward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. 1-hop neighbor's text information: EEG Signal Classification with Different Signal Representations for a large number of hidden units.: If several mental states can be reliably distinguished by recognizing patterns in EEG, then a paralyzed person could communicate to a device like a wheelchair by composing sequencesof these mental states. In this article, we report on a study comparing four representations of EEG signals and their classification by a two-layer neural network with sigmoid activation functions. The neural network is implemented on a CNAPS server (128 processor, SIMD architecture) by Adaptive Solutions, Inc., gaining a 100-fold decrease in training time over a Sun Target text information: Determining mental state from EEG signals using neural networks. : EEG analysis has played a key role in the modeling of the brain's cortical dynamics, but relatively little effort has been devoted to developing EEG as a limited means of communication. If several mental states can be reliably distinguished by recognizing patterns in EEG, then a paralyzed person could communicate to a device like a wheelchair by composing sequences of these mental states. EEG pattern recognition is a difficult problem and hinges on the success of finding representations of the EEG signals in which the patterns can be distinguished. In this article, we report on a study comparing three EEG representations, the unprocessed signals, a reduced-dimensional representation using the Karhunen-Loeve transform, and a frequency-based representation. Classification is performed with a two-layer neural network implemented on a CNAPS server (128 processor, SIMD architecture) by Adaptive Solutions, Inc.. Execution time comparisons show over a hundred-fold speed up over a Sun Sparc 10. The best classification accuracy on untrained samples is 73% using the frequency-based representation. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,468
val
1-hop neighbor's text information: A modular Q-learning architecture for manipulator task decomposition, : Compositional Q-Learning (CQ-L) (Singh 1992) is a modular approach to learning to perform composite tasks made up of several elemental tasks by reinforcement learning. Skills acquired while performing elemental tasks are also applied to solve composite tasks. Individual skills compete for the right to act and only winning skills are included in the decomposition of the composite task. We extend the original CQ-L concept in two ways: (1) a more general reward function, and (2) the agent can have more than one actuator. We use the CQ-L architecture to acquire skills for performing composite tasks with a simulated two-linked manipulator having large state and action spaces. The manipulator is a non-linear dynamical system and we require its end-effector to be at specific positions in the workspace. Fast function approximation in each of the Q-modules is achieved through the use of an array of Cerebellar Model Articulation Controller (CMAC) (Albus 1-hop neighbor's text information: Transfer of Learning by Composing Solutions of Elemental Sequential Tasks, : Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focussed on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm. 1-hop neighbor's text information: "Exploration and model building in mobile robot domains", : I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for knowledge transfer across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low confidence regions. An efficient dynamic programming method is employed in background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real-time. 
It has been operating successfully in an office building environment for periods up to hours. Target text information: The efficient learning of multiple task sequences, : I present a modular network architecture and a learning algorithm based on incremental dynamic programming that allows a single learning agent to learn to solve multiple Markovian decision tasks (MDTs) with significant transfer of learning across the tasks. I consider a class of MDTs, called composite tasks, formed by temporally concatenating a number of simpler, elemental MDTs. The architecture is trained on a set of composite and elemental MDTs. The temporal structure of a composite task is assumed to be unknown and the architecture learns to produce a temporal decomposition. It is shown that under certain conditions the solution of a composite MDT can be constructed by computationally inexpensive modifications of the solutions of its constituent elemental MDTs. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
1,532
test
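The record above concerns composite sequential decision tasks built by temporally concatenating elemental tasks and solved by reusing elemental value functions. The sketch below pairs a plain tabular Q-learning backup with a deliberately simple "composite" agent that switches between elemental Q-tables when hand-written sub-goal tests fire; the gating and learned task decomposition in CQ-L are much richer than this, and every name here is an assumption.

import random
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning backup for a single elemental task (Q is a defaultdict(float))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions) if actions else 0.0
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

class CompositeAgent:
    """Runs elemental policies in sequence; advances when the current sub-goal test is satisfied."""
    def __init__(self, elemental_Qs, subgoals, actions):
        self.Qs, self.subgoals, self.actions = elemental_Qs, subgoals, actions
        self.stage = 0
    def act(self, state, epsilon=0.1, rng=random):
        if self.stage < len(self.subgoals) and self.subgoals[self.stage](state):
            self.stage += 1                              # current elemental task solved
        Q = self.Qs[min(self.stage, len(self.Qs) - 1)]
        if rng.random() < epsilon:
            return rng.choice(self.actions)              # exploratory action
        return max(self.actions, key=lambda a: Q[(state, a)])

# Elemental tables would be created as Q = defaultdict(float) and trained with q_update.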
1-hop neighbor's text information: Toward efficient agnostic learning. : In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. 1-hop neighbor's text information: Efficient distribution-free learning of probabilistic concepts. : In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior; thus, the same input may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. We adopt from the Valiant model of learning [27] the demands that learning algorithms be efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. In addition to giving many efficient algorithms for learning natural classes of p-concepts, we study and develop in detail an underlying theory of learning p-concepts. 1-hop neighbor's text information: "A General Lower Bound on the Number of Examples Needed for Learning," : We prove a lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)/ε) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters. This improves the previous best lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)), and comes close to the known general upper bound of O((1/ε) ln(1/δ) + (VCdim(C)/ε) ln(1/ε)) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. Target text information: Long. Prediction, learning, uniform convergence, and scale-sensitive dimensions, : We present a new general-purpose algorithm for learning classes of [0,1]-valued functions in a generalization of the prediction model, and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler and to Benedek and Itai, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. 
Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. For each ε > 0, we establish weaker sufficient and stronger necessary conditions for a class of [0,1]-valued functions to be agnostically learnable to within ε, and to be an ε-uniform Glivenko-Cantelli class. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
2,523
test
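The cited lower bound in the record above has the form Ω((1/ε)·ln(1/δ) + VCdim(C)/ε). As a worked numeric illustration of what such expressions imply (my own example, not taken from the papers, and with all hidden constants ignored), the snippet below evaluates that expression alongside the classical O((1/ε)·ln(1/δ) + (d/ε)·ln(1/ε))-style upper bound for consistent learners.

import math

def sample_bounds(epsilon, delta, vc_dim):
    """Evaluate the asymptotic lower/upper bound expressions (constants omitted)."""
    lower = (1.0 / epsilon) * math.log(1.0 / delta) + vc_dim / epsilon
    upper = (1.0 / epsilon) * math.log(1.0 / delta) + (vc_dim / epsilon) * math.log(1.0 / epsilon)
    return lower, upper

# Example: epsilon = 0.1, delta = 0.05, VC dimension 10
print(sample_bounds(0.1, 0.05, 10))   # roughly (130, 260), up to the hidden constants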
1-hop neighbor's text information: Density Networks and their Application to Protein Modelling: I define a latent variable model in the form of a neural network for which only target outputs are specified; the inputs are unspecified. Although the inputs are missing, it is still possible to train this model by placing a simple probability distribution on the unknown inputs and maximizing the probability of the data given the parameters. The model can then discover for itself a description of the data in terms of an underlying latent variable space of lower dimensionality. I present preliminary results of the application of these models to protein data. 1-hop neighbor's text information: A generalized hidden Markov model for the recognition of human genes in DNA. : We present a statistical model of genes in DNA. A Generalized Hidden Markov Model (GHMM) provides the framework for describing the grammar of a legal parse of a DNA sequence (Stormo & Haussler 1994). Probabilities are assigned to transitions between states in the GHMM and to the generation of each nucleotide base given a particular state. Machine learning techniques are applied to optimize these probabilities using a standardized training set. Given a new candidate sequence, the best parse is deduced from the model using a dynamic programming algorithm to identify the path through the model with maximum probability. The GHMM is flexible and modular, so new sensors and additional states can be inserted easily. In addition, it provides simple solutions for integrating cardinality constraints, reading frame constraints, "indels", and homology searching. The description and results of an implementation of such a gene-finding model, called Genie, is presented. The exon sensor is a codon frequency model conditioned on windowed nucleotide frequency and the preceding codon. Two neural networks are used, as in (Brunak, Engelbrecht, & Knudsen 1991), for splice site prediction. We show that this simple model performs quite well. For a cross-validated standard test set of 304 genes [ftp://www-hgc.lbl.gov/pub/genesets] in human DNA, our gene-finding system identified up to 85% of protein-coding bases correctly with a specificity of 80%. 58% of exons were exactly identified with a specificity of 51%. Genie is shown to perform favorably compared with several other gene-finding systems. 1-hop neighbor's text information: Dirichlet mixtures: A method for improving detection of weak but significant protein sequence homology. COS. : This paper presents the mathematical foundations of Dirichlet mixtures, which have been used to improve database search results for homologous sequences, when a variable number of sequences from a protein family or domain are known. We present a method for condensing the information in a protein database into a mixture of Dirichlet densities. These mixtures are designed to be combined with observed amino acid frequencies, to form estimates of expected amino acid probabilities at each position in a profile, hidden Markov model, or other statistical model. These estimates give a statistical model greater generalization capacity, such that remotely related family members can be more reliably recognized by the model. Dirichlet mixtures have been shown to outperform substitution matrices and other methods for computing these expected amino acid distributions in database search, resulting in fewer false positives and false negatives for the families tested. 
This paper corrects a previously published formula for estimating these expected probabilities, and contains complete derivations of the Dirichlet mixture formulas, methods for optimizing the mixtures to match particular databases, and suggestions for efficient implementation. Target text information: Hidden Markov models in computational biology: Applications to protein modeling. : Hidden Markov Models (HMMs) are applied to the problems of statistical modeling, database searching and multiple sequence alignment of protein families and protein domains. These methods are demonstrated on the globin family, the protein kinase catalytic domain, and the EF-hand calcium binding motif. In each case the parameters of an HMM are estimated from a training set of unaligned sequences. After the HMM is built, it is used to obtain a multiple alignment of all the training sequences. It is also used to search the SWISS-PROT 22 database for other sequences that are members of the given protein family, or contain the given domain. The HMM produces multiple alignments of good quality that agree closely with the alignments produced by programs that incorporate three-dimensional structural information. When employed in discrimination tests (by examining how closely the sequences in a database fit the globin, kinase and EF-hand HMMs), the HMM is able to distinguish members of these families from non-members with a high degree of accuracy. Both the HMM and PROFILESEARCH (a technique used to search for relationships between a protein sequence and multiply aligned sequences) perform better in these tests than PROSITE (a dictionary of sites and patterns in proteins). The HMM appears to have a slight advantage I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,608
test
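The target paper in the record above scores protein sequences against a trained HMM for discrimination and database search. The central computation behind such scoring is the forward algorithm; the sketch below is the generic discrete-emission version, not the profile-HMM architecture with match/insert/delete states used for protein families, and the parameter layout is an assumption.

def forward_probability(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm for a discrete-emission HMM.
    Returns P(obs | model); real profile-HMM code works in log space (or with
    scaling) to avoid underflow on long sequences."""
    # alpha[s] = P(observations so far, current state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for symbol in obs[1:]:
        alpha = {s: emit_p[s][symbol] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())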
1-hop neighbor's text information: (1996) "Bayesian analysis of mixtures of mixtures," : Discrete mixtures of normal distributions are widely used in modeling amplitude fluctuations of electrical potentials at synapses of human, and other animal nervous systems. The usual framework has independent data values y_j arising as y_j = μ_j + x_{n_0+j}, where the means μ_j come from some discrete prior G(·) and the unknown x_{n_0+j}'s and observed x_j, j = 1, ..., n_0, are gaussian noise terms. A practically important development of the associated statistical methods is the issue of non-normality of the noise terms, often the norm rather than the exception in the neurological context. We have recently developed models, based on convolutions of Dirichlet process mixtures, for such problems. Explicitly, we model the noise data values x_j as arising from a Dirichlet process mixture of normals, in addition to modeling the location prior G(·) as a Dirichlet process itself. This induces a Dirichlet mixture of mixtures of normals, whose analysis may be developed using Gibbs sampling techniques. We discuss these models and their analysis, and illustrate in the context of neurological response analysis. Target text information: COMPUTING DISTRIBUTIONS OF ORDER STATISTICS: Recurrence relationships among the distribution functions of order statistics of independent, but not identically distributed, random quantities are derived. These results extend known theory and provide computationally practicable algorithms for a variety of problems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
489
val
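The target abstract in the record above derives recurrences for distribution functions of order statistics of independent but not identically distributed random quantities. One direct way to compute the quantity involved, P(X_(k) <= x), is through the Poisson-binomial distribution of the number of variables falling below x; the dynamic program below is that standard computation, offered as an illustration of the object being studied rather than as the paper's own recurrences.

def order_statistic_cdf(k, cdf_values):
    """P(k-th smallest of independent X_1..X_n is <= x), given p_i = P(X_i <= x).
    This equals P(at least k of the independent indicators are 1), i.e. a
    Poisson-binomial tail probability."""
    n = len(cdf_values)
    # dp[j] = P(exactly j of the variables processed so far fall at or below x)
    dp = [1.0] + [0.0] * n
    for p in cdf_values:
        new = [0.0] * (n + 1)
        for j in range(n):
            new[j] += dp[j] * (1.0 - p)      # this variable stays above x
            new[j + 1] += dp[j] * p          # this variable falls at or below x
        dp = new
    return sum(dp[k:])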
1-hop neighbor's text information: Dynamic constraint satisfaction using case-based reasoning techniques. : The Dynamic Constraint Satisfaction Problem (DCSP) formalism has been gaining attention as a valuable and often necessary extension of the static CSP framework. Dynamic Constraint Satisfaction enables CSP techniques to be applied more extensively, since it can be applied in domains where the set of constraints and variables involved in the problem evolves with time. At the same time, the Case-Based Reasoning (CBR) community has been working on techniques by which to reuse existing solutions when solving new problems. We have observed that dynamic constraint satisfaction matches very closely the case-based reasoning process of case adaptation. These observations emerged from our previous work on combining CBR and CSP to achieve a constraint-based adaptation. This paper summarizes our previous results, describes the similarity of the challenges facing both DCSP and case adaptation, and shows how CSP and CBR can together begin to address these challenges. 1-hop neighbor's text information: Linking adaptation and similarity learning. : In current CBR systems, case adaptation is usually performed by rule-based methods that use task-specific rules hand-coded by the system developer. The ability to define those rules depends on knowledge of the task and domain that may not be available a priori, presenting a serious impediment to endowing CBR systems with the needed adaptation knowledge. This paper describes ongoing research on a method to address this problem by acquiring adaptation knowledge from experience. The method uses reasoning from scratch, based on introspective reasoning about the requirements for successful adaptation, to build up a library of adaptation cases that are stored for future reuse. We describe the tenets of the approach and the types of knowledge it requires. We sketch initial computer implementation, lessons learned, and open questions for further study. 1-hop neighbor's text information: Synergy and Commonality in Case-Based and Constraint-Based Reasoning: Although Case-Based Reasoning (CBR) is a natural formulation for many problems, our previous work on CBR as applied to design made it apparent that there were elements of the CBR paradigm that prevented it from being more widely applied. At the same time, we were evaluating Constraint Satisfaction techniques for design, and found a commonality in motivation between repair-based constraint satisfaction problems (CSP) and case adaptation. This led us to combine the two methodologies in order to gain the advantages of CSP for case-based reasoning, allowing CBR to be more widely and flexibly applied. In combining the two methodologies, we found some unexpected synergy and commonality between the approaches. This paper describes the synergy and commonality that emerged as we combined case-based and constraint-based reasoning, and gives a brief overview of our continuing and future work on exploiting the emergent synergy when combining these reasoning modes. Target text information: Towards improving case adaptability with a genetic algorithm. : Case combination is a difficult problem in Case Based Reasoning, as sub-cases often exhibit conflicts when merged together. In our previous work we formalized case combination by representing each case as a constraint satisfaction problem, and used the minimum conflicts algorithm to systematically synthesize the global solution. 
However, we also found instances of the problem in which the minimum conflicts algorithm does not perform case combination efficiently. In this paper we describe those situations in which initially retrieved cases are not easily adaptable, and propose a method by which to improve case adaptability with a genetic algorithm. We introduce a fitness function that maintains as much retrieved case information as possible, while also perturbing a sub-solution to allow subsequent case combination to proceed more efficiently. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
969
test
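The target above proposes a genetic algorithm whose fitness keeps as much retrieved-case information as possible while perturbing a sub-solution so that case combination can proceed. The fragment below is a minimal, generic sketch of such a fitness (agreement with the retrieved case minus a conflict penalty) inside a plain generational GA step; the weighting, operators, and representation are assumptions, not the paper's design.

import random

def fitness(candidate, retrieved, count_conflicts, penalty=2.0):
    """Reward agreement with the retrieved case; penalise constraint conflicts."""
    preserved = sum(1 for a, b in zip(candidate, retrieved) if a == b)
    return preserved - penalty * count_conflicts(candidate)

def next_generation(pop, retrieved, count_conflicts, domains, mut_rate=0.05, rng=random):
    """One generational step: truncation selection, one-point crossover, per-gene mutation."""
    scored = sorted(pop, key=lambda c: fitness(c, retrieved, count_conflicts), reverse=True)
    survivors = scored[: max(2, len(pop) // 2)]
    children = []
    while len(children) < len(pop):
        p1, p2 = rng.sample(survivors, 2)
        cut = rng.randrange(1, len(p1))
        child = p1[:cut] + p2[cut:]
        child = [rng.choice(domains[i]) if rng.random() < mut_rate else g
                 for i, g in enumerate(child)]
        children.append(child)
    return children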
1-hop neighbor's text information: Differential Evolution - a Simple and Efficient Heuristic for Global Optimization over Continuous Spaces, Journal of Global Optimization, : A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder & Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation. Target text information: On the Usage of Differential Evolution for Function Optimization, : assumed unless otherwise stated. Basically, DE generates new parameter vectors by adding the weighted difference between two population vectors to a third vector. If the resulting vector yields a lower objective function value than a predetermined population member, the newly generated vector replaces the vector, with which it was compared, in the next generation; otherwise, the old vector is retained. This basic principle, however, is extended when it comes to the practical variants of DE. For example, an existing vector can be perturbed by adding more than one weighted difference vector to it. In most cases, it is also worthwhile to mix the parameters of the old vector with those of the perturbed one before comparing the objective function values. Several variants of DE which have proven to be useful will be described in the I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,472
test
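The record above describes the basic principle of differential evolution: perturb a base vector with the weighted difference of two other population members, optionally mix parameters with the old vector, and keep the trial only if it does not worsen the objective. The sketch below is the common DE/rand/1/bin variant with fixed control parameters, offered as an illustration of that principle rather than of the specific variants the target paper examines.

import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, rng=random):
    """Minimise `objective` over a box; DE/rand/1 mutation with binomial crossover."""
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)               # guarantee at least one mutated gene
            trial = [a[d] + F * (b[d] - c[d]) if (rng.random() < CR or d == j_rand)
                     else pop[i][d] for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            s = objective(trial)
            if s <= scores[i]:                        # greedy one-to-one selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Example: sphere function in 3 dimensions
# differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)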