| content | label | category | dataset | node_id | split |
|---|---|---|---|---|---|
1-hop neighbor's text information: Irrelevant features and the subset selection problem. : We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.
1-hop neighbor's text information: Dietterich (1991). Learning with Many Irrelevant Features. : In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Θ((1/ε)[2^p + p ln n]) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. The paper also presents a quasi-polynomial time algorithm, FOCUS, which implements MIN-FEATURES. Experimental studies are presented that compare FOCUS to the ID3 and FRINGE algorithms. These experiments show that, contrary to expectations, these algorithms do not implement good approximations of MIN-FEATURES. The coverage, sample complexity, and generalization performance of FOCUS are substantially better than either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. This suggests that, in practical applications, training data should be preprocessed to remove irrelevant features before being
Target text information: Feature selection methods for classifications. Intelligent Data Analysis: : Feature selection is a problem of choosing a subset of relevant features. In general, only exhaustive search can bring about the optimal subset. With a monotonic measure, exhaustive search can be avoided without sacrificing optimality. Unfortunately, most error- or distance-based measures are not monotonic. A new measure is employed in this work that is monotonic and fast to compute. The search for relevant features according to this measure is guaranteed to be complete but not exhaustive. Experiments are conducted for verification.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,442 | test |
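The cross-validation wrapper idea in the first neighbor abstract above (score a candidate feature subset by how well the induction algorithm does with it) can be sketched minimally. The toy data, the 1-NN scorer, and the exhaustive search over small subsets are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: wrapper feature-subset selection by leave-one-out
# cross-validation of a 1-nearest-neighbour classifier.
from itertools import combinations

def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of 1-NN restricted to the given features."""
    correct = 0
    for i in range(len(X)):
        best_d, best_j = None, None
        for j in range(len(X)):
            if j == i:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if best_d is None or d < best_d:
                best_d, best_j = d, j
        correct += (y[best_j] == y[i])
    return correct / len(X)

def select_subset(X, y, n_features, max_size=2):
    """Exhaustively score all subsets up to max_size; return (acc, subset)."""
    best = (-1.0, ())
    for k in range(1, max_size + 1):
        for feats in combinations(range(n_features), k):
            best = max(best, (loo_accuracy(X, y, feats), feats))
    return best

# Feature 0 determines the class; feature 1 is pure noise.
X = [(0, 5), (0, 1), (1, 5), (1, 1), (0, 3), (1, 3)]
y = [0, 0, 1, 1, 0, 1]
acc, feats = select_subset(X, y, n_features=2)
```

On this toy data the wrapper keeps only the relevant feature, illustrating the point that the selected subset depends on the induction algorithm used to score it.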
1-hop neighbor's text information: Warmuth "How to use expert advice", : We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.
1-hop neighbor's text information: Context-sensitive learning methods for text categorization. : Two recently implemented machine learning algorithms, RIPPER and sleeping-experts for phrases, are evaluated on a number of large text categorization problems. These algorithms both construct classifiers that allow the "context" of a word w to affect how (or even whether) the presence or absence of w will contribute to a classification. However, RIPPER and sleeping-experts differ radically in many other respects: differences include different notions as to what constitutes a context, different ways of combining contexts to construct a classifier, different methods to search for a combination of contexts, and different criteria as to what contexts should be included in such a combination. In spite of these differences, both RIPPER and sleeping-experts perform extremely well across a wide variety of categorization problems, generally outperforming previously applied learning methods. We view this result as a confirmation of the usefulness of classifiers that represent contextual information.
1-hop neighbor's text information: Applying Winnow to Context Sensitive Spelling Correction, : Multiplicative weight-updating algorithms such as Winnow have been studied extensively in the COLT literature, but only recently have people started to use them in applications. In this paper, we apply a Winnow-based algorithm to a task in natural language: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. Previous approaches to this problem have been statistics-based; we compare Winnow to one of the more successful such approaches, which uses Bayesian classifiers. We find that: (1) When the standard (heavily-pruned) set of features is used to describe problem instances, Winnow performs comparably to the Bayesian method; (2) When the full (unpruned) set of features is used, Winnow is able to exploit the new features and convincingly outperform Bayes; and (3) When a test set is encountered that is dissimilar to the training set, Winnow is better than Bayes at adapting to the unfamiliar test set, using a strategy we will present for combining learning on the training set with unsupervised learning on the (noisy) test set.
Target text information: Mistake-driven learning in text categorization. : Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature - text categorization. We argue that these algorithms which categorize documents by learning a linear separator in the feature space have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 1,825 | train |
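The mistake-driven multiplicative update at the heart of the target abstract above (a Winnow variant) can be sketched compactly. The toy data, the promotion factor alpha = 2, and the threshold-equals-number-of-features convention are assumptions for illustration, not the paper's tuned variant.

```python
# Hypothetical sketch of Littlestone's Winnow on sparse binary features.
# Each example is (set of active feature indices, binary label); weights
# are updated multiplicatively, and only when a mistake is made.
def winnow(examples, n_features, alpha=2.0, epochs=10):
    w = [1.0] * n_features
    theta = float(n_features)  # standard threshold choice
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(w[i] for i in x) >= theta else 0
            if pred != label:  # mistake-driven: no update on correct predictions
                factor = alpha if label == 1 else 1.0 / alpha
                for i in x:
                    w[i] *= factor  # promote or demote only active features
    return w

# Target concept: label is 1 iff feature 0 is active; features 1-4 are noise.
data = [({0, 1}, 1), ({0, 3}, 1), ({1, 2}, 0), ({2, 4}, 0), ({0, 4}, 1), ({3}, 0)]
w = winnow(data, n_features=5)
pred = 1 if sum(w[i] for i in {0, 2}) >= 5.0 else 0
```

Because updates touch only the active features of a mistaken example, the cost per update is proportional to the number of active features, which is what makes this family attractive for sparse, high-dimensional text data.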
1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator.
1-hop neighbor's text information: Prior, stabilizers and basis functions : from regularization to radial, tensor and additive splines. : We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular we had discussed how standard smoothness functionals lead to a subclass of regularization networks, the well-known Radial Basis Functions approximation schemes. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same extension that leads from Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions and some forms of Projection Pursuit Regression. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we show the relation between activation functions of the Gaussian and sigmoidal type by considering the simple case of the kernel G(x) = |x|.
In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that generalize into Hyper Basis Functions, b) some tensor product splines, and c) additive splines that generalize into schemes of the type of ridge approximation, hinge functions and one-hidden-layer perceptrons. This paper describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory. This research is sponsored by grants from the Office of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041 (which includes funds from DARPA provided under the HPCC program); and by a grant from the National Institutes of Health under contract NIH 2-S07-RR07047. Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ONR contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at the Whitaker College, Massachusetts Institute of Technology. © Massachusetts Institute of Technology, 1993
Target text information: A.C. Tsoi, A.D. Back, "Function approximation with neural networks and local methods: bias, variance and smoothness", : We review the use of global and local methods for estimating a function mapping R^m → R^n from samples of the function containing noise. The relationship between the methods is examined and an empirical comparison is performed using the multi-layer perceptron (MLP) global neural network model, the single nearest-neighbour model, a linear local approximation (LA) model, and the following commonly used datasets: the Mackey-Glass chaotic time series, the Sunspot time series, British English Vowel data, TIMIT speech phonemes, building energy prediction data, and the sonar dataset. We find that the simple local approximation models often outperform the MLP. No criterion such as classification/prediction, size of the training set, dimensionality of the training set, etc. can be used to distinguish whether the MLP or the local approximation method will be superior. However, we find that if we consider histograms of the k-NN density estimates for the training datasets then we can choose the best performing method a priori by selecting local approximation when the spread of the density histogram is large and choosing the MLP otherwise. This result correlates with the hypothesis that the global MLP model is less appropriate when the characteristics of the function to be approximated varies throughout the input space. We discuss the results, the smoothness assumption often made in function approximation, and the bias/variance dilemma.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 910 | test |
1-hop neighbor's text information: Bumptrees for Efficient Function, Constraint, and Classification Learning, : A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined.
1-hop neighbor's text information: Prototype and feature selection by sampling and random mutation hill climbing algorithms. : With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term prototypes refers to the reference instances used in a nearest neighbor computation, the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes.
1-hop neighbor's text information: Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation, : Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, lack of a distance metric between various models reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models, and concentrating the computational effort at differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems.
Target text information: : Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same flexibility as a conventional linear search, but at greatly reduced computational cost.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 776 | test |
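The random mutation hill climbing scheme for prototype selection described in one of the neighbor abstracts above can be sketched as follows. The two-cluster data, the fixed prototype-set size, and the tie-accepting move rule are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch: random mutation hill climbing (RMHC) to pick a small
# set of prototype indices for 1-nearest-neighbour classification.
import random

def nn_accuracy(X, y, protos):
    """Training-set accuracy of 1-NN using only the prototype instances."""
    correct = 0
    for i in range(len(X)):
        j = min(protos, key=lambda p: sum((a - b) ** 2 for a, b in zip(X[i], X[p])))
        correct += (y[j] == y[i])
    return correct / len(X)

def rmhc(X, y, n_protos=2, iters=200, seed=0):
    rng = random.Random(seed)
    protos = rng.sample(range(len(X)), n_protos)
    score = nn_accuracy(X, y, protos)
    for _ in range(iters):
        cand = list(protos)
        cand[rng.randrange(n_protos)] = rng.randrange(len(X))  # random mutation
        cand_score = nn_accuracy(X, y, cand)
        if cand_score >= score:  # accept ties so the walk keeps moving
            protos, score = cand, cand_score
    return protos, score

# Two well-separated clusters: one prototype per class should suffice.
X = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 8), (8, 9)]
y = [0, 0, 0, 1, 1, 1]
protos, score = rmhc(X, y)
```

On this toy problem two prototypes reproduce the accuracy of storing all six instances, which is the storage-reduction effect the abstract reports on real datasets.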
1-hop neighbor's text information: Latent and manifest monotonicity in item response models:
1-hop neighbor's text information: A survey of theory and methods of invariant item ordering. To appear, : This work was initiated while Junker was visiting the University of Utrecht with the support of a Carnegie Mellon University Faculty Development Grant, and the generous hospitality of the Social Sciences Faculty, University of Utrecht. Additional support was provided by the Office of Naval Research, Cognitive Sciences Division, Grant N00014-87-K-0277 and the National Institute of Mental Health, Training Grant MH15758.
Target text information: A characterization of monotone unidimensional latent variable models. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 2,490 | test |
1-hop neighbor's text information: A Knowledge-Based Framework for Belief Change, Part II: Revision and Update. : The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper [FH94b] we introduced a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the changes of beliefs over time. In this paper we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method and to better understand the principles underlying them. In particular, it allows us to understand the source of Gardenfors' triviality result for belief revision [Gar86] and suggests a way of mitigating the problem. It also shows that Katsuno and Mendelzon's notion of belief update [KM91a] depends on several strong assumptions that may limit its applicability in AI.
1-hop neighbor's text information: Modeling belief in dynamic systems. Part I: foundations. : The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper [Friedman and Halpern 1997a], we introduce a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the change of beliefs over time. In this paper, we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method, and to better understand the principles underlying them. In particular, it shows that Katsuno and Mendelzon's notion of belief update [Katsuno and Mendelzon 1991a] depends on several strong assumptions that may limit its applicability in artificial intelligence. Finally, our analysis allows us to identify a notion of minimal change that underlies a broad range of belief change operations including revision and update. * Some of this work was done while both authors were at the IBM Almaden Research Center. The first author was also at Stanford while much of the work was done. IBM and Stanford's support are gratefully acknowledged. The work was also supported in part by the Air Force Office of Scientific Research (AFSC), under Contract F49620-91-C-0080 and grant F94620-96-1-0323 and by NSF under grants IRI-95-03109 and IRI-96-25901. The first author was also supported in part by an IBM Graduate Fellowship and by Rockwell Science Center. A preliminary version of this paper appears in J. Doyle, E. Sandewall, and P. Torasso (Eds.), Principles of Knowledge Representation and Reasoning: Proc. Fourth International Conference (KR '94), 1994, pp. 190-201, under the title "A knowledge-based framework for belief change, Part II: revision and update."
Target text information: A LOGICAL APPROACH TO REASONING ABOUT UNCERTAINTY: A TUTORIAL: * This paper will appear in Discourse, Interaction, and Communication, X. Arrazola, K. Korta, and F. J. Pelletier, eds., Kluwer, 1997. Much of this work was performed while the author was at IBM Almaden Research Center. IBM's support is gratefully acknowledged.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 328 | test |
1-hop neighbor's text information: On the convergence properties of the EM algorithm. : In this article we investigate the relationship between the two popular algorithms, the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler by Gaussian approximation is equal to that of the corresponding EM type algorithm. This helps in implementing either of the algorithms as improvement strategies for one algorithm can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also obtain a result that under conditions, the EM algorithm used for finding the maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference which uses proper prior distributions. We illustrate our results in a number of realistic examples all based on the generalized linear mixed models.
Target text information: "Convergence in norm for alternating expectation-maximization (em) type algorithms," : We provide a sufficient condition for convergence of a general class of alternating estimation-maximization (EM) type continuous-parameter estimation algorithms with respect to a given norm. This class includes EM, penalized EM, Green's OSL-EM, and other approximate EM algorithms. The convergence analysis can be extended to include alternating coordinate-maximization EM algorithms such as Meng and Rubin's ECM and Fessler and Hero's SAGE. The condition for monotone convergence can be used to establish norms under which the distance between successive iterates and the limit point of the EM-type algorithm approaches zero monotonically. For illustration, we apply our results to estimation of Poisson rate parameters in emission tomography and establish that in the final iterations the logarithm of the EM iterates converge monotonically in a weighted Euclidean norm.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 2,429 | test |
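The monotone-convergence property that both EM abstracts above analyze (successive EM iterates never decrease the observed-data log-likelihood) can be illustrated on a small example. The two-component unit-variance 1-D Gaussian mixture and the toy data are assumptions for illustration, not the models studied in either paper.

```python
# Hypothetical sketch: EM for a two-component 1-D Gaussian mixture with
# fixed unit variances; the log-likelihood increases monotonically.
import math

def em_step(xs, pi, mu1, mu2):
    # E-step: posterior responsibility of component 1 for each point.
    r = []
    for x in xs:
        a = pi * math.exp(-0.5 * (x - mu1) ** 2)
        b = (1 - pi) * math.exp(-0.5 * (x - mu2) ** 2)
        r.append(a / (a + b))
    # M-step: responsibility-weighted updates of the mixing weight and means.
    n1 = sum(r)
    pi = n1 / len(xs)
    mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
    mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - n1)
    return pi, mu1, mu2

def log_lik(xs, pi, mu1, mu2):
    c = -0.5 * math.log(2 * math.pi)
    return sum(math.log(pi * math.exp(-0.5 * (x - mu1) ** 2)
                        + (1 - pi) * math.exp(-0.5 * (x - mu2) ** 2)) + c
               for x in xs)

xs = [-2.1, -1.9, -2.0, 1.8, 2.2, 2.0]
params = (0.5, -1.0, 1.0)
liks = []
for _ in range(20):
    liks.append(log_lik(xs, *params))
    params = em_step(xs, *params)
```

Tracking the sequence of log-likelihood values is exactly the kind of monotonicity, here in the scalar likelihood rather than a norm on the iterates, that the convergence analyses above make precise.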
1-hop neighbor's text information: On the Effectiveness of Evolutionary Search in High-Dimensional NK-Landscapes: NK-landscapes offer the ability to assess the performance of evolutionary algorithms on problems with different degrees of epistasis. In this paper, we study the performance of six algorithms in NK-landscapes with low and high dimension while keeping the amount of epistatic interactions constant. The results show that compared to genetic local search algorithms, the performance of standard genetic algorithms employing crossover or mutation significantly decreases with increasing problem size. Furthermore, with increasing K, crossover based algorithms are in both cases outperformed by mutation based algorithms. However, the relative performance differences between the algorithms grow significantly with the dimension of the search space, indicating that it is important to consider high-dimensional landscapes for evaluating the performance of evolutionary algorithms.
Target text information: Smith (1995), "A genetic approach to the quadratic assignment problem", : Augmenting genetic algorithms with local search heuristics is a promising approach to the solution of combinatorial optimization problems. In this paper, a genetic local search approach to the quadratic assignment problem (QAP) is presented. New genetic operators for realizing the approach are described, and its performance is tested on various QAP instances containing between 30 and 256 facilities/locations. The results indicate that the proposed algorithm is able to arrive at high quality solutions in a relatively short time limit: for the largest publicly known problem instance, a new best solution could be found.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 1,342 | test |
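The local-search component of the genetic local search approach in the target abstract above, applied within each GA generation, can be sketched as a pairwise-swap descent on a QAP permutation. The 4x4 flow and distance matrices are made up for illustration; the paper's genetic operators are not reproduced here.

```python
# Hypothetical sketch: pairwise-swap local search for the QAP.
# perm[i] is the location assigned to facility i; cost sums
# flow-between-facilities times distance-between-their-locations.
def qap_cost(perm, flow, dist):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def swap_local_search(perm, flow, dist):
    """Apply improving facility-location swaps until none remains."""
    perm = list(perm)
    improved = True
    while improved:
        improved = False
        best = qap_cost(perm, flow, dist)
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                perm[i], perm[j] = perm[j], perm[i]
                c = qap_cost(perm, flow, dist)
                if c < best:
                    best, improved = c, True  # keep the improving swap
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo
    return perm, best

flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
dist = [[0, 22, 53, 53], [22, 0, 40, 62], [53, 40, 0, 55], [53, 62, 55, 0]]
perm, cost = swap_local_search([0, 1, 2, 3], flow, dist)
```

In a genetic local search scheme each offspring produced by crossover or mutation would be passed through a routine like this before entering the population, which is what distinguishes the hybrid from a plain GA.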
1-hop neighbor's text information: BRACE: A Paradigm For the Discretization of Continuously Valued Data, : Discretization of continuously valued data is a useful and necessary tool because many learning paradigms assume nominal data. A list of objectives for efficient and effective discretization is presented. A paradigm called BRACE (Boundary Ranking And Classification Evaluation) that attempts to meet the objectives is presented along with an algorithm that follows the paradigm. The paradigm meets many of the objectives, with potential for extension to meet the remainder. Empirical results have been promising. For these reasons BRACE has potential as an effective and efficient method for discretization of continuously valued data. A further advantage of BRACE is that it is general enough to be extended to other types of clustering/unsupervised learning.
1-hop neighbor's text information: Priority ASOCS. : This paper presents an ASOCS (Adaptive Self-Organizing Concurrent System) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. An ASOCS can operate in either a data processing mode or a learning mode. During data processing mode, an ASOCS acts as a parallel hardware circuit. During learning mode, an ASOCS incorporates a rule expressed as a Boolean conjunction in a distributed fashion in time logarithmic in the number of rules. This paper proposes a learning algorithm and architecture for Priority ASOCS. This new ASOCS model uses rules with priorities. The new model has significant learning time and space complexity improvements over previous models. Non-von Neumann architectures such as neural networks attack the word-at-a-time bottleneck of traditional computing systems [1]. Neural networks learn input-output mappings using highly distributed processing and memory [10,11,12]. Their numerous simple processing elements with modifiable weighted links permit a high degree of parallelism. A typical neural network has fixed topology. It learns by modifying weighted links between nodes. A new class of connectionist architectures has been proposed called ASOCS (Adaptive Self-Organizing Concurrent Systems) [4,5]. ASOCS models support efficient computation through self-organized learning and parallel execution. Learning is done through the incremental presentation of rules and/or examples. ASOCS models learn by modifying their topology. Data types include Boolean and multi-state variables; recent models support analog variables. The model incorporates rules into an adaptive logic network in a parallel and self organizing fashion. 
In processing mode, ASOCS supports fully parallel execution on actual inputs according to the learned rules. The adaptive logic network acts as a parallel hardware circuit during execution, mapping n input boolean vectors into m output boolean vectors, in a combinatoric fashion. The overall philosophy of ASOCS follows the high level goals of current neural network models. However, the mechanisms of learning and execution vary significantly. The ASOCS logic network is topologically dynamic with the network growing to efficiently fit the specific application. Current ASOCS models are based on digital nodes. ASOCS also supports use of symbolic and heuristic learning mechanisms, thus combining the parallelism and distributed nature of connectionist computing with the potential power of AI symbolic learning. A proof of concept ASOCS chip has been developed [2].
1-hop neighbor's text information: A self-adjusting dynamic logic module. : This paper presents an ASOCS (Adaptive Self-Organizing Concurrent System) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. This paper focuses on Adaptive Algorithm 2 (AA2) and details its architecture and learning algorithm. AA2 has significant memory and knowledge maintenance advantages over previous ASOCS models. An ASOCS can operate in either a data processing mode or a learning mode. During learning mode, the ASOCS is given a new rule expressed as a boolean conjunction. The AA2 learning algorithm incorporates the new rule in a distributed fashion in a short, bounded time. During data processing mode, the ASOCS acts as a parallel hardware circuit.
Target text information: Automatic Feature Extraction in Machine Learning: This thesis presents a machine learning model capable of extracting discrete classes out of continuous valued input features. This is done using a neurally inspired novel competitive classifier (CC) which feeds the discrete classifications forward to a supervised machine learning model. The supervised learning model uses the discrete classifications and perhaps other information available to solve a problem. The supervised learner then generates feedback to guide the CC into potentially more useful classifications of the continuous valued input features. Two supervised learning models are combined with the CC creating ASOCS-AFE and ID3-AFE. Both models are simulated and the results are analyzed. Based on these results, several areas of future research are proposed.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 320 | test |
1-hop neighbor's text information: Bayes factors and model uncertainty. : Technical Report no. 255 Department of Statistics, University of Washington August 1993; Revised March 1994
1-hop neighbor's text information: Accounting for model uncertainty in survival analysis improves predictive performance (with Discussion). In Bayesian Statistics 5 (J.M. : Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significance tests to select a single model, and then to make inference conditionally on the selected model. However, this ignores model uncertainty, which can be substantial. We review the standard Bayesian model averaging solution to this problem and extend it to survival analysis, introducing partial Bayes factors to do so for the Cox proportional hazards model. In two examples, taking account of model uncertainty enhances predictive performance, to an extent that could be clinically useful.
1-hop neighbor's text information: Principal Curve Clustering with Noise. : Technical Report 317 Department of Statistics University of Washington. 1 Derek Stanford is Graduate Research Assistant and Adrian E. Raftery is Professor of Statistics and Sociology, both at the Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322, USA. E-mail: stanford@stat.washington.edu and raftery@stat.washington.edu. Web: http://www.stat.washington.edu/raftery. This research was supported by ONR grants N00014-96-1-0192 and N00014-96-1-0330. The authors are grateful to Simon Byers, Gilles Celeux and Christian Posse for helpful discussions.
Target text information: A reference Bayesian test for nested hypotheses and its relationship to the Schwarz criterion. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 1,619 | test |
1-hop neighbor's text information: Evolution-Based Methods for Selecting Point Data for Object Localization: Applications to Surgery: Object localization has applications in many areas of engineering and science. The goal is to spatially locate an arbitrarily-shaped object. In many applications, it is desirable to minimize the number of measurements collected for this purpose, while ensuring sufficient localization accuracy. In surgery, for example, collecting a large number of localization measurements may either extend the time required to perform a surgical procedure, or increase the radiation dosage to which a patient is exposed. Localization accuracy is a function of the spatial distribution of discrete measurements over an object when measurement noise is present. In [Simon et al., 1995a], metrics were presented to evaluate the information available from a set of discrete object measurements. In this study, new approaches to the discrete point data selection problem are described. These include hillclimbing, genetic algorithms (GAs), and Population-Based Incremental Learning (PBIL). Extensions of the standard GA and PBIL methods, which employ multiple parallel populations, are explored. The results of extensive empirical testing are provided. The results suggest that a combination of PBIL and hillclimbing result in the best overall performance. A computer-assisted surgical system which incorporates some of the methods presented in this paper is currently being evaluated in cadaver trials. Shumeet Baluja was supported by a National Science Foundation Graduate Student Fellowship and a Graduate Student Fellowship from the National Aeronautics and Space Administration, administered by the Lyndon B. Johnson Space Center, Houston, TX. David Simon was partially supported by a National Science Foundation National Challenge grant (award IRI-9422734).
1-hop neighbor's text information: An evolutionary approach to com-binatorial optimization problems. : The paper reports on the application of genetic algorithms, probabilistic search algorithms based on the model of organic evolution, to NP-complete combinatorial optimization problems. In particular, the subset sum, maximum cut, and minimum tardy task problems are considered. Except for the fitness function, no problem-specific changes of the genetic algorithm are required in order to achieve results of high quality even for the problem instances of size 100 used in the paper. For constrained problems, such as the subset sum and the minimum tardy task, the constraints are taken into account by incorporating a graded penalty term into the fitness function. Even for large instances of these highly multimodal optimization problems, an iterated application of the genetic algorithm is observed to find the global optimum within a number of runs. As the genetic algorithm samples only a tiny fraction of the search space, these results are quite encouraging.
1-hop neighbor's text information: Chapter 4 Empirical comparison of stochastic algorithms Empirical comparison of stochastic algorithms in a graph: There are several stochastic methods that can be used for solving NP-hard optimization problems approximatively. Examples of such algorithms include (in order of increasing computational complexity) stochastic greedy search methods, simulated annealing, and genetic algorithms. We investigate which of these methods is likely to give best performance in practice, with respect to the computational effort each requires. We study this problem empirically by selecting a set of stochastic algorithms with varying computational complexity, and by experimentally evaluating for each method how the goodness of the results achieved improves with increasing computational time. For the evaluation, we use a graph optimization problem, which is closely related to several real-world practical problems. To get a wider perspective of the goodness of the achieved results, the stochastic methods are also compared against special-case greedy heuristics. This investigation suggests that although genetic algorithms can provide good results, simpler stochastic algorithms can achieve similar performance more quickly.
Target text information: Stochastic hillclimbing as a baseline method for evaluating genetic algorithms. : We investigate the effectiveness of stochastic hillclimbing as a baseline for evaluating the performance of genetic algorithms (GAs) as combinatorial function optimizers. In particular, we address four problems to which GAs have been applied in the literature: the maximum cut problem, Koza's 11-multiplexer problem, MDAP (the Multiprocessor Document Allocation Problem), and the jobshop problem. We demonstrate that simple stochastic hillclimbing methods are able to achieve results comparable or superior to those obtained by the GAs designed to address these four problems. We further illustrate, in the case of the jobshop problem, how insights obtained in the formulation of a stochastic hillclimbing algorithm can lead to improvements in the encoding used by a GA. * Department of Computer Science, University of California at Berkeley. Supported by a NASA Graduate Fellowship. This paper was written while the author was a visiting researcher at the Ecole Normale Superieure-rue d'Ulm, Groupe de BioInformatique, France. E-mail: juels@cs.berkeley.edu † Department of Mathematics, University of California at Berkeley. Supported by an NDSEG Graduate Fellowship. E-mail: wattenbe@math.berkeley.edu
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 1,665 | test |
1-hop neighbor's text information: Extensions of Fill's algorithm for perfect simulation. : Fill's algorithm for perfect simulation for attractive finite state space models, unbiased for user impatience, is presented in terms of stochastic recursive sequences and extended in two ways. Repulsive discrete Markov random fields with two coding sets like the auto-Poisson distribution on a lattice with 4-neighbourhood can be treated as monotone systems if a particular partial ordering and quasi-maximal and quasi-minimal states are used. Fill's algorithm then applies directly. Combining Fill's rejection sampling with sandwiching leads to a version of the algorithm, which works for general discrete conditionally specified repulsive models. Extensions to other types of models are briefly discussed.
1-hop neighbor's text information: Perfect Simulation in Stochastic Geometry: Simulation plays an important role in stochastic geometry and related fields, because all but the simplest random set models tend to be intractable to analysis. Many simulation algorithms deliver (approximate) samples of such random set models, for example by simulating the equilibrium distribution of a Markov chain such as a spatial birth-and-death process. The samples usually fail to be exact because the algorithm simulates the Markov chain for a long but finite time, and thus convergence to equilibrium is only approximate. The seminal work by Propp and Wilson made an important contribution to simulation by proposing a coupling method, Coupling from the Past (CFTP), which delivers perfect, that is to say exact, simulations of Markov chains. In this paper we introduce this new idea of perfect simulation and illustrate it using two common models in stochastic geometry: the dead leaves model and a Boolean model conditioned to cover a finite set of points.
Target text information: (1997) Perfect Simulation of some Point Processes for the Impatient User, Advances in Applied Probability, Stochastic Geometry and Statistical Applications, : Recently Propp and Wilson [14] have proposed an algorithm, called Coupling from the Past (CFTP), which allows not only an approximate but perfect (i.e. exact) simulation of the stationary distribution of certain finite state space Markov chains. Perfect Sampling using CFTP has been successfully extended to the context of point processes, amongst other authors, by Haggstrom et al. [5]. In [5] Gibbs sampling is applied to a bivariate point process, the penetrable spheres mixture model [19]. However, in general the running time of CFTP in terms of number of transitions is not independent of the state sampled. Thus an impatient user who aborts long runs may introduce a subtle bias, the user impatience bias. Fill [3] introduced an exact sampling algorithm for finite state space Markov chains which, in contrast to CFTP, is unbiased for user impatience. Fill's algorithm is a form of rejection sampling and similar to CFTP requires sufficient monotonicity properties of the transition kernel used. We show how Fill's version of rejection sampling can be extended to an infinite state space context to produce an exact sample of the penetrable spheres mixture process and related models. Following [5] we use Gibbs sampling and make use of the partial order of the mixture model state space. Thus
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 2,283 | test |
1-hop neighbor's text information: Learning default concepts. : Classical concepts, based on necessary and sufficient defining conditions, cannot classify logically insufficient object descriptions. Many reasoning systems avoid this limitation by using "default concepts" to classify incompletely described objects. This paper addresses the task of learning such default concepts from observational data. We first model the underlying performance task | classifying incomplete examples | as a probabilistic process that passes random test examples through a "blocker" that can hide object attributes from the classifier. We then address the task of learning accurate default concepts from random training examples. After surveying the learning techniques that have been proposed for this task in the machine learning and knowledge representation literatures, and investigating their relative merits, we present a more data-efficient learning technique, developed from well-known statistical principles. Finally, we extend Valiant's pac-learning framework to this context and obtain a number of useful learnability results. Appears in the Proceedings of the Tenth Canadian Conference on Artificial Intelligence (CSCSI-94),
Target text information: Knowing what doesn't matter: Exploiting (intentionally) omitted superfluous data. : Most inductive inference algorithms (i.e., "learners") work most effectively when their training data contain completely specified labeled samples. In many diagnostic tasks, however, the data will include the values of only some of the attributes; we model this as a blocking process that hides the values of those attributes from the learner. While blockers that remove the values of critical attributes can handicap a learner, this paper instead focuses on blockers that remove only superfluous attribute values, i.e., values that are not needed to classify an instance, given the values of the other unblocked attributes. We first motivate and formalize this model of "superfluous-value blocking," and then demonstrate that these omissions can be useful, by showing that certain classes that seem hard to learn in the general PAC model | viz., decision trees | are trivial to learn in this setting, and can even be learned in a manner that is very robust to classification noise. We also discuss how this model can be extended to deal with (1) theory revision (i.e., modifying an existing decision tree); (2) "complex" attributes (which correspond to combinations of other atomic attributes); (3) blockers that occasionally include superfluous values or exclude required values; and (4) other hypothesis classes (e.g., DNF formulae). Declaration: This paper has not already been accepted by and is not currently under review for a journal or another conference, nor will it be submitted for such during IJCAI's review period. * This is an extended version of a paper that appeared in working notes of the 1994 AAAI Fall Symposium on "Relevance", New Orleans, November 1994. † Authors listed alphabetically. We gratefully acknowledge receiving helpful comments from Dale Schuurmans and George Drastal.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 12 | test |
1-hop neighbor's text information: Comparison of neural net and conventional techniques for lighting control. : We compare two techniques for lighting control in an actual room equipped with seven banks of lights and photoresistors to detect the lighting level at four sensing points. Each bank of lights can be independently set to one of sixteen intensity levels. The task is to determine the device intensity levels that achieve a particular configuration of sensor readings. One technique we explored uses a neural network to approximate the mapping between sensor readings and device intensity levels. The other technique we examined uses a conventional feedback control loop. The neural network approach appears superior both in that it does not require experimentation on the fly (and hence fluctuating light intensity levels during settling, and lengthy settling times) and in that it can deal with complex interactions that conventional control techniques do not handle well. This comparison was performed as part of the "Adaptive House" project, which is described briefly. Further directions for control in the
1-hop neighbor's text information: "Refining PID controllers using neural networks," : The Kbann approach uses neural networks to refine knowledge that can be written in the form of simple propositional rules. We extend this idea further by presenting the Manncon algorithm by which the mathematical equations governing a PID controller determine the topology and initial weights of a network, which is further trained using backpropagation. We apply this method to the task of controlling the outflow and temperature of a water tank, producing statistically-significant gains in accuracy over both a standard neural network approach and a non-learning PID controller. Furthermore, using the PID knowledge to initialize the weights of the network produces statistically less variation in testset accuracy when compared to networks initialized with small random numbers.
Target text information: The Neural Network House: An overview. : Typical home comfort systems utilize only rudimentary forms of energy management and conservation. The most sophisticated technology in common use today is an automatic setback thermostat. Tremendous potential remains for improving the efficiency of electric and gas usage. However, home residents who are ignorant of the physics of energy utilization cannot design environmental control strategies, but neither can energy management experts who are ignorant of the behavior patterns of the inhabitants. Adaptive control seems the only alternative. We have begun building an adaptive control system that can infer appropriate rules of operation for home comfort systems based on the lifestyle of the inhabitants and energy conservation goals. Recent research has demonstrated the potential of neural networks for intelligent control. We are constructing a prototype control system in an actual residence using neural network reinforcement learning and prediction techniques. The residence is equipped with sensors to provide information about environmental conditions (e.g., temperatures, ambient lighting level, sound and motion in each room) and actuators to control the gas furnace, electric space heaters, gas hot water heater, lighting, motorized blinds, ceiling fans, and dampers in the heating ducts. This paper presents an overview of the project as it now stands.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 5 | Reinforcement Learning | cora | 897 | test |
1-hop neighbor's text information: Paying attention to the right things: Issues of focus in case-based creative design. : Case-based reasoning can be used to explain many creative design processes, since much creativity stems from using old solutions in novel ways. To understand the role cases play, we conducted an exploratory study of a seven-week student creative design project. This paper discusses the observations we made and the issues that arise in understanding and modeling creative design processes. We found particularly interesting the role of imagery in reminding and in evaluating design options. This included visualization, mental simulation, gesturing, and even sound effects. An important class of issues we repeatedly encounter in our modeling efforts concerns the focus of the designer. (For example, which problem constraints should be reformulated? Which evaluative issues should be raised?) Cases help to address these focus issues.
1-hop neighbor's text information: Opportunistic Reasoning: A Design Perspective. : An essential component of opportunistic behavior is opportunity recognition, the recognition of those conditions that facilitate the pursuit of some suspended goal. Opportunity recognition is a special case of situation assessment, the process of sizing up a novel situation. The ability to recognize opportunities for reinstating suspended problem contexts (one way in which goals manifest themselves in design) is crucial to creative design. In order to deal with real world opportunity recognition, we attribute limited inferential power to relevant suspended goals. We propose that goals suspended in the working memory monitor the internal (hidden) representations of the currently recognized objects. A suspended goal is satisfied when the current internal representation and a suspended goal match. We propose a computational model for working memory and we compare it with other relevant theories of opportunistic planning. This working memory model is implemented as part of our IMPROVISER system.
1-hop neighbor's text information: From Design Experiences to Generic Mechanisms: Model-Based Learning in Analogical Design. : Analogical reasoning plays an important role in design. In particular, cross-domain analogies appear to be important in innovative and creative design. However, making cross-domain analogies is hard and often requires abstractions common to the source and target domains. Recent work in case-based design suggests that generic mechanisms are one type of abstractions useful in adapting past designs. However, one important yet unexplored issue is where these generic mechanisms come from. We hypothesize that they are acquired incrementally from design experiences in familiar domains by generalization over patterns of regularity. Three important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. In this paper, we describe how structure-behavior-function models of designs in a familiar domain provide the content, and together with the problem-solving context in which learning occurs, also provide the constraints for learning generic mechanisms from design experiences. In particular, we describe the model-based learning method with a scenario of learning of feedback mechanism.
Target text information: an Opportunistic Enterprise: Tech Report GIT-COGSCI-97/04 Abstract This paper identifies goal handling processes that begin to account for the kind of processes involved in invention. We identify new kinds of goals with special properties and mechanisms for processing such goals, as well as means of integrating opportunism, deliberation, and social interaction into goal/plan processes. We focus on invention goals, which address significant enterprises associated with an inventor. Invention goals represent seed goals of an expert, around which the whole knowledge of an expert gets reorganized and grows more or less opportunistically. Invention goals reflect the idiosyncrasy of thematic goals among experts. They constantly increase the sensitivity of individuals for particular events that might contribute to their satisfaction. Our exploration is based on a well-documented example: the invention of the telephone by Alexander Graham Bell. We propose mechanisms to explain: (1) how Bell's early thematic goals gave rise to the new goals to invent the multiple telegraph and the telephone, and (2) how the new goals interacted opportunistically. Finally, we describe our computational model, ALEC, that accounts for the role of goals in invention.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 171 | test |
1-hop neighbor's text information: Word Perfect Corp. LIA: A Location-Independent Transformation for ASOCS Adaptive Algorithm 2: Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. ANNs that use dynamic topologies have shown ability to overcome many of these problems. Adaptive Self Organizing Concurrent Systems (ASOCS) are a class of learning models with inherently dynamic topologies. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing learning models that use dynamic topologies efficiently in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents the Location - Independent ASOCS (LIA) model as a LIT for ASOCS Adaptive Algorithm 2. The description of LIA gives formal definitions for LIA algorithms. Because LIA implements basic ASOCS mechanisms, these definitions provide a formal description of basic ASOCS mechanisms in general, in addition to LIA.
1-hop neighbor's text information: A VLSI Implementation of a Parallel, Self-Organizing Learning Model, : This paper presents a VLSI implementation of the Priority Adaptive Self-Organizing Concurrent System (PASOCS) learning model that is built using a multi-chip module (MCM) substrate. Many current hardware implementations of neural network learning models are direct implementations of classical neural network structures|a large number of simple computing nodes connected by a dense number of weighted links. PASOCS is one of a class of ASOCS (Adaptive Self-Organizing Concurrent System) connectionist models whose overall goal is the same as classical neural networks models, but whose functional mechanisms differ significantly. This model has potential application in areas such as pattern recognition, robotics, logical inference, and dynamic control.
1-hop neighbor's text information: Growing Layers of Perceptrons: Introducing the Extentron Algorithm, : observations of perceptrons: (1) when the perceptron learning algorithm cycles among hyperplanes, the hyperplanes may be compared to select one that gives a best split of the examples, and (2) it is always possible for the perceptron to build a hyperplane that separates at least one example from all the rest. We describe the Extentron which grows multi-layer networks capable of distinguishing non-linearly-separable data using the simple perceptron rule for linear threshold units. The resulting algorithm is simple, very fast, scales well to large problems, retains the convergence properties of the perceptron, and can be completely specified using only two parameters. Results are presented comparing the Extentron to other neural network paradigms and to symbolic learning systems.
Target text information: An Efficient Transformation for Implementing Two-Layer FeedForward Neural Networks. : Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. Variations of ANNs that use dynamic topologies have shown ability to overcome many of these problems. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing distributed feedforward networks that use dynamic topologies (dynamic ANNs) efficiently in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents an LIT for dynamic Backpropagation networks with a single hidden layer. The complexity of both learning and execution algorithms is O(n + p + log m) for a single pattern, where n is the number of inputs, p is the number of outputs, and m is the number of hidden nodes in the original network. Keywords: Neural Networks, Backpropagation, Implementation Design, Dynamic Topologies, Reconfigurable Architectures.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,673 | val |
1-hop neighbor's text information: Efficient algorithms for learning to play repeated games against computationally bounded adversaries. : We study the problem of efficiently learning to play a game optimally against an unknown adversary chosen from a computationally bounded class. We both contribute to the line of research on playing games against finite automata, and expand the scope of this research by considering new classes of adversaries. We introduce the natural notions of games against recent history adversaries (whose current action is determined by some simple boolean formula on the recent history of play), and games against statistical adversaries (whose current action is determined by some simple function of the statistics of the entire history of play). In both cases we give efficient algorithms for learning to play penny-matching and a more difficult game called contract . We also give the most powerful positive result to date for learning to play against finite automata, an efficient algorithm for learning to play any game against any finite automata with probabilistic actions and low cover time.
1-hop neighbor's text information: Slonim. The power of team exploration: Two robots can learn unlabeled directed graphs. : We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by wandering actively through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs.
1-hop neighbor's text information: Exactly learning automata with small cover time. : We present algorithms for exactly learning unknown environments that can be described by deterministic finite automata. The learner performs a walk on the target automaton, where at each step it observes the output of the state it is at, and chooses a labeled edge to traverse to the next state. The learner has no means of a reset, and does not have access to a teacher that answers equivalence queries and gives the learner counterexamples to its hypotheses. We present two algorithms: The first is for the case in which the outputs observed by the learner are always correct, and the second is for the case in which the outputs might be corrupted by random noise. The running times of both algorithms are polynomial in the cover time of the underlying graph of the target automaton.
Target text information: The Power of a Pebble: Exploring and Mapping Directed Graphs: Exploring and mapping an unknown environment is a fundamental problem, which is studied in a variety of contexts. Many works have focused on finding efficient solutions to restricted versions of the problem. In this paper, we consider a model that makes very limited assumptions on the environment and solve the mapping problem in this general setting. We model the environment by an unknown directed graph G, and consider the problem of a robot exploring and mapping G. We do not assume that the vertices of G are labeled, and thus the robot has no hope of succeeding unless it is given some means of distinguishing between vertices. For this reason we provide the robot with a pebble: a device that it can place on a vertex and use to identify the vertex later. In this paper we show: (1) If the robot knows an upper bound on the number of vertices then it can learn the graph efficiently with only one pebble. (2) If the robot does not know an upper bound on the number of vertices n, then Θ(log log n) pebbles are both necessary and sufficient. In both cases our algorithms are deterministic.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 260 | test |
1-hop neighbor's text information: Generality versus size in genetic programming. : Genetic Programming (GP) uses variable size representations as programs. Size becomes an important and interesting emergent property of the structures evolved by GP. The size of programs can be both a controlling and a controlled factor in GP search. Size influences the efficiency of the search process and is related to the generality of solutions. This paper analyzes the size and generality issues in standard GP and GP using subroutines and addresses the question whether such an analysis can help control the search process. We relate the size, generalization and modularity issues for programs evolved to control an agent in a dynamic and non-deterministic environment, as exemplified by the Pac-Man game.
Target text information: Evolving a generalised behavior: Artificial ant problem revisited. : This research aims to demonstrate that a solution for artificial ant problem [4] is very likely to be non-general and relying on the specific characteristics of the Santa Fe trail. It then presents a consistent method which promotes producing general solutions. Using the concepts of training and testing from machine learning research, the method can be useful in producing general behaviours for simulation environments.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 2,119 | test |
1-hop neighbor's text information: Learning Concept Classification Rules Using Genetic Algorithms. : In this paper, we explore the use of genetic algorithms (GAs) as a key element in the design and implementation of robust concept learning systems. We describe and evaluate a GA-based system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The use of GAs is motivated by recent studies showing the effects of various forms of bias built into different concept learning systems, resulting in systems that perform well on certain concept classes (generally, those well matched to the biases) and poorly on others. By incorporating a GA as the underlying adaptive search mechanism, we are able to construct a concept learning system that has a simple, unified architecture with several important features. First, the system is surprisingly robust even with minimal bias. Second, the system can be easily extended to incorporate traditional forms of bias found in other concept learning systems. Finally, the architecture of the system encourages explicit representation of such biases and, as a result, provides for an important additional feature: the ability to dynamically adjust system bias. The viability of this approach is illustrated by comparing the performance of GABIL with that of four other more traditional concept learners (AQ14, C4.5, ID5R, and IACL) on a variety of target concepts. We conclude with some observations about the merits of this approach and about possible extensions.
Target text information: STRUCTURAL LEARNING OF FUZZY RULES FROM NOISED EXAMPLES: Inductive learning algorithms try to obtain the knowledge of a system from a set of examples. One of the most difficult problems in machine learning consists in getting the structure of this knowledge. We propose an algorithm able to manage with fuzzy information and able to learn the structure of the rules that represent the system. The algorithm gives a reasonable small set of fuzzy rules that represent the original set of examples.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 689 | test |
1-hop neighbor's text information: D.M. Chiarulli, On-Line Prediction of Multiprocessor Memory Access Patterns, : Technical Report UMIACS-TR-96-59 and CS-TR-3676 Institute for Advanced Computer Studies University of Maryland College Park, MD 20742 Abstract Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit. However, these INs are often plagued by undesirable reconfiguration time that is primarily due to control latency, the amount of time delay that the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PU's job is to anticipate and reduce control configuration time, the major component of the control latency. Three different on-line prediction techniques were tested to learn and predict repetitive memory access patterns for three typical parallel processing applications, the 2-D relaxation algorithm, matrix multiply and Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide needed memory access paths before they were requested. Three prediction techniques were used and tested: 1) a Markov predictor, 2) a linear predictor, and 3) a time delay neural network (TDNN) predictor. As expected, different predictors performed best on different applications, however, the TDNN produced the best overall results.
Target text information: D.M. Chiarulli, Predictive Control of Opto-Electronic Reconfigurable Interconnection Networks using Neural Networks, : Opto-electronic reconfigurable interconnection networks are limited by significant control latency when used in large multiprocessor systems. This latency is the time required to analyze the current traffic and reconfigure the network to establish the required paths. The goal of latency hiding is to minimize the effect of this control overhead. In this paper, we introduce a technique that performs latency hiding by learning the patterns of communication traffic and using that information to anticipate the need for communication paths. Hence, the network provides the required communication paths before a request for a path is made. In this study, the communication patterns (memory accesses) of a parallel program are used as input to a time delay neural network (TDNN) to perform on-line training and prediction. These predicted communication patterns are used by the interconnection network controller that provides routes for the memory requests. Based on our experiments, the neural network was able to learn highly repetitive communication patterns, and was thus able to predict the allocation of communication paths, resulting in a reduction of communication latency.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 901 | val |
1-hop neighbor's text information: Probabilistic evaluation of sequential plans from causal models with hidden variables. : The paper concerns the probabilistic evaluation of plans in the presence of unmeasured variables, each plan consisting of several concurrent or sequential actions. We establish a graphical criterion for recognizing when the effects of a given plan can be predicted from passive observations on measured variables only. When the criterion is satisfied, a closed-form expression is provided for the probability that the plan will achieve a specified goal.
1-hop neighbor's text information: UNIVERSAL FORMULAS FOR TREATMENT EFFECTS FROM NONCOMPLIANCE DATA: This paper establishes formulas that can be used to bound the actual treatment effect in any experimental study in which treatment assignment is random but subject compliance is imperfect. These formulas provide the tightest bounds on the average treatment effect that can be inferred given the distribution of assignments, treatments, and responses. Our results reveal that even with high rates of noncompliance, experimental data can yield significant and sometimes accurate information on the effect of a treatment on the population.
1-hop neighbor's text information: Causal inference from indirect experiments. : Indirect experiments are studies in which randomized control is replaced by randomized encouragement, that is, subjects are encouraged, rather than forced to receive treatment programs. The purpose of this paper is to bring to the attention of experimental researchers simple mathematical results that enable us to assess, from indirect experiments, the strength with which causal influences operate among variables of interest. The results reveal that despite the laxity of the encouraging instrument, indirect experimentation can yield significant and sometimes accurate information on the impact of a program on the population as a whole, as well as on the particular individuals who participated in the program.
Target text information: "Causal diagrams for experimental research," : The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects from nonexperimental data. If so the diagrams can be queried to produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional observations or auxiliary experiments from which the desired inferences can be obtained. Key words: Causal inference, graph models, structural equations, treatment effect.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 2,464 | test |
1-hop neighbor's text information: Possible world partition sequences: A unifying framework for uncertain reasoning. : When we work with information from multiple sources, the formalism each employs to handle uncertainty may not be uniform. In order to be able to combine these knowledge bases of different formats, we need to first establish a common basis for characterizing and evaluating the different formalisms, and provide a semantics for the combined mechanism. A common framework can provide an infrastructure for building an integrated system, and is essential if we are to understand its behavior. We present a unifying framework based on an ordered partition of possible worlds called partition sequences, which corresponds to our intuitive notion of biasing towards certain possible scenarios when we are uncertain of the actual situation. We show that some of the existing formalisms, namely, default logic, autoepistemic logic, probabilistic conditioning and thresholding (generalized conditioning), and possibility theory can be incorporated into this general framework.
1-hop neighbor's text information: Uncertain inferences and uncertain conclusions. : Uncertainty may be taken to characterize inferences, their conclusions, their premises or all three. Under some treatments of uncertainty, the inference itself is never characterized by uncertainty. We explore both the significance of uncertainty in the premises and in the conclusion of an argument that involves uncertainty. We argue that for uncertainty to characterize the conclusion of an inference is natural, but that there is an interplay between uncertainty in the premises and uncertainty in the procedure of argument itself. We show that it is possible in principle to incorporate all uncertainty in the premises, rendering uncertainty arguments deductively valid. But we then argue (1) that this does not reflect human argument, (2) that it is computationally costly, and (3) that the gain in simplicity obtained by allowing uncertainty in inference can sometimes outweigh the loss of flexibility it entails. keywords: uncertainty, inference, logic, argument, decision, premises.
Target text information: Sequential Thresholds: Context Sensitive Default Extensions: Default logic encounters some conceptual difficulties in representing common sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context which is continuously evolving during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic which makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instantiations pertaining to default logic and sequential thresholding. This provides a link between the two mechanisms as well as a way to integrate the two that can be beneficial to both.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 258 | test |
1-hop neighbor's text information: The Estimation of Probabilities in Attribute Selection Measures for Decision Structure Induction in Proceeding of the European Summer School on Machine Learning, : In this paper we analyze two well-known measures for attribute selection in decision tree induction, informativity and gini index. In particular, we are interested in the influence of different methods for estimating probabilities on these two measures. The results of experiments show that different measures, which are obtained by different probability estimation methods, determine the preferential order of attributes in a given node. Therefore, they determine the structure of a constructed decision tree. This feature can be very beneficial, especially in real-world applications where several different trees are often required.
1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990.
Target text information: Learning Decision Trees from Decision Rules:: A method and initial results from a comparative study ABSTRACT A standard approach to determining decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision making situations. Such problems arise, for example, when an attribute assigned to some node cannot be measured, or there is a significant change in the costs of measuring attributes or in the frequency distribution of events from different decision classes. An attractive approach to resolving this problem is to learn and store knowledge in the form of decision rules, and to generate from them, whenever needed, a decision tree that is most suitable in a given situation. An additional advantage of such an approach is that it facilitates building compact decision trees, which can be much simpler than the logically equivalent conventional decision trees (by compact trees are meant decision trees that may contain branches assigned a set of values, and nodes assigned derived attributes, i.e., attributes that are logical or mathematical functions of the original ones). The paper describes an efficient method, AQDT-1, that takes decision rules generated by an AQ-type learning system (AQ15 or AQ17), and builds from them a decision tree optimizing a given optimality criterion. The method can work in two modes: the standard mode, which produces conventional decision trees, and compact mode, which produces compact decision trees. The preliminary experiments with AQDT-1 have shown that the decision trees generated by it from decision rules (conventional and compact) have outperformed those generated from examples by the well-known C4.5 program both in terms of their simplicity and their predictive accuracy.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 0 | Rule Learning | cora | 844 | test |
1-hop neighbor's text information: "Neural networks with quadratic VC dimension," : This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles a long-standing open question, namely whether the well-known O(w log w) bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid gen eralization are discussed.
1-hop neighbor's text information: Partition-based uniform error bounds, : This paper develops probabilistic bounds on out-of-sample error rates for several classifiers using a single set of in-sample data. The bounds are based on probabilities over partitions of the union of in-sample and out-of-sample data into in-sample and out-of-sample data sets. The bounds apply when in-sample and out-of-sample data are drawn from the same distribution. Partition-based bounds are stronger than VC-type bounds, but they require more computation.
1-hop neighbor's text information: Alternative error bounds for the classifier chosen by early stopping, :
Target text information: Similar classifiers and VC error bounds. : We improve error bounds based on VC analysis for classes with sets of similar classifiers. We apply the new error bounds to separating planes and artificial neural networks. Keywords: machine learning, learning theory, generalization, Vapnik-Chervonenkis, separating planes, neural networks.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 2,497 | test |
1-hop neighbor's text information: "Extracting tree-structured representations of trained networks," : A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.
1-hop neighbor's text information: Learning from bad data. : The data describing resolutions to telephone network local loop "troubles," from which we wish to learn rules for dispatching technicians, are notoriously unreliable. Anecdotes abound detailing reasons why a resolution entered by a technician would not be valid, ranging from sympathy to fear to ignorance to negligence to management pressure. In this paper, we describe four different approaches to dealing with the problem of "bad" data in order first to determine whether machine learning has promise in this domain, and then to determine how well machine learning might perform. We then offer evidence that machine learning can help to build a dispatching method that will perform better than the system currently in place.
1-hop neighbor's text information: Generating accurate and diverse members of a neural-network ensemble. : Neural-network ensembles have been shown to be very accurate classification techniques. Previous work has shown that an effective ensemble should consist of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well. Most existing techniques, however, only indirectly address the problem of creating such a set of networks. In this paper we present a technique called Addemup that uses genetic algorithms to directly search for an accurate and diverse set of trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are as accurate as possible while disagreeing with each other as much as possible. Experiments on three DNA problems show that Addemup is able to generate a set of trained networks that is more accurate than several existing approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble.
Target text information: Using Neural Networks to Automatically Refine Expert System Knowledge Bases: Experiments in the NYNEX MAX Domain: In this paper we describe our study of applying knowledge-based neural networks to the problem of diagnosing faults in local telephone loops. Currently, NYNEX uses an expert system called MAX to aid human experts in diagnosing these faults; however, having an effective learning algorithm in place of MAX would allow easy portability between different maintenance centers, and easy updating when the phone equipment changes. We find that (i) machine learning algorithms have better accuracy than MAX, (ii) neural networks perform better than decision trees, (iii) neural network ensembles perform better than standard neural networks, (iv) knowledge-based neural networks perform better than standard neural networks, and (v) an ensemble of knowledge-based neural networks performs the best.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 138 | val |
1-hop neighbor's text information: Reinforcement Learning with Modular Neural Networks for Control. : Reinforcement learning methods can be applied to control problems with the objective of optimizing the value of a function over time. They have been used to train single neural networks that learn solutions to whole tasks. Jacobs and Jordan [5] have shown that a set of expert networks combined via a gating network can more quickly learn tasks that can be decomposed. Even the decomposition can be learned. Inspired by Boyan's work of modular neural networks for learning with temporal-difference methods [4], we modify the reinforcement learning algorithm called Q-Learning to train a modular neural network to solve a control problem. The resulting algorithm is demonstrated on the classical pole-balancing problem. The advantage of such a method is that it makes it possible to deal with complex dynamic control problem effectively by using task decomposition and competitive learning.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. :
Target text information: Learning to Play Games from Experience: An Application of Artificial Neural Networks and Temporal Difference Learning. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,573 | test |
1-hop neighbor's text information: A new learning algorithm for blind signal separation. : A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural network like model. The validity of the new learning algorithm is verified by computer simulations.
1-hop neighbor's text information: Learning linear, sparse, factorial codes. : In previous work (Olshausen & Field 1996), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed. This report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. This research is sponsored by an Individual National Research Service Award to B.A.O. (NIMH F32-MH11062) and by a grant from the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program) to CBCL.
1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. Please send comments to tony@salk.edu. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523.
Target text information: Analyzing Hyperspectral Data with Independent Component Analysis: Hyperspectral image sensors provide images with a large number of contiguous spectral channels per pixel and enable information about different materials within a pixel to be obtained. The problem of spectrally unmixing materials may be viewed as a specific case of the blind source separation problem where data consists of mixed signals (in this case minerals) and the goal is to determine the contribution of each mineral to the mix without prior knowledge of the minerals in the mix. The technique of Independent Component Analysis (ICA) assumes that the spectral components are close to statistically independent and provides an unsupervised method for blind source separation. We introduce contextual ICA in the context of hyperspectral data analysis and apply the method to mineral data from synthetically mixed minerals and real image signatures.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 319 | test |
1-hop neighbor's text information: Constraining of weights using regularities. : In this paper we study how global optimization methods (like genetic algorithms) can be used to train neural networks. We introduce the notion of regularity, for studying properties of the error function that expand the search space in an artificial way. Regularities are used to generate constraints on the weights of the network. In order to find a satisfiable set of constraints we use a constraint logic programming system. Then the training of the network becomes a constrained optimization problem. We also relate the notion of regularity to so-called network transformations.
1-hop neighbor's text information: Evolutionary training of clp-constrained neural networks. : The paper is concerned with the integration of constraint logic programming systems (CLP) with systems based on genetic algorithms (GA). The resulting framework is tailored for applications that require a first phase in which a number of constraints need to be generated, and a second phase in which an optimal solution satisfying these constraints is produced. The first phase is carried out by the CLP and the second one by the GA. We present a specific framework where ECLiPSe (ECRC Common Logic Programming System) and GENOCOP (GEnetic algorithm for Numerical Optimization for COnstrained Problems) are integrated in a framework called CoCo (COmputational intelligence plus COnstraint logic programming). The CoCo system is applied to the training problem for neural networks. We consider constrained networks, e.g. neural networks with shared weights, constraints on the weights (for example, domain constraints for hardware implementation), etc. Then ECLiPSe is used to generate the chromosome representation together with other constraints which ensure, in most cases, that each network is specified by exactly one chromosome. Thus the problem becomes a constrained optimization problem, where the optimization criterion is to optimize the error of the network, and GENOCOP is used to find an optimal solution. Note: The work of the second author was partially supported by SION, a department of the NWO, the National Foundation for Scientific Research. This work has been carried out while the third author was visiting CWI, Amsterdam, and the fourth author was visiting Leiden University.
Target text information: Forward-Tracking: A Technique for Searching Beyond Failure: In many applications, such as decision support, negotiation, planning, scheduling, etc., one needs to express requirements that can only be partially satisfied. In order to express such requirements, we propose a technique called forward-tracking. Intuitively, forward-tracking is a kind of dual of chronological back-tracking: if a program globally fails to find a solution, then a new execution is started from a program point and a state `forward' in the computation tree. This search technique is applied to constraint logic programming, obtaining a powerful extension that preserves all the useful properties of the original scheme. We report on the successful practical application of forward-tracking to the evolutionary training of (constrained) neural networks.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 653 | val |
1-hop neighbor's text information: Learning active classifiers. : Many classification algorithms are "passive", in that they assign a class-label to each instance based only on the description given, even if that description is incomplete. In contrast, an active classifier can, at some cost, obtain the values of missing attributes, before deciding upon a class label. The expected utility of using an active classifier depends on both the cost required to obtain the additional attribute values and the penalty incurred if it outputs the wrong classification. This paper considers the problem of learning near-optimal active classifiers, using a variant of the probably-approximately-correct (PAC) model. After defining the framework, which is perhaps the main contribution of this paper, we describe a situation where this task can be achieved efficiently, but then show that the task is often intractable.
1-hop neighbor's text information: A systematic description of greedy optimisation algorithms for cost sensitive generalisation: This paper defines a class of problems involving combinations of induction and (cost) optimisation. A framework is presented that systematically describes problems that involve construction of decision trees or rules, optimising accuracy as well as measurement- and misclassification costs. It does not present any new algorithms but shows how this framework can be used to configure greedy algorithms for constructing such trees or rules. The framework covers a number of existing algorithms. Moreover, the framework can also be used to define algorithm configurations with new functionalities, as expressed in their evaluation functions.
1-hop neighbor's text information: Boosting Trees for Cost-Sensitive Classifications: This paper explores two boosting techniques for cost-sensitive tree classification in the situation where misclassification costs change very often. Ideally, one would like to have only one induction, and use the induced model for different misclassification costs. Thus, it demands robustness of the induced model against cost changes. Combining multiple trees gives robust predictions against this change. We demonstrate that ordinary boosting combined with the minimum expected cost criterion to select the prediction class is a good solution under this situation. We also introduce a variant of the ordinary boosting procedure which utilizes the cost information during training. We show that the proposed technique performs better than the ordinary boosting in terms of misclassification cost. However, this technique requires to induce a set of new trees every time the cost changes. Our empirical investigation also reveals some interesting behavior of boosting decision trees for cost-sensitive classification.
Target text information: Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm. : This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification (EG2, CS-ID3, and IDX) and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET's search in bias space and discovers a way to improve the search.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 2,136 | test |
1-hop neighbor's text information: Learning logical definitions from relations. :
1-hop neighbor's text information: a stochastic approach to Inductive Logic Programming. : Current systems in the field of Inductive Logic Programming (ILP) use, primarily for the sake of efficiency, heuristically guided search techniques. Such greedy algorithms suffer from the local optimization problem. The present paper describes a system named SFOIL that tries to alleviate this problem by using a stochastic search method, based on a generalization of simulated annealing, called the Markovian neural network. Various tests were performed on benchmark and real-world domains. The results show both the advantages and weaknesses of the stochastic approach.
1-hop neighbor's text information: Learning Trees and Rules with Set-valued Features. : In most learning systems examples are represented as fixed-length "feature vectors", the components of which are either real numbers or nominal values. We propose an extension of the feature-vector representation that allows the value of a feature to be a set of strings; for instance, to represent a small white and black dog with the nominal features size and species and the set-valued feature color, one might use a feature vector with size=small, species=canis-familiaris and color={white,black}. Since we make no assumptions about the number of possible set elements, this extension of the traditional feature-vector representation is closely connected to Blum's "infinite attribute" representation. We argue that many decision tree and rule learning algorithms can be easily extended to set-valued features. We also show by example that many real-world learning problems can be efficiently and naturally represented with set-valued features; in particular, text categorization problems and problems that arise in propositionalizing first-order representations lend themselves to set-valued features.
Target text information: Stochastic propositionalization of non-determinate background knowledge. : It is a well-known fact that propositional learning algorithms require "good" features to perform well in practice. So a major step in data engineering for inductive learning is the construction of good features by domain experts. These features often represent properties of structured objects, where a property typically is the occurrence of a certain substructure having certain properties. To partly automate the process of "feature engineering", we devised an algorithm that searches for features which are defined by such substructures. The algorithm stochastically conducts a top-down search for first-order clauses, where each clause represents a binary feature. It differs from existing algorithms in that its search is not class-blind, and that it is capable of considering clauses ("context") of almost arbitrary length (size). Preliminary experiments are favorable, and support the view that this approach is promising.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 0 | Rule Learning | cora | 0 | test |
1-hop neighbor's text information: Finding opponents worth beating: Methods for competitive co-evolution. : We consider "competitive coevolution," in which fitness is based on direct competition among individuals selected from two independently evolving populations of "hosts" and "parasites." Competitive coevolution can lead to an "arms race," in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. "Competitive fitness sharing" changes the way fitness is measured, "shared sampling" provides a method for selecting a strong, diverse set of parasites, and the "hall of fame" encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods, and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift.
1-hop neighbor's text information: "Using genetic algorithms to explore pattern recognition in the immune system," : We describe an immune system model based on a universe of binary strings. The model is directed at understanding the pattern recognition processes and learning that take place at both the individual and species levels in the immune system. The genetic algorithm (GA) is a central component of our model. In the paper we study the behavior of the GA on two pattern recognition problems that are relevant to natural immune systems. Finally, we compare our model with explicit fitness sharing techniques for genetic algorithms, and show that our model implements a form of implicit fitness sharing.
1-hop neighbor's text information: "A Coevolutionary Approach to Learning Sequential Decision Rules", : We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emergent problem decompositions.
Target text information: Automatic Modularization by Speciation: Real-world problems are often too difficult to be solved by a single monolithic system. There are many examples of natural and artificial systems which show that a modular approach can reduce the total complexity of the system while solving a difficult problem satisfactorily. The success of modular artificial neural networks in speech and image processing is a typical example. However, designing a modular system is a difficult task. It relies heavily on human experts and prior knowledge about the problem. There is no systematic and automatic way to form a modular system for a problem. This paper proposes a novel evolutionary learning approach to designing a modular system automatically, without human intervention. Our starting point is speciation, using a technique based on fitness sharing. While speciation in genetic algorithms is not new, no effort has been made towards using a speciated population as a complete modular system. We harness the specialized expertise in the species of an entire population, rather than a single individual, by introducing a gating algorithm. We demonstrate our approach to automatic modularization by improving co-evolutionary game learning. Following earlier researchers, we learn to play iterated prisoner's dilemma. We review some problems of earlier co-evolutionary learning, and explain their poor generalization ability and sudden mass extinctions. The generalization ability of our approach is significantly better than past efforts. Using the specialized expertise of the entire speciated population through a gating algorithm, instead of the best individual, is the main contributor to this improvement.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 558 | test |
1-hop neighbor's text information: Bayes factors and model uncertainty. : Technical Report no. 255 Department of Statistics, University of Washington August 1993; Revised March 1994
Target text information: Covariate Selection in Hierarchical Models of Hospital Admission Counts: A Bayes Factor Approach 1: TECHNICAL REPORT No. 268 Department of Statistics, GN-22 University of Washington Seattle, Washington 98195 USA 1 Susan L. Rosenkranz is Pew Health Policy Postdoctoral Fellow at the Institute for Health Policy Studies, Box 0936, University of California at San Francisco, San Francisco, CA 94143, and Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. Rosenkranz's research was supported by the National Research Service Award 5T32CA 09168-17 from the National Cancer Institute. The authors are grateful to Paula Diehr and Kevin Cain for helpful discussions.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 139 | test |
1-hop neighbor's text information: Unsupervised learning with the soft-means algorithm. : This note describes a useful adaptation of the `peak seeking' regime used in unsupervised learning processes such as competitive learning and `k-means'. The adaptation enables the learning to capture low-order probability effects and thus to more fully capture the probabilistic structure of the training data.
Target text information: Learning Where To Go without Knowing Where That Is: The Acquisition of a Non-reactive Mobot: In the path-imitation task, one agent traces out a path through a second agent's sensory field. The second agent then has to reproduce that path exactly, i.e. move through the sequence of locations visited by the first agent. This is a non-trivial behaviour whose acquisition might be expected to involve special-purpose (i.e., strongly biased) learning machinery. However, the present paper shows this is not the case. The behaviour can be acquired using a fairly primitive learning regime provided that the agent's environment can be made to pass through a specific sequence of dynamic states.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 112 | test |
1-hop neighbor's text information: "Evolving Visual Routines," : Traditional machine vision assumes that the vision system recovers a complete, labeled description of the world [Marr, 1982]. Recently, several researchers have criticized this model and proposed an alternative model which considers perception as a distributed collection of task-specific, task-driven visual routines [Aloimonos, 1993; Ullman, 1987]. Some of these researchers have argued that in natural living systems these visual routines are the product of natural selection [Ramachandran, 1985]. So far, researchers have hand-coded task-specific visual routines for actual implementations (e.g. [Chapman, 1993]). In this paper we propose an alternative approach in which visual routines for simple tasks are evolved using an artificial evolution approach. We present results from a series of runs on actual camera images, in which simple routines were evolved using Genetic Programming techniques [Koza, 1992]. The results obtained are promising: the evolved routines are able to correctly classify up to 93% of the images, which is better than the best algorithm we were able to write by hand.
1-hop neighbor's text information: A distributed, component-based genetic programming system in C++. Research Note RN/96/2, : GP-COM is a genetic programming system based around this conceptual structure. Components in the system are loosely coupled and are defined only by their interfaces to other components. Components may be easily swapped to test potential improvements to the process, or to apply the system to a new problem domain. The system consists of a collection of components glued together by high-level scripts.
Target text information: Evolving Edge Detectors with Genetic Programming. : We apply genetic programming techniques to the production of high-performance edge detectors for 1-D signals and image profiles.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 531 | val |
1-hop neighbor's text information: On the Computational Power of Neural Nets. : Report SYCON-91-11 ABSTRACT This paper deals with the simulation of Turing machines by neural networks. Such networks are made up of interconnections of synchronously evolving processors, each of which updates its state according to a "sigmoidal" linear combination of the previous states of all units. The main result states that one may simulate all Turing machines by nets, in linear time. In particular, it is possible to give a net made up of about 1,000 processors which computes a universal partial-recursive function. (This is an update of Report SYCON-91-08; new results include the simulation in linear time of binary-tape machines, as opposed to the unary alphabets used in the previous version.)
1-hop neighbor's text information: "State observability in recurrent neural networks," : Report SYCON-92-07rev ABSTRACT We obtain a characterization of observability for a class of nonlinear systems which appear in neural networks research.
1-hop neighbor's text information: "Linear systems with sign-observations," : This paper deals with systems that are obtained from linear time-invariant continuous-or discrete-time devices followed by a function that just provides the sign of each output. Such systems appear naturally in the study of quantized observations as well as in signal processing and neural network theory. Results are given on observability, minimal realizations, and other system-theoretic concepts. Certain major differences exist with the linear case, and other results generalize in a surprisingly straightforward manner.
Target text information: Interconnected Automata and Linear Systems: A Theoretical Framework in Discrete-Time In Hybrid Systems III: Verification: This paper summarizes the definitions and several of the main results of an approach to hybrid systems, which combines finite automata and linear systems, developed by the author in the early 1980s. Some related more recent results are briefly mentioned as well.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 224 | val |
1-hop neighbor's text information: "Learning to Segment Images Using Dynamic Feature Binding," : Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to which object they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it also has the capability of finding nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase locking related features. MAGIC's training procedure is a generalization of recurrent back propagation to complex-valued units.
1-hop neighbor's text information: "Learning Feature-based Semantics with Simple Recurrent Networks," : The paper investigates the possibilities for using simple recurrent networks as transducers which map sequential natural language input into non-sequential feature-based semantics. The networks perform well on sentences containing a single main predicate (encoded by transitive verbs or prepositions) applied to multiple-feature objects (encoded as noun-phrases with adjectival modifiers), and shows robustness against ungrammatical inputs. A second set of experiments deals with sentences containing embedded structures. Here the network is able to process multiple levels of sentence-final embeddings but only one level of center-embedding. This turns out to be a consequence of the network's inability to retain information that is not reflected in the outputs over intermediate phases of processing. Two extensions to Elman's [9] original recurrent network architecture are introduced.
1-hop neighbor's text information: Best-first model merging for dynamic learning and recognition. : Best-first model merging is a general technique for dynamically choosing the structure of a neural or related architecture while avoiding overfitting. It is applicable to both learning and recognition tasks and often generalizes significantly better than fixed structures. We demonstrate the approach applied to the tasks of choosing radial basis functions for function learning, choosing local affine models for curve and constraint surface modelling, and choosing the structure of a balltree or bumptree to maximize efficiency of access.
Target text information: "L0 - The First Four Years." Abstract: A summary of the progress and plans of:
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 723 | test |
1-hop neighbor's text information: Learning in design: From Characterizing Dimensions to Working Systems: The application of machine learning (ML) to solve practical problems is complex. Only recently, due to the increased promise of ML in solving real problems and the experienced difficulty of their use, has this issue started to attract attention. This difficulty arises from the complexity of learning problems and the large variety of available techniques. In order to understand this complexity and begin to overcome it, it is important to construct a characterization of learning situations. Building on previous work that dealt with the practical use of ML, a set of dimensions is developed, contrasted with another recent proposal, and illustrated with a project on the development of a decision-support system for marine propeller design. The general research opportunities that emerge from the development of the dimensions are discussed. Leading toward working systems, a simple model is presented for setting priorities in research and in selecting learning tasks within large projects. Central to the development of the concepts discussed in this paper is their use in future projects and the recording of their successes, limitations, and failures.
1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990.
Target text information: "New roles for machine learning in design," : Research on machine learning in design has concentrated on the use and development of techniques that can solve simple well-defined problems. Invariably, this effort, while important at the early stages of the development of the field, cannot scale up to address real design problems since all existing techniques are based on simplifying assumptions that do not hold for real design. In particular they do not address the dependence on context and multiple, often conflicting, interests that are constitutive of design. This paper analyzes the present situation and criticizes a number of prevailing views. Subsequently, the paper offers an alternative approach whose goal is to advance the use of machine learning in design practice. The approach is partially integrated into a modeling system called n-dim. The use of machine learning in n-dim is presented and open research issues are outlined.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 2,190 | test |
1-hop neighbor's text information: Rearrangement of receptive field topography after intracortical and peripheral stimulation: The role of plasticity in: Intracortical microstimulation (ICMS) of a single site in the somatosensory cortex of rats and monkeys for 2-6 hours produces a large increase in the number of neurons responsive to the skin region corresponding to the ICMS-site receptive field (RF), with very little effect on the position and size of the ICMS-site RF, and the response evoked at the ICMS site by tactile stimulation (Recanzone et al., 1992b). Large changes in RF topography are observed following several weeks of repetitive stimulation of a restricted skin region in monkeys (Jenkins et al., 1990; Recanzone et al., 1992acde). Repetitive stimulation of a localized skin region in monkeys produced by training the monkeys in a tactile frequency discrimination task improves their performance (Recanzone et al., 1992a). It has been suggested that these changes in RF topography are caused by competitive learning in excitatory pathways (Grajski & Merzenich, 1990; Jenkins et al., 1990; Recanzone et al., 1992abcde). ICMS almost simultaneously excites excitatory and inhibitory terminals and excitatory and inhibitory cortical neurons within a few microns of the stimulating electrode. Thus, this paper investigates the implications of the possibility that lateral inhibitory pathways too may undergo synaptic plasticity during ICMS. Lateral inhibitory pathways may also undergo synaptic plasticity in adult animals during peripheral conditioning. The "EXIN" (afferent excitatory and lateral inhibitory) synaptic plasticity rules
1-hop neighbor's text information: Plasticity in cortical neuron properties: Modeling the effects of an NMDA antagonist and a GABA: Infusion of a GABA agonist (Reiter & Stryker, 1988) and infusion of an NMDA receptor antagonist (Bear et al., 1990), in the primary visual cortex of kittens during monocular deprivation, shifts ocular dominance toward the closed eye, in the cortical region near the infusion site. This reverse ocular dominance shift has been previously modeled by variants of a covariance synaptic plasticity rule (Bear et al., 1990; Clothiaux et al., 1991; Miller et al., 1989; Reiter & Stryker, 1988). Kasamatsu et al. (1997, 1998) showed that infusion of an NMDA receptor antagonist in adult cat primary visual cortex changes ocular dominance distribution, reduces binocularity, and reduces orientation and direction selectivity. This paper presents a novel account of the effects of these pharmacological treatments, based on the EXIN synaptic plasticity rules (Marshall, 1995), which include both an instar afferent excitatory and an outstar lateral inhibitory rule. Functionally, the EXIN plasticity rules enhance the efficiency, discrimination, and context-sensitivity of a neural network's representation of perceptual patterns (Marshall, 1995; Marshall & Gupta, 1998). The EXIN model decreases lateral inhibition from neurons outside the infusion site (control regions) to neurons inside the infusion region, during monocular deprivation. In the model, plasticity in afferent pathways to neurons affected by the pharmacological treatments is assumed to be blocked, as opposed to previous models (Bear et al., 1990; Miller et al., 1989; Reiter & Stryker, 1988), in which afferent pathways from the open eye to neurons in the infusion region are weakened.
The proposed model is consistent with results suggesting that long-term plasticity can be blocked by NMDA antagonists or by postsynaptic hyperpolarization (Bear et al., 1990; Dudek & Bear, 1992; Goda & Stevens, 1996; Kirkwood et al., 1993). Since the role of plasticity in lateral inhibitory pathways in producing cortical plasticity has not received much attention, several predictions are made based on the EXIN lateral inhibitory plasticity rule.
1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. Please send comments to tony@salk.edu. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523.
Target text information: Generalization and exclusive allocation of credit in unsupervised category learning. Network: : Acknowledgements: This research was supported in part by the Office of Naval Research (Cognitive and Neural Sciences, N00014-93-1-0208) and by the Whitaker Foundation (Special Opportunity Grant). We thank George Kalarickal, Charles Schmitt, William Ross, and Douglas Kelly for valuable discussions.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 2,516 | val |
1-hop neighbor's text information: GENE REGULATION AND BIOLOGICAL DEVELOPMENT IN NEURAL NETWORKS: AN EXPLORATORY MODEL:
1-hop neighbor's text information: An Artificial Life Model for Investigating the Evolution of Modularity: To investigate the issue of how modularity emerges in nature, we present an Artificial Life model that allow us to reproduce on the computer both the organisms (i.e., robots that have a genotype, a nervous system, and sensory and motor organs) and the environment in which organisms live, behave and reproduce. In our simulations neural networks are evolutionarily trained to control a mobile robot designed to keep an arena clear by picking up trash objects and releasing them outside the arena. During the evolutionary process modular neural networks, which control the robot's behavior, emerge as a result of genetic duplications. Preliminary simulation results show that duplication-based modular architecture outperforms the nonmod-ular architecture, which represents the starting architecture in our simulations. Moreover, an interaction between mutation and duplication rate emerges from our results. Our future goal is to use this model in order to explore the relationship between the evolutionary emergence of modularity and the phenomenon of gene duplication.
Target text information: "Discontinuity in evolution: how different levels of organization imply pre-adaptation", :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 2,320 | test |
1-hop neighbor's text information: Design and Implementation of a Replay Framework based on a Partial order Planner. : In this paper we describe the design and implementation of the derivation replay framework, dersnlp+ebl (Derivational snlp+ebl), which is based within a partial order planner. dersnlp+ebl replays previous plan derivations by first repeating its earlier decisions in the context of the new problem situation, then extending the replayed path to obtain a complete solution for the new problem. When the replayed path cannot be extended into a new solution, explanation-based learning (ebl) techniques are employed to identify the features of the new problem which prevent this extension. These features are then added as censors on the retrieval of the stored case. To keep retrieval costs low, dersnlp+ebl normally stores plan derivations for individual goals, and replays one or more of these derivations in solving multi-goal problems. Cases covering multiple goals are stored only when subplans for individual goals cannot be successfully merged. The aim in constructing the case library is to predict these goal interactions and to store a multi-goal case for each set of negatively interacting goals. We provide empirical results demonstrating the effectiveness of dersnlp+ebl in improving planning performance on randomly-generated problems drawn from a complex domain.
1-hop neighbor's text information: A comparative utility analysis of case-based reasoning and control-rule learning systems. : The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems. 1
1-hop neighbor's text information: Derivation replay for partial-order planning. : Derivation replay was first proposed by Carbonell as a method of transferring guidance from a previous problem-solving episode to a new one. Subsequent implementations have used state-space planning as the underlying methodology. This paper is motivated by the acknowledged superiority of partial-order (PO) planners in plan generation, and is an attempt to bring derivation replay into the realm of partial-order planning. Here we develop DerSNLP, a framework for doing replay in SNLP, a partial-order plan-space planner, and analyze its relative effectiveness. We will argue that the decoupling of planning (derivation) order and the execution order of plan steps, provided by partial-order planners, enables DerSNLP to exploit the guidance of previous cases in a more efficient and straightforward fashion. We validate our hypothesis through empirical comparisons between DerSNLP and two replay systems based on state-space planners.
Target text information: An explanation-based approach to improve retrieval in case-based planning. : When a case-based planner is retrieving a previous case in preparation for solving a new similar problem, it is often not aware of the implicit features of the new problem situation which determine if a particular case may be successfully applied. This means that some cases may be retrieved in error in that the case may fail to improve the planner's performance. Retrieval may be incrementally improved by detecting and explaining these failures as they occur. In this paper we provide a definition of case failure for the planner, dersnlp (derivation replay in snlp), which solves new problems by replaying its previous plan derivations. We provide EBL (explanation-based learning) techniques for detecting and constructing the reasons for the failure. We also describe how to organize a case library so as to incorporate this failure information as it is produced. Finally we present an empirical study which demonstrates the effectiveness of this approach in improving the performance of dersnlp.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 1,433 | test |
1-hop neighbor's text information: "An analytical framework for local feedforward networks," : Although feedforward neural networks are well suited to function approximation, in some applications networks experience problems when learning a desired function. One problem is interference which occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To understand these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed that incorporates not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal multi-layer perceptron (MLP) networks that employ the back-prop learning algorithm, we address a familiar misconception that sigmoidal networks are inherently non-local by demonstrating that given a sufficiently large number of adjustable parameters, sigmoidal MLPs can be made arbitrarily local while retaining the ability to represent any continuous function on a compact domain.
1-hop neighbor's text information: "Space-frequency localized basis function networks for nonlinear system estimation and control," : Stable neural network control and estimation may be viewed formally as a merging of concepts from nonlinear dynamic systems theory with tools from multivariate approximation theory. This paper extends earlier results on adaptive control and estimation of nonlinear systems using gaussian radial basis functions to the on-line generation of irregularly sampled networks, using tools from multiresolution analysis and wavelet theory. This yields much more compact and efficient system representations while preserving global closed-loop stability. Approximation models employing basis functions that are localized in both space and spatial frequency admit a measure of the approximated function's spatial frequency content that is not directly dependent on reconstruction error. As a result, these models afford a means of adaptively selecting basis functions according to the local spatial frequency content of the approximated function. An algorithm for stable, on-line adaptation of output weights simultaneously with node configuration in a class of non-parametric models with wavelet basis functions is presented. An asymptotic bound on the error in the network's reconstruction is derived and shown to be dependent solely on the minimum approximation error associated with the steady state node configuration. In addition, prior bounds on the temporal bandwidth of the system to be identified or controlled are used to develop a criterion for on-line selection of radial and ridge wavelet basis functions, thus reducing the rate of increase in network's size with the dimension of the state vector. Experimental results obtained by using the network to predict the path of an unknown light bluff object thrown through air, in an active-vision based robotic catching system, are given to illustrate the network's performance in a simple real-time application.
Target text information: Adaptive Wavelet Control of Nonlinear Systems: This paper considers the design and analysis of adaptive wavelet control algorithms for uncertain nonlinear dynamical systems. The Lyapunov synthesis approach is used to develop a state-feedback adaptive control scheme based on nonlinearly parametrized wavelet network models. Semi-global stability results are obtained under the key assumption that the system uncertainty satisfies a "matching" condition. The localization properties of adaptive networks are discussed and formal definitions of interference and localization measures are proposed.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 325 | test |
1-hop neighbor's text information: Finding opponents worth beating: Methods for competitive co-evolution. : We consider "competitive coevolution," in which fitness is based on direct competition among individuals selected from two independently evolving populations of "hosts" and "parasites." Competitive coevolution can lead to an "arms race," in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. "Competitive fitness sharing" changes the way fitness is measured, "shared sampling" provides a method for selecting a strong, diverse set of parasites, and the "hall of fame" encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods, and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift.
1-hop neighbor's text information: Competitive environments evolve better solutions for complex tasks. :
1-hop neighbor's text information: A Genome Compiler for High Performance Genetic Programming: Genetic Programming is very computationally expensive. For most applications, the vast majority of time is spent evaluating candidate solutions, so it is desirable to make individual evaluation as efficient as possible. We describe a genome compiler which compiles s-expressions to machine code, resulting in significant speedup of individual evaluations over standard GP systems. Based on performance results with symbolic regression, we show that the execution of the genome compiler system is comparable to the fastest alternative GP systems. We also demonstrate the utility of compilation on a real-world problem, lossless image compression. A somewhat surprising result is that in our test domains, the overhead of compilation is negligible.
Target text information: Massively parallel genetic programming. : As the field of Genetic Programming (GP) matures and its breadth of application increases, the need for parallel implementations becomes absolutely necessary. The transputer-based system recently presented by Koza ([8]) is one of the rare such parallel implementations. Until today, no implementation has been proposed for parallel GP using a SIMD architecture, except for a data-parallel approach ([16]), although others have exploited workstation farms and pipelined supercomputers. One reason is certainly the apparent difficulty of dealing with the parallel evaluation of different S-expressions when only a single instruction can be executed at the same time on every processor. The aim of this paper is to present such an implementation of parallel GP on a SIMD system, where each processor can efficiently evaluate a different S-expression. We have implemented this approach on a MasPar MP-2 computer, and will present some timing results. To the extent that SIMD machines, like the MasPar, are available to offer cost-effective cycles for scientific experimentation, this is a useful approach.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 2,097 | test |
1-hop neighbor's text information: "Learning structural descriptions from examples." :
1-hop neighbor's text information: An Efficient Subsumption Algorithm for Inductive Logic Programming. : In this paper we investigate the efficiency of θ-subsumption, the basic provability relation in ILP. As deciding whether D θ-subsumes C is NP-complete even if we restrict ourselves to linked Horn clauses and fix C to contain only a small constant number of literals, we investigate several restrictions of D. We first adapt the notion of determinate clauses used in ILP and show that θ-subsumption is decidable in polynomial time if D is determinate with respect to C. Secondly, we adapt the notion of k-local Horn clauses and show that θ-subsumption is efficiently computable for some reasonably small k. We then show how these results can be combined to give an efficient reasoning procedure for determinate k-local Horn clauses, an ILP problem recently suggested to be polynomially predictable by Cohen (1993) by a simple counting argument. We finally outline how the θ-reduction algorithm, an essential part of every lgg ILP-learning algorithm, can be improved by these ideas.
1-hop neighbor's text information: Learning logical definitions from relations. :
Target text information: Inductive Learning of Characteristic Concept Descriptions from Small Sets of Classified Examples. : This paper deals with the problem of learning characteristic concept descriptions from examples and describes a new generalization approach implemented in the system Cola-2. The approach tries to take advantage of the information which can be induced from descriptions of unclassified objects using a conceptual clustering algorithm. Experimental results in various real-world domains strongly support the hypothesis that the new approach delivers more correct (and possibly more comprehensible) concept descriptions than existing methods, if the induced concept descriptions are also used to classify objects which belong to concepts which were not present in the training data set. This paper describes the generalization approach implemented in Cola and presents experimental results obtained with a relational and a propositional real-world data set.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 0 | Rule Learning | cora | 1,323 | test |
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction - that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Pack Kaelbling. On the complexity of solving Markov decision problems. : Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the struc ture of MDPs.
1-hop neighbor's text information: Generalization in reinforcement learning: Safely approximating the value function. : To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization.
Target text information: Tight Performance Bounds on Greedy Policies Based on Imperfect Value Functions. : Northeastern University College of Computer Science Technical Report NU-CCS-93-14 Abstract: Consider a given value function on states of a Markov decision problem, as might result from applying a reinforcement learning algorithm. Unless this value function equals the corresponding optimal value function, at some states there will be a discrepancy, which is natural to call the Bellman residual, between what the value function specifies at that state and what is obtained by a one-step lookahead along the seemingly best action at that state using the given value function to evaluate all succeeding states. This paper derives a tight bound on how far from optimal the discounted return for a greedy policy based on the given value function will be as a function of the maximum norm magnitude of this Bellman residual. A corresponding result is also obtained for value functions defined on state-action pairs, as are used in Q-learning. One significant application of these results is to problems where a function approximator is used to learn a value function, with training of the approximator based on trying to minimize the Bellman residual across states or state-action pairs. When
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 5 | Reinforcement Learning | cora | 2,184 | test |
1-hop neighbor's text information: : General convergence results for linear discriminant updates Abstract The problem of learning linear discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of quasi-additive algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers much of this class, including both Perceptron and Winnow but also many novel algorithms. Our proof introduces a generic measure of progress that seems to capture much of when and how these algorithms converge. Using this measure, we develop a simple general technique for proving mistake bounds, which we apply to the new algorithms as well as existing algorithms. When applied to known algorithms, our technique automatically produces close variants of existing proofs (and we generally obtain the known bounds, to within constants) thus showing, in a certain sense, that these seemingly diverse results are fundamentally isomorphic.
1-hop neighbor's text information: Exponentially many local minima for single neurons. : We show that for a single neuron with the logistic function as the transfer function the number of local minima of the error function based on the square loss can grow exponentially in the dimension.
Target text information: Relative loss bounds for multiclass regression problems. : We study on-line generalized linear regression with multidimensional outputs, i.e., neural networks with multiple output nodes but no hidden nodes. We allow at the final layer transfer functions such as the soft-max function that need to consider the linear activations to all the output neurons. We use distance functions of a certain kind in two completely independent roles in deriving and analyzing on-line learning algorithms for such tasks. We use one distance function to define a matching loss function for the (possibly multidimensional) transfer function, which allows us to generalize earlier results from one-dimensional to multidimensional outputs. We use another distance function as a tool for measuring progress made by the on-line updates. This shows how previously studied algorithms such as gradient descent and exponentiated gradient fit into a common framework. We evaluate the performance of the algorithms using relative loss bounds that compare the loss of the on-line algorithm to the best off-line predictor from the relevant model class, thus completely eliminating probabilistic assumptions about the data.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 1,458 | test |
1-hop neighbor's text information: Structural Regression Trees: In many real-world domains the task of machine learning algorithms is to learn a theory predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with non-determinate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems by integrating the statistical method of regression trees into ILP. SRT constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.
1-hop neighbor's text information: Search-based Class Discretization: We present a methodology that enables the use of classification algorithms on regression tasks. We implement this method in system RECLA that transforms a regression problem into a classification one and then uses an existent classification system to solve this new problem. The transformation consists of mapping a continuous variable into an ordinal variable by grouping its values into an appropriate set of intervals. We use misclassification costs as a means to reflect the implicit ordering among the ordinal values of the new variable. We describe a set of alternative discretization methods and, based on our experimental results, justify the need for a search-based approach to choose the best method. Our experimental results confirm the validity of our search-based approach to class discretization, and reveal the accuracy benefits of adding misclassification costs.
1-hop neighbor's text information: Tibshirani (1994) Combining Estimates in Regression and Classification, : We consider the problem of how to combine a collection of general regression fit vectors in order to obtain a better predictive model. The individual fits may be from subset linear regression, ridge regression, or something more complex like a neural network. We develop a general framework for this problem and examine a recent cross-validation-based proposal called "stacking" in this context. Combination methods based on the bootstrap and analytic methods are also derived and compared in a number of examples, including best subsets regression and regression trees. Finally, we apply these ideas to classification problems where the estimated combination weights can yield insight into the structure of the problem.
Target text information: Rule-based machine learning methods for function prediction. : We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 1,975 | val |
1-hop neighbor's text information: "Pattern Theoretic Learning", : This paper offers a perspective on features and pattern finding in general. This perspective is based on a robust complexity measure called Decomposed Function Cardinality. A function decomposition algorithm for minimizing this complexity measure and finding the associated features is outlined. Results from experiments with this algorithm are also summarized.
1-hop neighbor's text information: "Inductive Learning by Selection of Minimal Complexity Representations," :
Target text information: "Application of ESOP Minimization in Machine Learning and Knowledge Discovery," : This paper presents a new application of an Exclusive-Sum-Of-Products (ESOP) minimizer, EXORCISM-MV-2, to Machine Learning, and particularly, in Pattern Theory. An analysis of various logic synthesis programs has been conducted at Wright Laboratory for machine learning applications. Creating a robust and efficient Boolean minimizer for machine learning that would minimize a decomposed function cardinality (DFC) measure of functions would help to solve practical problems in application areas that are of interest to the Pattern Theory Group, especially those problems that require strongly unspecified multiple-valued-input functions with a large number of variables. For many functions, the complexity minimization of EXORCISM-MV-2 is better than that of Espresso. For small functions, they are worse than those of the Curtis-like Decomposer. However, EXORCISM is much faster, can run on problems with more variables, and significant DFC improvements have also been found. We analyze the cases when EXORCISM is worse than Espresso and propose new improvements for strongly unspecified functions.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 2,440 | test |
1-hop neighbor's text information: "Induction of Decision Trees," :
1-hop neighbor's text information: "Extracting rules from artificial neural networks with distributed representations", : Although artificial neural networks have been applied in a variety of real-world scenarios with remarkable success, they have often been criticized for exhibiting a low degree of human comprehensibility. Techniques that compile compact sets of symbolic rules out of artificial neural networks offer a promising perspective to overcome this obvious deficiency of neural network representations. This paper presents an approach to the extraction of if-then rules from artificial neural networks. Its key mechanism is validity interval analysis, which is a generic tool for extracting symbolic knowledge by propagating rule-like knowledge through Backpropagation-style neural networks. Empirical studies in a robot arm domain illustrate the appropriateness of the proposed method for extracting rules from networks with real-valued and distributed representations.
1-hop neighbor's text information: A hybrid nearest-neighbor and nearest-hyperrectangle algorithm. : Algorithms based on Nested Generalized Exemplar (NGE) theory (Salzberg, 1991) classify new data points by computing their distance to the nearest "generalized exemplar" (i.e., either a point or an axis-parallel rectangle). They combine the distance-based character of nearest neighbor (NN) classifiers with the axis-parallel rectangle representation employed in many rule-learning systems. An implementation of NGE was compared to the k-nearest neighbor (kNN) algorithm in 11 domains and found to be significantly inferior to kNN in 9 of them. Several modifications of NGE were studied to understand the cause of its poor performance. These show that its performance can be substantially improved by preventing NGE from creating overlapping rectangles, while still allowing complete nesting of rectangles. Performance can be further improved by modifying the distance metric to allow weights on each of the features (Salzberg, 1991). Best results were obtained in this study when the weights were computed using mutual information between the features and the output class. The best version of NGE developed is a batch algorithm (BNGE FW MI ) that has no user-tunable parameters. BNGE FW MI 's performance is comparable to the first-nearest neighbor algorithm (also incorporating feature weights). However, the k-nearest neighbor algorithm is still significantly superior to BNGE FW MI in 7 of the 11 domains, and inferior to it in only 2. We conclude that, even with our improvements, the NGE approach is very sensitive to the shape of the decision boundaries in classification problems. In domains where the decision boundaries are axis-parallel, the NGE approach can produce excellent generalization with interpretable hypotheses. In all domains tested, NGE algorithms require much less memory to store generalized exemplars than is required by NN algorithms.
Target text information: Constructing Fuzzy Graphs from Examples: Methods to build function approximators from example data have gained considerable interest in the past. Especially methodologies that build models that allow an interpretation have attracted attention. Most existing algorithms, however, are either complicated to use or infeasible for high-dimensional problems. This article presents an efficient and easy to use algorithm to construct fuzzy graphs from example data. The resulting fuzzy graphs are based on locally independent fuzzy rules that operate solely on selected, important attributes. This enables the application of these fuzzy graphs also to problems in high dimensional spaces. Using illustrative examples and a real world data set it is demonstrated how the resulting fuzzy graphs offer quick insights into the structure of the example data, that is, the underlying model.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 714 | test |
1-hop neighbor's text information: The Structure-Mapping Engine: Algorithms and Examples. : This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a "tool kit" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O (N 2 ). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp 1-63. For more information, please contact forbus@ils.nwu.edu
Target text information: Role of Stories 1: PO Box 600 Wellington New Zealand Tel: +64 4 471 5328 Fax: +64 4 495 5232 Internet: Tech.Reports@comp.vuw.ac.nz Technical Report CS-TR-92/4 October 1992 Abstract People often give advice by telling stories. Stories both recommend a course of action and exemplify general conditions in which that recommendation is appropriate. A computational model of advice taking using stories must address two related problems: determining the story's recommendations and appropriateness conditions, and showing that these obtain in the new situation. In this paper, we present an efficient solution to the second problem based on caching the results of the first. Our proposal has been implemented in brainstormer, a planner that takes abstract advice.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 846 | test |
1-hop neighbor's text information: (1993) Presynaptic and postsynaptic competition in models for the development of neuromuscular connections. : The development of the nervous system involves in many cases interactions on a local scale rather than the execution of a fully specified genetic blueprint. The problem is to discover the nature of these interactions and the factors on which they depend. The withdrawal of polyinnervation in developing muscle is an example where such competitive interactions play an important role. We examine the possible types of competition in formal
Target text information: The Role of Activity in Synaptic Competition at the Neuromuscular Junction: An extended version of the dual constraint model of motor end-plate morphogenesis is presented that includes activity dependent and independent competition. It is supported by a wide range of recent neurophysiological evidence that indicates a strong relationship between synaptic efficacy and survival. The computational model is justified at the molecular level and its predictions match the developmental and regenerative behaviour of real synapses.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 193 | test |
1-hop neighbor's text information: The Canonical Distortion Measure in Feature Space and 1-NN Classification: We prove that the Canonical Distortion Measure (CDM) [2, 3] is the optimal distance measure to use for 1 nearest-neighbour (1-NN) classification, and show that it reduces to squared Euclidean distance in feature space for function classes that can be expressed as linear combinations of a fixed set of features. PAC-like bounds are given on the sample-complexity required to learn the CDM. An experiment is presented in which a neural network CDM was learnt for a Japanese OCR environ ment and then used to do 1-NN classification.
1-hop neighbor's text information: Learning one more thing, : Most research on machine learning has focused on scenarios in which a learner faces a single, isolated learning task. The lifelong learning framework assumes that the learner encounters a multitude of related learning tasks over its lifetime, providing the opportunity for the transfer of knowledge among these. This paper studies lifelong learning in the context of binary classification. It presents the invariance approach, in which knowledge is transferred via a learned model of the invariances of the domain. Results on learning to recognize objects from color images demonstrate superior generalization capabilities if invariances are learned and used to bias subsequent learning.
Target text information: The Canonical Metric For Vector Quantization. : To measure the quality of a set of vector quantization points a means of measuring the distance between a random point and its quantization is required. Common metrics such as the Hamming and Euclidean metrics, while mathematically simple, are inappropriate for comparing natural signals such as speech or images. In this paper it is shown how an environment of functions on an input space X induces a canonical distortion measure (CDM) on X. The depiction canonical is justified because it is shown that optimizing the reconstruction error of X with respect to the CDM gives rise to optimal piecewise constant approximations of the functions in the environment. The CDM is calculated in closed form for several different function classes. An algorithm for training neural networks to implement the CDM is presented along with some encouraging experimental results.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,464 | test |
1-hop neighbor's text information: Selection of Distance Metrics and Feature Subsets for k-Nearest Neighbor Classifiers. :
Target text information: Static Data Association with a Terrain-Based Prior Density:
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 573 | test |
1-hop neighbor's text information: A practical Bayesian framework for backpropagation networks. : A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalisation ability This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a). This framework is due to Gull and Skilling (Gull, 1989a). and the Bayesian evidence is obtained.
1-hop neighbor's text information: Extraction of Facial Features for Recognition using Neural Networks:
Target text information: "Ensemble training: Some recent experiments with postal zip data," : Recent findings suggest that a classification scheme based on an ensemble of networks is an effective way to address overfitting. We study optimal methods for training an ensemble of networks. Some recent experiments on Postal Zip-code character data suggest that weight decay may not be an optimal method for controlling the variance of a classifier.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 946 | val |
1-hop neighbor's text information: Embedding of a sequential procedure within an evolutionary algorithm for coloring problems in graphs. :
Target text information: A DISCUSSION ON SOME DESIGN PRINCIPLES FOR EFFICIENT CROSSOVER OPERATORS FOR GRAPH COLORING PROBLEMS: A year ago, a new metaheuristic for graph coloring problems was introduced by Costa, Hertz and Dubuis. They have shown, with computer experiments, some clear indication of the benefits of this approach. Graph coloring has many applications, especially in the areas of scheduling, assignments and timetabling. The metaheuristic can be classified as a memetic algorithm since it is based on a population search in which periods of local optimization are interspersed with phases in which new configurations are created from earlier well-developed configurations or local minima of the previous iterative improvement process. The new population is created using crossover operators as in genetic algorithms. In this paper we discuss how a methodology inspired by Competitive Analysis may be relevant to the problem of designing better crossover operators. ABSTRACT: Last year a new metaheuristic for the graph coloring problem was presented by Costa, Hertz and Dubuis. They showed, with computational experiments, some clear indications of the benefits of this new technique. Graph coloring has many applications, especially in the areas of task scheduling, assignment and timetabling. The metaheuristic can be classified as a memetic algorithm, since it is based on a population search whose periods of local optimization are interleaved with phases in which new configurations are created from good configurations or local minima of previous iterations. The new population is created using crossover operations as in genetic algorithms. In this article we show how a methodology based on Competitive Analysis can be relevant to constructing crossover operations.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 503 | test |
1-hop neighbor's text information: Exemplar-based Music Structure Recognition: We tend to think of what we really know as what we can talk about, and disparage knowledge that we can't verbalize. [Dowling 1989, p. 252]
1-hop neighbor's text information: Towards a better understanding of memory-based and bayesian classifiers. : We quantify both experimentally and analytically the performance of memory-based reasoning (MBR) algorithms. To start gaining insight into the capabilities of MBR algorithms, we compare an MBR algorithm using a value difference metric to a popular Bayesian classifier. These two approaches are similar in that they both make certain independence assumptions about the data. However, whereas MBR uses specific cases to perform classification, Bayesian methods summarize the data probabilistically. We demonstrate that a particular MBR system called Pebls works comparatively well on a wide range of domains using both real and artificial data. With respect to the artificial data, we consider distributions where the concept classes are separated by functional discriminants, as well as time-series data generated by Markov models of varying complexity. Finally, we show formally that Pebls can learn (in the limit) natural concept classes that the Bayesian classifier cannot learn, and that it will attain perfect accuracy whenever
1-hop neighbor's text information: An optimal weighting criterion of case indexing for both numeric and symbolic attributes. : Indexing of cases is an important topic for Memory-Based Reasoning(MBR). One key problem is how to assign weights to attributes of cases. Although several weighting methods have been proposed, some methods cannot handle numeric attributes directly, so it is necessary to discretize numeric values by classification. Furthermore, existing methods have no theoretical background, so little can be said about optimality. We propose a new weighting method based on a statistical technique called Quantification Method II. It can handle both numeric and symbolic attributes in the same framework. Generated attribute weights are optimal in the sense that they maximize the ratio of variance between classes to variance of all cases. Experiments on several benchmark tests show that in many cases, our method obtains higher accuracies than some other weighting methods. The results also indicate that it can distinguish relevant attributes from irrelevant ones, and can tolerate noisy data.
Target text information: A Weighted Nearest Neighbor Algorithm for Learning with Symbolic Features. : In the past, nearest neighbor algorithms for learning from examples have worked best in domains in which all features had numeric values. In such domains, the examples can be treated as points and distance metrics can use standard definitions. In symbolic domains, a more sophisticated treatment of the feature space is required. We introduce a nearest neighbor algorithm for learning in domains with symbolic features. Our algorithm calculates distance tables that allow it to produce real-valued distances between instances, and attaches weights to the instances to further modify the structure of feature space. We show that this technique produces excellent classification accuracy on three problems that have been studied by machine learning researchers: predicting protein secondary structure, identifying DNA promoter sequences, and pronouncing English text. Direct experimental comparisons with the other learning algorithms show that our nearest neighbor algorithm is comparable or superior in all three domains. In addition, our algorithm has advantages in training speed, simplicity, and perspicuity. We conclude that experimental evidence favors the use and continued development of nearest neighbor algorithms for domains such as the ones studied here.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,629 | test |
1-hop neighbor's text information: The BATmobile: Towards a Bayesian automated taxi. : The problem of driving an autonomous vehicle in normal traffic engages many areas of AI research and has substantial economic significance. We describe a new approach to this problem based on a decision-theoretic architecture using dynamic probabilistic networks. The architecture provides a sound solution to the problems of sensor noise, sensor failure, and uncertainty about the behavior of other vehicles and about the effects of one's own actions. We report on several advances in the theory and practice of inference and decision making in dynamic, partially observable domains. Our approach has been implemented in a simulation system, and the autonomous vehicle successfully negotiates a variety of difficult situations. Multiple submissions: This paper has not already been accepted by and is not currently under review for a journal or another conference. Nor will it be submitted for such during IJCAI's review period.
1-hop neighbor's text information: A case study in dynamic belief networks: monitoring walking, fall prediction and detection. :
1-hop neighbor's text information: The data association problem when monitoring robot vehicles using dynamic belief networks. : We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a Dynamic Belief Network; it is used to reason under uncertainty about both the causes and consequences of the events being monitored. The basic dynamic construction of the network is data-driven. However the model construction process combines sensor data about events with externally provided information about agents' behaviour, and knowledge already contained within the model, to control the size and complexity of the network. This means that both the network structure within a time interval, and the amount of history and detail maintained, can vary over time. We illustrate the system with the example domain of monitoring robot vehicles and people in a restricted dynamic environment using light-beam sensor data. In addition to presenting a generic network structure for monitoring domains, we describe the use of more complex network structures which address two specific monitoring problems, sensor validation and the Data Association Problem.
Target text information: Fall diagnosis using dynamic belief networks. : The task is to monitor walking patterns and give early warning of falls using foot switch and mercury trigger sensors. We describe a dynamic belief network model for fall diagnosis which, given evidence from sensor observations, outputs beliefs about the current walking status and makes predictions regarding future falls. The model represents possible sensor error and is parametrised to allow customisation to the individual being monitored.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 2,084 | test |
1-hop neighbor's text information: A Model of Bias Learning. : In this paper the problem of learning appropriate domain-specific bias is addressed. It is shown that this can be achieved by learning many related tasks from the same domain, and a theorem is given bounding the number of tasks that must be learnt. A corollary of the theorem is that if the tasks are known to possess a common internal representation or preprocessing then the number of examples required per task for good generalisation when learning n tasks simultaneously scales like O(a + b/n). Experimental support for the theoretical results is reported.
Target text information: Theoretical Models of Learning to Learn. : A machine can only learn if it is biased in some way. Typically the bias is supplied by hand, for example through the choice of an appropriate set of features. However, if the learning machine is embedded within an environment of related tasks, then it can learn its own bias by learning sufficiently many tasks from the environment [4, 6]. In this paper two models of bias learning (or equivalently, learning to learn) are introduced and the main theoretical results are presented. The first model is a PAC-type model based on empirical process theory, while the second is a hierarchical Bayes model.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 117 | test |
1-hop neighbor's text information: Blind separation of delayed sources based on information maximisation, : Blind separation of independent sources from their convolutive mixtures is a problem in many real world multi-sensor applications. In this paper we present a solution to this problem based on the information maximization principle, which was recently proposed by Bell and Sejnowski for the case of blind separation of instantaneous mixtures. We present a feedback network architecture capable of coping with convolutive mixtures, and we derive the adaptation equations for the adaptive filters in the network by maximizing the information transferred through the network. Examples using speech signals are presented to illustrate the algorithm.
1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. Please send comments to tony@salk.edu. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523.
1-hop neighbor's text information: A new learning algorithm for blind signal separation. : A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural network like model. The validity of the new learning algorithm is verified by computer simulations.
Target text information: A Context-Sensitive Generalization of ICA: Source separation arises in a surprising number of signal processing applications, from speech recognition to EEG analysis. In the square linear blind source separation problem without time delays, one must find an unmixing matrix which can detangle the result of mixing n unknown independent sources through an unknown n × n mixing matrix. The recently introduced ICA blind source separation algorithm (Baram and Roth 1994; Bell and Sejnowski 1995) is a powerful and surprisingly simple technique for solving this problem. ICA is all the more remarkable for performing so well despite making absolutely no use of the temporal structure of its input! This paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem. cICA can incorporate arbitrarily complex adaptive history-sensitive source models, and thereby make use of the temporal structure of its input. This allows it to separate in a number of situations where standard ICA cannot, including sources with low kurtosis, colored gaussian sources, and sources which have gaussian histograms. Since ICA is a special case of cICA, the MLE derivation provides as a corollary a rigorous derivation of classic ICA.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 645 | test |
1-hop neighbor's text information: On the Virtues of Parameterized Uniform Crossover, : Traditionally, genetic algorithms have relied upon 1 and 2-point crossover operators. Many recent empirical studies, however, have shown the benefits of higher numbers of crossover points. Some of the most intriguing recent work has focused on uniform crossover, which involves on the average L/2 crossover points for strings of length L. Theoretical results suggest that, from the view of hyperplane sampling disruption, uniform crossover has few redeeming features. However, a growing body of experimental evidence suggests otherwise. In this paper, we attempt to reconcile these opposing views of uniform crossover and present a framework for understanding its virtues.
1-hop neighbor's text information: Genetic Algorithms as Multi-Coordinators in Large-Scale Optimization: We present high-level, decomposition-based algorithms for large-scale block-angular optimization problems containing integer variables, and demonstrate their effectiveness in the solution of large-scale graph partitioning problems. These algorithms combine the subproblem-coordination paradigm (and lower bounds) of price-directive decomposition methods with knapsack and genetic approaches to the utilization of "building blocks" of partial solutions. Even for graph partitioning problems requiring billions of variables in a standard 0-1 formulation, this approach produces high-quality solutions (as measured by deviations from an easily computed lower bound), and substantially outperforms widely-used graph partitioning techniques based on heuristics and spectral methods.
1-hop neighbor's text information: DISTRIBUTED GENETIC ALGORITHMS FOR PARTITIONING UNIFORM GRIDS:
Target text information: User's Guide to the PGAPack Parallel Genetic Algorithm Library Version 0.2. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 3 | test |
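As context for the genetic-algorithm records above: the core loop such libraries implement (selection, crossover, mutation over a population) can be sketched in a few lines. This one-max example is a generic illustration of that loop, not PGAPack's actual API:

```python
import random

def one_max_ga(length=20, pop_size=30, generations=60, p_mut=0.02, seed=0):
    """Tiny generational GA maximizing the number of 1-bits in a bitstring.

    Tournament selection, one-point crossover, bit-flip mutation: the same
    operator loop that libraries such as PGAPack parallelize across processes.
    """
    rng = random.Random(seed)
    fitness = sum  # fitness of a bitstring = number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = one_max_ga()
print(sum(best))  # close to the optimum of 20 for these settings
```

The parallel libraries discussed in these records distribute exactly this evaluation/breeding loop over subpopulations or processors; the operators themselves stay the same.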
1-hop neighbor's text information: Learning Classification Rules Using Lattices. : This paper presents a novel induction algorithm, Rulearner, which induces classification rules using a Galois lattice as an explicit map through the search space of rules. The Rulearner system is shown to compare favorably with commonly used symbolic learning methods which use heuristics rather than an explicit map to guide their search through the rule space. Furthermore, our learning system is shown to be robust in the presence of noisy data. The Rulearner system is also capable of learning both decision lists and unordered rule sets allowing for comparisons of these different learning paradigms within the same algorithmic framework.
Target text information: IGLUE: An Instance-based Learning System over Lattice Theory: Concept learning is one of the most studied areas in machine learning. A lot of work in this domain deals with decision trees. In this paper, we are concerned with a different kind of technique based on Galois lattices or concept lattices. We present a new semi-lattice based system, IGLUE, that uses the entropy function with a top-down approach to select concepts during the lattice construction. Then IGLUE generates new relevant numerical features by transforming initial boolean features over these concepts. IGLUE uses the new features to redescribe examples. Finally, IGLUE applies the Mahalanobis distance as a similarity measure between examples. Keywords : Multistrategy Learning, Instance-Based Learning, Galois lattice, Feature transformation
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 74 | test |
1-hop neighbor's text information: Generative models for discovering sparse distributed representations. : We describe a hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.
1-hop neighbor's text information: The EM algorithm for mixtures of factor analyzers. : Technical Report CRG-TR-96-1, May 21, 1996 (revised Feb 27, 1997). Factor analysis, a statistical method for modeling the covariance structure of high dimensional data using a small number of latent variables, can be extended by allowing different local factor models in different regions of the input space. This results in a model which concurrently performs clustering and dimensionality reduction, and can be thought of as a reduced dimension mixture of Gaussians. We present an exact Expectation-Maximization algorithm for fitting the parameters of this mixture of factor analyzers.
1-hop neighbor's text information: A new view of the EM algorithm that justifies incremental and other variants. : The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible.
Target text information: (in press). A hierarchical community of experts. : We describe a directed acyclic graphical model that contains a hierarchy of linear units and a mechanism for dynamically selecting an appropriate subset of these units to model each observation. The non-linear selection mechanism is a hierarchy of binary units each of which gates the output of one of the linear units. There are no connections from linear units to binary units, so the generative model can be viewed as a logistic belief net (Neal 1992) which selects a skeleton linear model from among the available linear units. We show that Gibbs sampling can be used to learn the parameters of the linear and binary units even when the sampling is so brief that the Markov chain is far from equilibrium.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,755 | test |
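The EM algorithm that recurs in the abstracts of this record can be illustrated on its simplest case, a two-component 1-D Gaussian mixture. A minimal sketch (toy data and initialization of my own choosing, not from any of the cited papers):

```python
import math

def em_gmm_1d(xs, iters=50):
    """EM for a 2-component 1-D Gaussian mixture.

    E step: responsibilities via Bayes' rule; M step: closed-form
    weighted maximum-likelihood updates of weights, means, variances.
    """
    w = [0.5, 0.5]
    mu = [min(xs), max(xs)]   # crude but effective initialization
    var = [1.0, 1.0]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E step: posterior probability that each point came from each component
        r = []
        for x in xs:
            p = [w[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            z = sum(p)
            r.append([pk / z for pk in p])
        # M step: update each component from its responsibility-weighted data
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(xs)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
            var[k] = sum(ri[k] * (x - mu[k]) ** 2 for ri, x in zip(r, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var

w, mu, var = em_gmm_1d([-0.1, 0.0, 0.1, 4.9, 5.0, 5.1])
print(round(mu[0], 2), round(mu[1], 2))  # means near 0.0 and 5.0
```

The mixture-of-factor-analyzers paper above extends exactly these closed-form M-step updates to local linear-Gaussian latent models, and the factorial-HMM paper replaces the exact E step with structured approximations when it becomes intractable.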
1-hop neighbor's text information: Context-specific independence in Bayesian networks. : Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme, tree-structured CPTs, for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning.
1-hop neighbor's text information: Factorial hidden Markov models. : Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variable: the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forward-backward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach's chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot.
1-hop neighbor's text information: Global conditioning for probabilistic inference in belief networks. : In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.
Target text information: Structured Representation of Complex Stochastic Systems: This paper considers the problem of representing complex systems that evolve stochastically over time. Dynamic Bayesian networks provide a compact representation for stochastic processes. Unfortunately, they are often unwieldy since they cannot explicitly model the complex organizational structure of many real life systems: the fact that processes are typically composed of several interacting subprocesses, each of which can, in turn, be further decomposed. We propose a hierarchically structured representation language which extends both dynamic Bayesian networks and the object-oriented Bayesian network framework of [9], and show that our language allows us to describe such systems in a natural and modular way. Our language supports a natural representation for certain system characteristics that are hard to capture using more traditional frameworks. For example, it allows us to represent systems where some processes evolve at a different rate than others, or systems where the processes interact only intermittently. We provide a simple inference mechanism for our representation via translation to Bayesian networks, and suggest ways in which the inference algorithm can exploit the additional structure encoded in our representation.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 467 | val |
1-hop neighbor's text information: Automatic Generation of Adaptive Programs. In From Animals to Animats:
1-hop neighbor's text information: Evolving Teamwork and Coordination with Genetic Programming: Some problems can be solved only by multi-agent teams. In using genetic programming to produce such teams, one faces several design decisions. First, there are questions of team diversity and of breeding strategy. In one commonly used scheme, teams consist of clones of single individuals; these individuals breed in the normal way and are cloned to form teams during fitness evaluation. In contrast, teams could also consist of distinct individuals. In this case one can either allow free interbreeding between members of different teams, or one can restrict interbreeding in various ways. A second design decision concerns the types of coordination-facilitating mechanisms provided to individual team members; these range from sensors of various sorts to complex communication systems. This paper examines three breeding strategies (clones, free, and restricted) and three coordination mechanisms (none, deictic sensing, and name-based sensing) for evolving teams of agents in the Serengeti world, a simple predator/prey environment. Among the conclusions are the fact that a simple form of restricted interbreeding outperforms free interbreeding in all teams with distinct individuals, and the fact that name-based sensing consistently outperforms deictic sensing.
1-hop neighbor's text information: "The Evolution of Agents that Build Mental Models and Create Simple Plans Using Genetic Programming," : An essential component of an intelligent agent is the ability to notice, encode, store, and utilize information about its environment. Traditional approaches to program induction have focused on evolving functional or reactive programs. This paper presents MAPMAKER, an approach to the automatic generation of agents that discover information about their environment, encode this information for later use, and create simple plans utilizing the stored mental models. In this approach, agents are multipart computer programs that communicate through a shared memory. Both the programs and the representation scheme are evolved using genetic programming. An illustrative problem of 'gold' collection is used to demonstrate the approach in which one part of a program makes a map of the world and stores it in memory, and the other part uses this map to find the gold. The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans.
Target text information: Simultaneous evolution of programs and their control structures. :
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 1,345 | test |
1-hop neighbor's text information: Goal-Driven Learning. : In Artificial Intelligence, Psychology, and Education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with different goals process information differently; studies in education show that goals have strong effects on what students learn; and functional arguments from machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated by the symposium, placing them in the context of open questions and current research directions in goal-driven learning. Technical Report #85, Cognitive Science Program, Indiana University, Bloomington, Indiana, January 1993.
1-hop neighbor's text information: Inferential Theory of Learning: Developing Foundations for Multistrategy Learning, in Machine Learning: A Multistrategy Approach, Vol. IV, R.S. : The development of multistrategy learning systems should be based on a clear understanding of the roles and the applicability conditions of different learning strategies. To this end, this chapter introduces the Inferential Theory of Learning that provides a conceptual framework for explaining logical capabilities of learning strategies, i.e., their competence. Viewing learning as a process of modifying the learner's knowledge by exploring the learner's experience, the theory postulates that any such process can be described as a search in a knowledge space, triggered by the learner's experience and guided by learning goals. The search operators are instantiations of knowledge transmutations, which are generic patterns of knowledge change. Transmutations may employ any basic type of inference: deduction, induction or analogy. Several fundamental knowledge transmutations are described in a novel and general way, such as generalization, abstraction, explanation and similization, and their counterparts, specialization, concretion, prediction and dissimilization, respectively. Generalization enlarges the reference set of a description (the set of entities that are being described). Abstraction reduces the amount of detail about the reference set. Explanation generates premises that explain (or imply) the given properties of the reference set. Similization transfers knowledge from one reference set to a similar reference set. Using concepts of the theory, a multistrategy task-adaptive learning (MTL) methodology is outlined and illustrated by an example. MTL dynamically adapts strategies to the learning task, defined by the input information, learner's background knowledge, and the learning goal. It aims at synergistically integrating a whole range of inferential learning strategies, such as empirical generalization, constructive induction, deductive generalization, explanation, prediction, abstraction, and similization.
1-hop neighbor's text information: An architecture for goal-driven explanation. : In complex and changing environments explanation must be a dynamic and goal-driven process. This paper discusses an evolving system implementing a novel model of explanation generation, Goal-Driven Interactive Explanation, that models explanation as a goal-driven, multi-strategy, situated process interweaving reasoning with action. We describe a preliminary implementation of this model in gobie, a system that generates explanations for its internal use to support plan generation and execution.
Target text information: Issues in goal-driven explanation. : When a reasoner explains surprising events for its internal use, a key motivation for explaining is to perform learning that will facilitate the achievement of its goals. Human explainers use a range of strategies to build explanations, including both internal reasoning and external information search, and goal-based considerations have a profound effect on their choices of when and how to pursue explanations. However, standard AI models of explanation rely on goal-neutral use of a single fixed strategy (generally backwards chaining) to build their explanations. This paper argues that explanation should be modeled as a goal-driven learning process for gathering and transforming information, and discusses the issues involved in developing an active multi-strategy process for goal-driven explanation.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 5 | train |
1-hop neighbor's text information: A bound on the error of Cross Validation using the approximation and estimation rates, with consequences for the training-test split. : We give an analysis of the generalization error of cross validation in terms of two natural measures of the difficulty of the problem under consideration: the approximation rate (the accuracy to which the target function can be ideally approximated as a function of the number of hypothesis parameters), and the estimation rate (the deviation between the training and generalization errors as a function of the number of hypothesis parameters). The approximation rate captures the complexity of the target function with respect to the hypothesis model, and the estimation rate captures the extent to which the hypothesis model suffers from overfitting. Using these two measures, we give a rigorous and general bound on the error of cross validation. The bound clearly shows the tradeoffs involved with making γ, the fraction of data saved for testing, too large or too small. By optimizing the bound with respect to γ, we then argue (through a combination of formal analysis, plotting, and controlled experimentation) that the following qualitative properties of cross validation behavior should be quite robust to significant changes in the underlying model selection problem:
1-hop neighbor's text information: "Induction of Decision Trees," :
1-hop neighbor's text information: An experimental and theoretical comparison of model selection methods. : We investigate the problem of model selection in the setting of supervised learning of boolean functions from independent random examples. More precisely, we compare methods for finding a balance between the complexity of the hypothesis chosen and its observed error on a random training sample of limited size, when the goal is that of minimizing the resulting generalization error. We undertake a detailed comparison of three well-known model selection methods: a variation of Vapnik's Guaranteed Risk Minimization (GRM), an instance of Rissanen's Minimum Description Length Principle (MDL), and cross validation (CV). We introduce a general class of model selection methods (called penalty-based methods) that includes both GRM and MDL, and provide general methods for analyzing such rules. We provide both controlled experimental evidence and formal theorems to support the following conclusions: * The class of penalty-based methods is fundamentally handicapped in the sense that there exist two types of model selection problems for which every penalty-based method must incur large generalization error on at least one, while CV enjoys small generalization error on both. Despite the inescapable incomparability of model selection methods under certain circumstances, we conclude with a discussion of our belief that the balance of the evidence provides specific reasons to prefer CV to other methods, unless one is in possession of detailed problem-specific information.
Target text information: Preventing "overfitting" of Cross-Validation data. : Suppose that, for a learning task, we have to select one hypothesis out of a set of hypotheses (that may, for example, have been generated by multiple applications of a randomized learning algorithm). A common approach is to evaluate each hypothesis in the set on some previously unseen cross-validation data, and then to select the hypothesis that had the lowest cross-validation error. But when the cross-validation data is partially corrupted such as by noise, and if the set of hypotheses we are selecting from is large, then "folklore" also warns about "overfitting" the cross-validation data [Klockars and Sax, 1986, Tukey, 1949, Tukey, 1953]. In this paper, we explain how this "overfitting" really occurs, and show the surprising result that it can be overcome by selecting a hypothesis with a higher cross-validation error, over others with lower cross-validation errors. We give reasons for not selecting the hypothesis with the lowest cross-validation error, and propose a new algorithm, LOOCVCV, that uses a computationally efficient form of leave-one-out cross-validation to select such a hypothesis. Finally, we present experimental results for one domain, that show LOOCVCV consistently beating picking the hypothesis with the lowest cross-validation error, even when using reasonably large cross-validation sets.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 1,246 | train |
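The selection rule discussed in the target abstract — keep whichever hypothesis has the lowest cross-validation error — can be sketched in a few lines. This is the naive rule the paper argues can overfit a noisy CV set when the hypothesis pool is large; all names and data here are illustrative:

```python
def cv_error(h, data):
    """Fraction of cross-validation examples the hypothesis misclassifies."""
    return sum(h(x) != y for x, y in data) / len(data)

def select_lowest_cv(hypotheses, cv_data):
    """The naive rule: pick the hypothesis that looks best on the CV set."""
    return min(hypotheses, key=lambda h: cv_error(h, cv_data))

# Two threshold hypotheses and a tiny CV set; the first fits all three points.
hyps = [lambda x: x > 0, lambda x: x > 2]
cv = [(1, True), (3, True), (-1, False)]
best = select_lowest_cv(hyps, cv)
print(cv_error(best, cv))  # -> 0.0
```

With many hypotheses and noisy CV labels, the minimum of many noisy error estimates is biased downward — the "overfitting" that LOOCVCV is designed to counteract.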
1-hop neighbor's text information: PAC learning intersections of halfspaces with membership queries. :
1-hop neighbor's text information: Learning from Incomplete Boundary Queries Using Split Graphs and Hypergraphs (Extended Abstract): We consider learnability with membership queries in the presence of incomplete information. In the incomplete boundary query model introduced by Blum et al. [7], it is assumed that membership queries on instances near the boundary of the target concept may receive a "don't know" answer. We show that zero-one threshold functions are efficiently learnable in this model. The learning algorithm uses split graphs when the boundary region has radius 1, and their generalization to split hypergraphs (for which we give a split-finding algorithm) when the boundary region has constant radius greater than 1. We use a notion of indistinguishability of concepts that is appropriate for this model.
1-hop neighbor's text information: Learning to model sequences generated by switching distributions. : We study efficient algorithms for solving the following problem, which we call the switching distributions learning problem. A sequence S = σ_1 σ_2 ... σ_n over a finite alphabet Σ is generated in the following way. The sequence is a concatenation of K runs, each of which is a consecutive subsequence. Each run is generated by independent random draws from a distribution ~p_i over Σ, where ~p_i is an element of a set of distributions {~p_1, ..., ~p_N}. The learning algorithm is given this sequence and its goal is to find approximations of the distributions ~p_1, ..., ~p_N, and give an approximate segmentation of the sequence into its constituting runs. We give an efficient algorithm for solving this problem and show conditions under which the algorithm is guaranteed to work with high probability.
Target text information: Learning with unreliable boundary queries. : We introduce a model for learning from examples and membership queries in situations where the boundary between positive and negative examples is somewhat ill-defined. In our model, queries near the boundary of a target concept may receive incorrect or "don't care" responses, and the distribution of examples has zero probability mass on the boundary region. The motivation behind our model is that in many cases the boundary between positive and negative examples is complicated or "fuzzy." However, one may still hope to learn successfully, because the typical examples that one sees do not come from that region. We present several positive results in this new model. We show how to learn the intersection of two arbitrary halfspaces when membership queries near the boundary may be answered incorrectly. Our algorithm is an extension of an algorithm of Baum [7, 6] that learns the intersection of two halfspaces whose bounding planes pass through the origin in the PAC-with-membership-queries model. We also describe algorithms for learning several subclasses of monotone DNF formulas.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 919 | test |
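The unreliable-boundary-query model above can be illustrated on the simplest concept class, a one-dimensional threshold: membership queries inside a small radius of the true boundary answer "don't know", and the learner only needs an estimate accurate to about that radius. The oracle and search below are an illustrative sketch, not the paper's halfspace algorithm:

```python
def make_oracle(theta, radius):
    """Membership oracle with an ill-defined boundary: queries within
    `radius` of the true threshold `theta` return "don't know" (None)."""
    def oracle(x):
        if abs(x - theta) < radius:
            return None
        return x >= theta
    return oracle

def learn_threshold(oracle, lo=0.0, hi=1.0, eps=1e-3):
    """Binary search for the threshold; a None answer means we are already
    inside the fuzzy boundary region, which is as close as we can get."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        ans = oracle(mid)
        if ans is None:
            return mid
        if ans:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

est = learn_threshold(make_oracle(0.37, 0.01))
print(abs(est - 0.37) < 0.02)  # -> True
```

The key point mirrored from the abstract: because examples have no probability mass on the boundary region, "don't know" answers there are harmless.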
1-hop neighbor's text information: Case Retrieval Nets: Basic ideas and extensions. : An efficient retrieval of a relatively small number of relevant cases from a huge case base is a crucial subtask of Case-Based Reasoning (CBR). In this article, we present Case Retrieval Nets, a memory model that has recently been developed for this task. The main idea is to apply a spreading activation process to the case memory structured as a Case Retrieval Net in order to retrieve cases being sufficiently similar to a posed query case. This article summarizes the basic ideas of Case Retrieval Nets and suggests some useful extensions.
Target text information: Technical Diagnosis: Fallexperte-D. : Case based reasoning (CBR) uses the knowledge from former experiences ("known cases"). Since special knowledge of an expert is mainly subject to his experiences, the CBR techniques are a good base for the development of expert systems. We investigate the problem for technical diagnosis; the use of further knowledge sources (domain knowledge, common knowledge) is investigated as well. Diagnosis is not considered as a classification task, but as a process to be guided by computer assisted experience. This corresponds to the flexible "case completion" approach. Flexibility is also needed for the expert view with predominant interest in the unexpected, unpredictable cases.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 547 | train |
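The spreading-activation retrieval idea in the Case Retrieval Net abstract reduces to a few lines: activate the query's information entities (IEs), propagate their weights to every case connected to them, and rank cases by accumulated activation. The index and the diagnosis-flavored IE names are invented for illustration:

```python
def retrieve(query, ie_to_cases, top_k=1):
    """Toy Case Retrieval Net: spread each query IE's activation to the
    cases sharing that IE, then rank cases by total activation."""
    scores = {}
    for ie, weight in query.items():
        for case in ie_to_cases.get(ie, ()):
            scores[case] = scores.get(case, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical inverted index from IEs to diagnosis cases.
ie_index = {"fan-noise": ["case-A", "case-B"], "overheating": ["case-B"]}
print(retrieve({"fan-noise": 1.0, "overheating": 1.0}, ie_index))  # -> ['case-B']
```

A real Case Retrieval Net also spreads activation between similar IEs before reaching case nodes; this sketch keeps only the IE-to-case step.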
1-hop neighbor's text information: An O(n^(log log n)) learning algorithm for DNF under the uniform distribution. : We show that a DNF with terms of size at most d can be approximated by a function with at most d^(O(d log 1/ε)) non-zero Fourier coefficients such that the expected error squared, with respect to the uniform distribution, is at most ε. This property is used to derive a learning algorithm for DNF, under the uniform distribution. The learning algorithm uses queries and learns, with respect to the uniform distribution, a DNF with terms of size at most d in time polynomial in n and d^(O(d log 1/ε)). The interesting implications are for the case when ε is constant. In this case our algorithm learns a DNF with a polynomial number of terms in time n^(O(log log n)), and a DNF with terms of size at most O(log n / log log n) in polynomial time.
1-hop neighbor's text information: Weakly Learning DNF and Characterizing Statistical Query Learning Using Fourier Analysis, : We present new results, both positive and negative, on the well-studied problem of learning disjunctive normal form (DNF) expressions. We first prove that an algorithm due to Kushilevitz and Mansour [16] can be used to weakly learn DNF using membership queries in polynomial time, with respect to the uniform distribution on the inputs. This is the first positive result for learning unrestricted DNF expressions in polynomial time in any nontrivial formal model of learning. It provides a sharp contrast with the results of Kharitonov [15], who proved that AC^0 is not efficiently learnable in the same model (given certain plausible cryptographic assumptions). We also present efficient learning algorithms in various models for the read-k and SAT-k subclasses of DNF. For our negative results, we turn our attention to the recently introduced statistical query model of learning [11]. This model is a restricted version of the popular Probably Approximately Correct (PAC) model [23], and practically every class known to be efficiently learnable in the PAC model is in fact learnable in the statistical query model [11]. Here we give a general characterization of the complexity of statistical query learning in terms of the number of uncorrelated functions in the concept class. This is a distribution-dependent quantity yielding upper and lower bounds on the number of statistical queries required for learning on any input distribution. As a corollary, we obtain that DNF expressions and decision trees are not even weakly learnable with respect to the uniform input distribution in polynomial time in the statistical query model. This result is information-theoretic and therefore does not rely on any unproven assumptions. It demonstrates that no simple modification of the existing algorithms in the computational learning theory literature for learning various restricted forms of DNF and decision trees from passive random examples (and also several algorithms proposed in the experimental machine learning communities, such as the ID3 algorithm for decision trees [22] and its variants) will solve the general problem. The unifying tool for all of our results is the Fourier analysis of a finite class of boolean functions on the hypercube.
Target text information: Learning Using Group Representations (Extended Abstract): We consider the problem of learning functions over a fixed distribution. An algorithm by Kushilevitz and Mansour [7] learns any boolean function over {0,1}^n in time polynomial in the L_1-norm of the Fourier transform of the function. We show that the KM-algorithm is a special case of a more general class of learning algorithms. This is achieved by extending their ideas using representations of finite groups. We introduce some new classes of functions which can be learned using this generalized KM algorithm.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 228 | val |
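The Fourier machinery behind the KM algorithm and its group-representation generalization rests on coefficients f̂(S) = E_x[f(x)·χ_S(x)] over the boolean cube, where χ_S(x) = (-1)^(Σ_{i∈S} x_i). A brute-force computation of one such coefficient (exponential in n, so illustration only — the learning algorithms above instead estimate large coefficients from samples):

```python
import itertools

def fourier_coefficient(f, S, n):
    """Exact Fourier coefficient of f: {0,1}^n -> {-1,+1} with respect to
    the parity character chi_S, averaged over all 2^n inputs."""
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        chi = (-1) ** sum(x[i] for i in S)
        total += f(x) * chi
    return total / 2 ** n

# The parity of bits 0 and 1 *is* chi_{0,1}, so its coefficient there is 1.
parity01 = lambda x: (-1) ** (x[0] ^ x[1])
print(fourier_coefficient(parity01, (0, 1), 3))  # -> 1.0
print(fourier_coefficient(parity01, (0,), 3))    # -> 0.0
```

Orthonormality of the characters is what makes the second coefficient vanish; the group-representation view in the target abstract replaces these parities by matrix entries of irreducible representations.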
1-hop neighbor's text information: A unified treatment of uncertainties. : "Uncertainty in artificial intelligence" is an active research field, where several approaches have been suggested and studied for dealing with various types of uncertainty. However, it's hard to rank the approaches in general, because each of them is usually aimed at a special application environment. This paper begins by defining such an environment, then shows why some existing approaches cannot be used in such a situation. Then a new approach, Non-Axiomatic Reasoning System, is introduced to work in the environment. The system is designed under the assumption that the system's knowledge and resources are usually insufficient to handle the tasks imposed by its environment. The system can consistently represent several types of uncertainty, and can carry out multiple operations on these uncertainties. Finally, the new approach is compared with the previous approaches in terms of uncertainty representation and interpretation.
1-hop neighbor's text information: From inheritance relation to non-axiomatic logic. : At the beginning of the paper, three binary term logics are defined. The first is based only on an inheritance relation. The second and the third suggest a novel way to process extension and intension, and they also have interesting relations with Aristotle's syllogistic logic. Based on the three simple systems, a Non-Axiomatic Logic is defined. It has a term-oriented language and an experience-grounded semantics. It can uniformly represent and process randomness, fuzziness, and ignorance. It can also uniformly carry out deduction, abduction, induction, and revision.
1-hop neighbor's text information: Reference classes and multiple inheritances. : The reference class problem in probability theory and the multiple inheritances (extensions) problem in non-monotonic logics can be referred to as special cases of conflicting beliefs. The current solution accepted in the two domains is the specificity priority principle. By analyzing an example, several factors (ignored by the principle) are found to be relevant to the priority of a reference class. A new approach, Non-Axiomatic Reasoning System (NARS), is discussed, where these factors are all taken into account. It is argued that the solution provided by NARS is better than the solutions provided by probability theory and non-monotonic logics.
Target text information: Belief revision in probability theory. : In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments are properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Upon properly distinguishing belief revision from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 1,063 | test |
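Bayes' theorem as an updating rule — the operation the target abstract distinguishes from genuine belief revision — looks like this in a discrete setting. The hypothesis names and numbers are made up for the demo:

```python
def bayes_condition(prior, likelihood, evidence):
    """Bayesian conditioning P(h | e) proportional to P(e | h) P(h): the
    *updating* operation the target abstract argues is not general revision."""
    post = {h: prior[h] * likelihood[h][evidence] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

prior = {"H": 0.5, "not_H": 0.5}
lik = {"H": {"e": 0.9}, "not_H": {"e": 0.3}}
post = bayes_condition(prior, lik, "e")
print(round(post["H"], 6))  # -> 0.75
```

Note what the function needs: explicit likelihoods P(e | h) for the new evidence. The abstract's point is that when evidence instead challenges the prior's implicit conditions, no such likelihood table exists, so conditioning (and likewise Jeffrey's rule) cannot perform the revision.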
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Complexity Compression and Evolution. : Compression of information is an important concept in the theory of learning. We argue for the hypothesis that there is an inherent compression pressure towards short, elegant and general solutions in a genetic programming system and other variable length evolutionary algorithms. This pressure becomes visible if the size or complexity of solutions is measured without non-effective code segments called introns. The built-in parsimony pressure affects complex fitness functions, crossover probability, generality, maximum depth or length of solutions, explicit parsimony, granularity of fitness function, initialization depth or length, and modularization. Some of these effects are positive and some are negative. In this work we provide a basis for an analysis of these effects and suggestions to overcome the negative implications in order to obtain the balance needed for successful evolution. An empirical investigation that supports our hypothesis is also presented.
Target text information: Signal Path Oriented Approach for Generation of Dynamic Process Models: The article at hand discusses a tool for automatic generation of structured models for complex dynamic processes by means of genetic programming. In contrast to other techniques which use genetic programming to find an appropriate arithmetic expression in order to describe the input-output behaviour of a process, this tool is based on a block oriented approach with a transparent description of signal paths. A short survey on other techniques for computer based system identification is given and the basic concept of SMOG (Structured MOdel Generator) is described. Furthermore latest extensions of the system are presented in detail, including automatically defined sub-models and qualitative fitness criteria.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 542 | test |
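The evolutionary search that a system like SMOG applies to block-structured models is easiest to see on a toy fitness landscape. The sketch below is a generic generational GA on the one-max problem — not SMOG's block-oriented representation — with truncation selection, one-point crossover, point mutation, and elitism; all constants are illustrative:

```python
import random

def evolve_max_ones(n_bits=20, pop=30, gens=100, seed=1):
    """Minimal generational GA maximizing the number of 1-bits."""
    rng = random.Random(seed)
    fitness = sum
    P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        kids = [P[0][:]]                              # elitism: keep the best
        while len(kids) < pop:
            a, b = rng.sample(P[: pop // 2], 2)       # truncation selection
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(n_bits)] ^= 1     # point mutation
            kids.append(child)
        P = kids
    return max(map(fitness, P))

print(evolve_max_ones() >= 18)  # near-optimal after 100 generations
```

In SMOG the genome is a tree of signal-processing blocks rather than a bitstring, and fitness compares simulated model output against measured process data, but the selection/crossover/mutation loop has the same shape.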
1-hop neighbor's text information: Supervised learning from incomplete data via an EM approach. : Real-world learning tasks may involve high-dimensional data sets with arbitrary patterns of missing data. In this paper we present a framework based on maximum likelihood density estimation for learning from such data sets. We use mixture models for the density estimates and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977) in deriving a learning algorithm|EM is used both for the estimation of mixture components and for coping with missing data. The resulting algorithm is applicable to a wide range of supervised as well as unsupervised learning problems. Results from a classification benchmark|the iris data set|are presented.
Target text information: Recurrent Neural Networks for Missing or Asynchronous Data: In this paper we propose recurrent neural networks with feedback into the input units for handling two types of data analysis problems. On the one hand, this scheme can be used for static data when some of the input variables are missing. On the other hand, it can also be used for sequential data, when some of the input variables are missing or are available at different frequencies. Unlike in the case of probabilistic models (e.g. Gaussian) of the missing variables, the network does not attempt to model the distribution of the missing variables given the observed variables. Instead it is a more "discriminant" approach that fills in the missing variables for the sole purpose of minimizing a learning criterion (e.g., to minimize an output error).
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 242 | test |
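The "discriminant" fill-in strategy the target abstract describes — choose missing input values that minimize the model's output error, rather than modelling their distribution — can be shown with a linear model and plain gradient descent. Everything here (the model, the values, the function name) is a made-up illustration, not the paper's recurrent architecture:

```python
def fill_missing(w, x, y, missing_idx, lr=0.1, steps=200):
    """Fill in x[missing_idx] by gradient descent on the squared output
    error of a known linear model y_hat = w . x (discriminant-style
    imputation: no distribution over the missing value is modelled)."""
    x = list(x)
    x[missing_idx] = 0.0                       # initial guess
    for _ in range(steps):
        pred = sum(wi * xi for wi, xi in zip(w, x))
        grad = 2 * (pred - y) * w[missing_idx]  # d(error^2)/d(x_missing)
        x[missing_idx] -= lr * grad
    return x[missing_idx]

# True model y = 1*x0 + 2*x1; observing x0 = 1 and y = 5 implies x1 = 2.
print(round(fill_missing([1.0, 2.0], [1.0, None], 5.0, 1), 2))  # -> 2.0
```

The recurrent networks in the target paper do the analogous thing through feedback connections into the input units, so the same mechanism also handles sequential data with asynchronously available inputs.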
1-hop neighbor's text information: Transfer of Learning by Composing Solutions of Elemental Sequential Tasks, : Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focussed on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm.
1-hop neighbor's text information: Learning Hierarchical Control Structures for Multiple Tasks and Changing Environments, : While the need for hierarchies within control systems is apparent, it is also clear to many researchers that such hierarchies should be learned. Learning both the structure and the component behaviors is a difficult task. The benefit of learning the hierarchical structures of behaviors is that the decomposition of the control structure into smaller transportable chunks allows previously learned knowledge to be applied to new but related tasks. Presented in this paper are improvements to Nested Q-learning (NQL) that allow more realistic learning of control hierarchies in reinforcement environments. Also presented is a simulation of a simple robot performing a series of related tasks that is used to compare both hierarchical and non-hierarchical learning techniques.
1-hop neighbor's text information: Learning in continuous domains with delayed rewards. : Much has been done to develop learning techniques for delayed reward problems in worlds where the actions and/or states are approximated by discrete representations. Although this is acceptable in some applications there are many more situations where such an approximation is difficult and unnatural. For instance, in applications such as robotics, where real machines interact with the real world, learning techniques that use real-valued continuous quantities are required. Presented in this paper is an extension to Q-learning that uses both real-valued states and actions. This is achieved by introducing activation strengths to each actuator system of the robot. This allows all actuators to be active to some continuous amount simultaneously. Learning occurs by incrementally adapting both the expected future reward to goal evaluation function and the gradients of that function with respect to each actuator system.
Target text information: Emergent Hierarchical Control Structures: Learning Reactive/Hierarchical Relationships in Reinforcement Environments, : The use of externally imposed hierarchical structures to reduce the complexity of learning control is common. However, it is acknowledged that learning the hierarchical structure itself is an important step towards more general (learning of many things as required) and less bounded (learning of a single thing as specified) learning. Presented in this paper is a reinforcement learning algorithm called Nested Q-learning that generates a hierarchical control structure in reinforcement learning domains. The emergent structure combined with learned bottom-up reactive reactions results in a reactive hierarchical control system. Effectively, the learned hierarchy decomposes what would otherwise be a monolithic evaluation function into many smaller evaluation functions that can be recombined without the loss of previously learned information.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 5 | Reinforcement Learning | cora | 898 | test |
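The elemental learner underneath all three abstracts (NQL decomposes it, the continuous extension generalizes it) is tabular Q-learning. A minimal corridor-world version with ε-greedy exploration; the environment and constants are illustrative:

```python
import random

def q_learning(n_states=6, gamma=0.9, alpha=0.5, eps=0.2, episodes=800, seed=0):
    """Tabular Q-learning on a 1-D corridor: action 0 moves left (floored at
    state 0), action 1 moves right; reward 1 only on entering the goal state."""
    rng = random.Random(seed)
    goal = n_states - 1
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)                # explore, or break ties
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1   # exploit
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            bootstrap = 0.0 if s2 == goal else gamma * max(Q[s2])
            Q[s][a] += alpha * (r + bootstrap - Q[s][a])
            s = s2
    return Q

Q = q_learning()
print(all(Q[s][1] > Q[s][0] for s in range(5)))  # greedy policy moves right everywhere
```

Nested Q-learning keeps one such table per sub-behavior and a higher-level table whose "actions" invoke the sub-behaviors, which is what makes the learned chunks transportable across tasks.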
1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in 'blind' signal processing. Please send comments to tony@salk.edu. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523.
1-hop neighbor's text information: Independent component analysis by general nonlinear hebbian-like learning rules. : A number of neural learning rules have been recently proposed for Independent Component Analysis (ICA). The rules are usually derived from information-theoretic criteria such as maximum entropy or minimum mutual information. In this paper, we show that in fact, ICA can be performed by very simple Hebbian or anti-Hebbian learning rules, which may have only weak relations to such information-theoretical quantities. Rather surprisingly, practically any non-linear function can be used in the learning rule, provided only that the sign of the Hebbian/anti-Hebbian term is chosen correctly. In addition to the Hebbian-like mechanism, the weight vector is here constrained to have unit norm, and the data is preprocessed by prewhitening, or sphering. These results imply that one can choose the non-linearity so as to optimize desired statistical or numerical criteria.
1-hop neighbor's text information: "Adaptive source separation without prewhitening," : Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Closed-form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach.
Target text information: Simple neuron models for independent component analysis. : Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 2,053 | val |
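The one-unit, kurtosis-based estimation described in the target abstract above can be illustrated with a short sketch. This is not the paper's algorithm: it uses a batch fixed-point update with the cubic nonlinearity as a stand-in for the online Hebbian-like rules, and the mixing matrix, sample sizes, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent sources: one sub-Gaussian (uniform), one super-Gaussian (Laplace).
n = 20000
s = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # illustrative mixing matrix
x = A @ s                                  # observed mixtures

# Sphere (whiten) the data so that E[z z^T] = I.
cov = np.cov(x)
d, E = np.linalg.eigh(cov)
z = E @ np.diag(d ** -0.5) @ E.T @ x

# One-unit fixed-point iteration with the cubic nonlinearity: a batch,
# kurtosis-based stand-in for the online Hebbian-like rules of the paper.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    w = (z * (w @ z) ** 3).mean(axis=1) - 3 * w   # E[z (w^T z)^3] - 3w
    w /= np.linalg.norm(w)

y = w @ z   # estimated independent component
```

After convergence, `y` should be strongly correlated (up to sign) with one of the two original sources.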
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feedforward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response.
1-hop neighbor's text information: "Self-organizing process based on lateral inhibition and synaptic resource redistribution," : Self-organizing feature maps are usually implemented by abstracting the low-level neural and parallel distributed processes. An external supervisor finds the unit whose weight vector is closest in Euclidean distance to the input vector and determines the neighborhood for weight adaptation. The weights are changed proportional to the Euclidean distance. In a biologically more plausible implementation, similarity is measured by a scalar product, neighborhood is selected through lateral inhibition and weights are changed by redistributing synaptic resources. The resulting self-organizing process is quite similar to the abstract case. However, the process is somewhat hampered by boundary effects and the parameters need to be carefully evolved. It is also necessary to add a redundant dimension to the input vectors.
Target text information: How lateral interaction develops in a self-organizing feature map. : A biologically motivated mechanism for self-organizing a neural network with modifiable lateral connections is presented. The weight modification rules are purely activity-dependent, unsupervised and local. The lateral interaction weights are initially random but develop into a "Mexican hat" shape around each neuron. At the same time, the external input weights self-organize to form a topological map of the input space. The algorithm demonstrates how self-organization can bootstrap itself using input information. Predictions of the algorithm agree very well with experimental observations on the development of lateral connections in cortical feature maps.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,160 | test |
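The self-organization described in the two records above can be sketched in its abstract form: a tiny one-dimensional feature map whose neighborhood function plays the role of the "Mexican hat" lateral interaction. The map size, learning-rate and neighborhood schedules below are illustrative assumptions, not taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny 1-D self-organizing map: 10 units, scalar inputs in [0, 1].
n_units = 10
w = rng.uniform(0, 1, n_units)

def neighborhood(winner, sigma):
    # Gaussian cooperation kernel: the abstract stand-in for lateral interaction.
    d = np.arange(n_units) - winner
    return np.exp(-d ** 2 / (2 * sigma ** 2))

T = 4000
for t in range(T):
    x = rng.uniform(0, 1)
    winner = np.argmin(np.abs(w - x))      # best-matching unit
    lr = 0.5 * (1 - t / T)                 # decaying learning rate
    sigma = 3.0 * (1 - t / T) + 0.5        # shrinking neighborhood
    w += lr * neighborhood(winner, sigma) * (x - w)

# After training, the weights form a topologically ordered map of [0, 1]:
# weight values vary (near-)monotonically along the unit index.
```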
1-hop neighbor's text information: D.B. Leake. Modeling Case-based Planning for Repairing Reasoning Failures. : One application of models of reasoning behavior is to allow a reasoner to introspectively detect and repair failures of its own reasoning process. We address the issues of the transferability of such models versus the specificity of the knowledge in them, the kinds of knowledge needed for self-modeling and how that knowledge is structured, and the evaluation of introspective reasoning systems. We present the ROBBIE system which implements a model of its planning processes to improve the planner in response to reasoning failures. We show how ROBBIE's hierarchical model balances model generality with access to implementation-specific details, and discuss the qualitative and quantitative measures we have used for evaluating its introspective component.
1-hop neighbor's text information: Representing self-knowledge for introspection about memory search. : This position paper sketches a framework for modeling introspective reasoning and discusses the relevance of that framework for modeling introspective reasoning about memory search. It argues that effective and flexible memory processing in rich memories should be built on five types of explicitly represented self-knowledge: knowledge about information needs, relationships between different types of information, expectations for the actual behavior of the information search process, desires for its ideal behavior, and representations of how those expectations and desires relate to its actual performance. This approach to modeling memory search is both an illustration of general principles for modeling introspective reasoning and a step towards addressing the problem of how a reasoner human or machinecan acquire knowledge about the properties of its own knowledge base.
Target text information: Abstract: We describe an ongoing project to develop an adaptive training system (ATS) that dynamically models a student's learning processes and can provide specialized tutoring adapted to a student's knowledge state and learning style. The student modeling component of the ATS, ML-Modeler, uses machine learning (ML) techniques to emulate the student's novice-to-expert transition. ML-Modeler infers which learning methods the student has used to reach the current knowledge state by comparing the student's solution trace to an expert solution and generating plausible hypotheses about what misconceptions and errors the student has made. A case-based approach is used to generate hypotheses through incorrectly applying analogy, overgeneralization, and overspecialization. The student and expert models use a network-based representation that includes abstract concepts and relationships as well as strategies for problem solving. Fuzzy methods are used to represent the uncertainty in the student model. This paper describes the design of the ATS and ML-Modeler, and gives a detailed example of how the system would model and tutor the student in a typical session. The domain we use for this example is high-school level chemistry.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 822 | val |
1-hop neighbor's text information: Predictions with confidence intervals (local error bars). : We present a new method for obtaining local error bars, i.e., estimates of the confidence in the predicted value that depend on the input. We approach this problem of nonlinear regression in a maximum likelihood framework. We demonstrate our technique first on computer generated data with locally varying, normally distributed target noise. We then apply it to the laser data from the Santa Fe Time Series Competition. Finally, we extend the technique to estimate error bars for iterated predictions, and apply it to the exact competition task where it gives the best performance to date.
1-hop neighbor's text information: Adaptive multiple penalization: a new regression algorithm, adaptive multiple penalization. Each parameter of the model is penalized individually. These penalties are tuned automatically from a single global regularization hyperparameter. This hyperparameter, which controls the complexity of the regressor, can be estimated by resampling techniques. We experimentally demonstrate the performance and stability of adaptive multiple penalization in the linear regression setting. We chose problems for which controlling complexity is particularly crucial, as it is in the more general setting of functional estimation. Comparisons with regularized least squares and with variable selection allow us to derive the conditions under which each penalization algorithm applies. In the simulations, we also test several resampling techniques, which are used to select the optimal complexity of the regression-function estimators. We compare the losses each technique incurs when it selects suboptimal models, and we also examine whether they can identify, among the competing penalization methods, the regression-function estimator that minimizes the generalization error.
1-hop neighbor's text information: Evaluating neural network predictors by bootstrapping. : We present a new method, inspired by the bootstrap, whose goal it is to determine the quality and reliability of a neural network predictor. Our method leads to more robust forecasting along with a large amount of statistical information on forecast performance that we exploit. We exhibit the method in the context of multi-variate time series prediction on financial data from the New York Stock Exchange. It turns out that the variation due to different resamplings (i.e., splits between training, cross-validation, and test sets) is significantly larger than the variation due to different network conditions (such as architecture and initial weights). Furthermore, this method allows us to forecast a probability distribution, as opposed to the traditional case of just a single value at each time step. We demonstrate this on a strictly held-out test set that includes the 1987 stock market crash. We also compare the performance of the class of neural networks to identically bootstrapped linear models.
Target text information: A comparison of some error estimates for neural network models. : We discuss a number of methods for estimating the standard error of predicted values from a multi-layer perceptron. These methods include the delta method based on the Hessian, bootstrap estimators, and the "sandwich" estimator. The methods are described and compared in a number of examples. We find that the bootstrap methods perform best, partly because they capture variability due to the choice of starting weights.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 2,248 | val |
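The bootstrap idea discussed in the target abstract above can be sketched as follows, with an ordinary polynomial fit standing in for the neural network so the example stays self-contained. The data, the number of resamples, and the model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression problem with heteroscedastic noise.
x = np.linspace(-1, 1, 200)
y = np.sin(2 * x) + rng.normal(0, 0.1 + 0.2 * np.abs(x), size=x.size)

# Bootstrap-pairs estimate of the standard error of the prediction.
# A cubic polynomial stands in for the multi-layer perceptron of the paper.
B = 200
x_grid = np.linspace(-1, 1, 50)
preds = np.empty((B, x_grid.size))
for b in range(B):
    idx = rng.integers(0, x.size, x.size)      # resample pairs with replacement
    coef = np.polyfit(x[idx], y[idx], deg=3)
    preds[b] = np.polyval(coef, x_grid)

point_est = preds.mean(axis=0)                 # bagged point prediction
std_err = preds.std(axis=0, ddof=1)            # bootstrap standard error
```

`std_err` gives an input-dependent error bar around `point_est`, the quantity the papers above compare against delta-method and sandwich estimators.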
1-hop neighbor's text information: Engineering Multiversion Neural-Net Systems, : In this paper we address the problem of constructing reliable neural-net implementations, given the assumption that any particular implementation will not be totally correct. The approach taken in this paper is to organize the inevitable errors so as to minimize their impact in the context of a multiversion system. | i.e. the system functionality is reproduced in multiple versions which together will constitute the neural-net system. The unique characteristics of neural computing are exploited in order to engineer reliable systems in the form of diverse, multiversion systems which are used together with a `decision strategy' (such as majority vote). Theoretical notions of "methodological diversity" contributing to the improvement of system performance are implemented and tested. An important aspect of the engineering of an optimal system is to overproduce the components and then choose an optimal subset. Three general techniques for choosing final system components are implemented and evaluated. Several different approaches to the effective engineering of complex multiversion systems designs are realized and evaluated to determine overall reliability as well as reliability of the overall system in comparison to the lesser reliability of component substructures.
Target text information: "Use of methodological diversity to improve neural network generalization," : Littlewood and Miller [1989] present a statistical framework for dealing with coincident failures in multiversion software systems. They develop a theoretical model that holds the promise of high system reliability through the use of multiple, diverse sets of alternative versions. In this paper we adapt their framework to investigate the feasibility of exploiting the diversity observable in multiple populations of neural networks developed using diverse methodologies. We evaluate the generalisation improvements achieved by a range of methodologically diverse network generation processes. We attempt to order the constituent methodological features with respect to their potential for use in the engineering of useful diversity. We also define and explore the use of relative measures of the diversity between version sets as a guide to the potential for exploiting inter-set diversity.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 2,270 | test |
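The reliability benefit that the two multiversion records above aim for can be quantified in the idealized case of independent failures. This Condorcet-style calculation is a textbook illustration, not the papers' model (which is precisely about coincident, non-independent failures, and about engineering diversity to avoid them).

```python
from math import comb

def majority_vote_reliability(p, n):
    """P(majority of n independent versions is correct), each correct w.p. p."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(k_min, n + 1))

# A majority vote over 9 independent versions, each correct 70% of the time,
# is right about 90% of the time.
single = 0.7
ensemble = majority_vote_reliability(single, 9)
```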
1-hop neighbor's text information: A model of similarity-based retrieval. : We present a model of similarity-based retrieval which attempts to capture three psychological phenomena: (1) people are extremely good at judging similarity and analogy when given items to compare. (2) Superficial remindings are much more frequent than structural remindings. (3) People sometimes experience and use purely structural analogical re-mindings. Our model, called MAC/FAC (for "many are called but few are chosen") consists of two stages. The first stage (MAC) uses a computationally cheap, non-structural matcher to filter candidates from a pool of memory items. That is, we redundantly encode structured representations as content vectors, whose dot product yields an estimate of how well the corresponding structural representations will match. The second stage (FAC) uses SME to compute a true structural match between the probe and output from the first stage. MAC/FAC has been fully implemented, and we show that it is capable of modeling patterns of access found in psychological data.
1-hop neighbor's text information: How to retrieve relevant information?. : The document presents an approach to judging relevance of retrieved information based on a novel approach to similarity assessment. Contrary to other systems, we define relevance measures (context in similarity) at query time. This is necessary since without a context in similarity one cannot guarantee that similar items will also be relevant.
1-hop neighbor's text information: Applying case-based reasoning to control in robotics. : The proposed architecture is experimentally evaluated on two real world domains and the results are compared to other machine learning algorithms applied to the same problem.
Target text information: Context-based similarity applied to retrieval of relevant cases. : Retrieving relevant cases is a crucial component of case-based reasoning systems. The task is to use a user-defined query to retrieve useful information, i.e., exact matches or partial matches which are close to the query-defined request according to certain measures. The difficulty stems from the fact that it may not be easy (or it may be even impossible) to specify query requests precisely and completely, resulting in a situation known as fuzzy querying. It is usually not a problem for small domains, but for large repositories which store various information (multifunctional information bases or federated databases), request specification becomes a bottleneck. Thus, a flexible retrieval algorithm is required, allowing for imprecise query specification and for changing the viewpoint. Efficient database techniques exist for locating exact matches. Finding relevant partial matches might be a problem. This document proposes a context-based similarity as a basis for flexible retrieval. Historical background on research in similarity assessment is presented and is used as a motivation for a formal definition of context-based similarity. We also describe a similarity-based retrieval system for multifunctional information bases.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 2 | Case Based | cora | 2,564 | test |
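The two-stage MAC/FAC retrieval described above (a cheap dot-product filter over content vectors, then an expensive matcher rescoring only the survivors) can be sketched with random vectors. `expensive_match` here is only a placeholder for a structural matcher such as SME, and all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# A case base of unit-norm content vectors.
n_cases, dim = 5000, 32
memory = rng.normal(size=(n_cases, dim))
memory /= np.linalg.norm(memory, axis=1, keepdims=True)

def expensive_match(a, b):
    # Stand-in for structural matching: here just a dot product,
    # but imagine a full structure-mapping comparison.
    return float(a @ b)

def mac_fac(probe, k=20):
    probe = probe / np.linalg.norm(probe)
    scores = memory @ probe                  # MAC: one matrix-vector product
    candidates = np.argsort(scores)[-k:]     # keep the k best cheap matches
    # FAC: run the expensive matcher only on the surviving candidates.
    return max(candidates, key=lambda i: expensive_match(memory[i], probe))

target = 42
probe = memory[target] + 0.05 * rng.normal(size=dim)   # noisy version of case 42
```

Calling `mac_fac(probe)` should recover case 42 while scoring only `k` cases with the expensive matcher instead of all 5000.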
1-hop neighbor's text information: Evolutionary training of clp-constrained neural networks. : The paper is concerned with the integration of constraint logic programming systems (CLP) with systems based on genetic algorithms (GA). The resulting framework is tailored for applications that require a first phase in which a number of constraints need to be generated, and a second phase in which an optimal solution satisfying these constraints is produced. The first phase is carried out by the CLP and the second one by the GA. We present a specific framework where ECLiPSe (ECRC Common Logic Programming System) and GENOCOP (GEnetic algorithm for Numerical Optimization for COnstrained Problems) are integrated in a framework called CoCo (COmputational intelligence plus COnstraint logic programming). The CoCo system is applied to the training problem for neural networks. We consider constrained networks, e.g. neural networks with shared weights, constraints on the weights, for example domain constraints for hardware implementation, etc. Then ECLiPSe is used to generate the chromosome representation together with other constraints which ensure, in most cases, that each network is specified by exactly one chromosome. Thus the problem becomes a constrained optimization problem, where the optimization criterion is to optimize the error of the network, and GENOCOP is used to find an optimal solution. Note: The work of the second author was partially supported by SION, a department of the NWO, the National Foundation for Scientific Research. This work has been carried out while the third author was visiting CWI, Amsterdam, and the fourth author was visiting Leiden University.
1-hop neighbor's text information: Forward-Tracking: A Technique for Searching Beyond Failure: In many applications, such as decision support, negotiation, planning, scheduling, etc., one needs to express requirements that can only be partially satisfied. In order to express such requirements, we propose a technique called forward-tracking. Intuitively, forward-tracking is a kind of dual of chronological back-tracking: if a program globally fails to find a solution, then a new execution is started from a program point and a state `forward' in the computation tree. This search technique is applied to constraint logic programming, obtaining a powerful extension that preserves all the useful properties of the original scheme. We report on the successful practical application of forward-tracking to the evolutionary training of (constrained) neural networks.
Target text information: Constraining of weights using regularities. : In this paper we study how global optimization methods (like genetic algorithms) can be used to train neural networks. We introduce the notion of regularity, for studying properties of the error function that expand the search space in an artificial way. Regularities are used to generate constraints on the weights of the network. In order to find a satisfiable set of constraints we use a constraint logic programming system. Then the training of the network becomes a constrained optimization problem. We also relate the notion of regularity to so-called network transformations.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 2,329 | test |
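A minimal sketch of the constrained-evolutionary-training idea from the records above: a genetic algorithm evolves the weights of a tiny network while a box constraint on every weight (standing in for a CLP-generated constraint store) is enforced by clipping. The architecture, population sizes, and rates are illustrative assumptions, not the CoCo/GENOCOP setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Evolve a 2-2-1 tanh network for XOR, keeping every weight in [-3, 3].
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([0, 1, 1, 0], float)
LO, HI = -3.0, 3.0
n_genes = 9          # 2*2 input weights + 2 hidden biases + 2 output weights + 1 bias

def forward(w, x):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]; W2 = w[6:8]; b2 = w[8]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - t) ** 2)      # negative mean squared error

pop = rng.uniform(LO, HI, size=(60, n_genes))
for gen in range(300):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[::-1][:20]]       # elitist truncation selection
    children = parents[rng.integers(0, 20, 40)].copy()
    children += rng.normal(0, 0.3, children.shape)  # Gaussian mutation
    np.clip(children, LO, HI, out=children)         # repair: enforce the constraint
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
```

Clipping is the simplest repair operator; GENOCOP-style systems instead use operators that keep offspring feasible by construction.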
1-hop neighbor's text information: Producing More Comprehensible Models While Retaining Their Performance. : Rissanen's Minimum Description Length (MDL) principle is adapted to handle continuous attributes in the Inductive Logic Programming setting. Application of the developed coding as a MDL pruning mechanism is devised. The behavior of the MDL pruning is tested in a synthetic domain with artificially added noise of different levels and in two real life problems | modelling of the surface roughness of a grinding workpiece and modelling of the mutagenicity of nitroaromatic compounds. Results indicate that MDL pruning is a successful parameter-free noise fighting tool in real-life domains since it acts as a safeguard against building too complex models while retaining the accuracy of the model.
1-hop neighbor's text information: First Order Regression: We present a new approach, called First Order Regression (FOR), to handling numerical information in Inductive Logic Programming (ILP). FOR is a combination of ILP and numerical regression. First-order logic descriptions are induced to carve out those subspaces that are amenable to numerical regression among real-valued variables. The program Fors is an implementation of this idea, where numerical regression is focused on a distinguished continuous argument of the target predicate. We show that this can be viewed as a generalisation of the usual ILP problem. Applications of Fors on several real-world data sets are described: the prediction of mutagenicity of chemicals, the modelling of liquid dynamics in a surge tank, predicting the roughness in steel grinding, finite element mesh design, and operator's skill reconstruction in electric discharge machining. A comparison of Fors' performance with previous results in these domains indicates that Fors is an effective tool for ILP applications that involve numerical data.
1-hop neighbor's text information: "Induction of Decision Trees," :
Target text information: First order regression: Application in real-world domains. : A first order regression algorithm capable of handling real-valued (continuous) variables is introduced and some of its applications are presented. Regressional learning assumes real-valued class and discrete or real-valued variables. The algorithm combines regressional learning with standard ILP concepts, such as first order concept description and background knowledge. A clause is generated by successively refining the initial clause by adding literals of the form A = v for the discrete attributes, A v and A v for the real-valued attributes, and background knowledge literals to the clause body. The algorithm employs a covering approach (beam search), a heuristic impurity function, and stopping criteria based on local improvement, minimum number of examples, maximum clause length, minimum local improvement, minimum description length, allowed error, and variable depth. An outline of the algorithm and the results of the system's application in some artificial and real-world domains are presented. The real-world domains comprise: modelling of the water behavior in a surge tank, modelling of the workpiece roughness in a steel grinding process and modelling of the operator's behavior during the process of electrical discharge machining. Special emphasis is given to the evaluation of obtained models by domain experts and their comments on the aspects of practical use of the induced knowledge. The results obtained during the knowledge acquisition process show several important guidelines for knowledge acquisition, concerning mainly the process of interaction with domain experts, exposing primarily the importance of comprehensibility of the induced knowledge.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 0 | Rule Learning | cora | 2,086 | test |
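The core move of first-order regression, carving out a subspace with a test and running ordinary regression inside it, can be shown in its propositional special case: search for the literal `A <= v` that minimizes the summed squared error of per-subspace linear fits. The data and the split grid below are illustrative, not from the papers.

```python
import numpy as np

rng = np.random.default_rng(5)

# Piecewise-linear target: y = 2x for x <= 5, y = 20 - x otherwise, plus noise.
x = rng.uniform(0, 10, 400)
y = np.where(x <= 5, 2 * x, 20 - x) + rng.normal(0, 0.2, x.size)

def sse_of_fit(xs, ys):
    """Sum of squared errors of an ordinary least-squares line on (xs, ys)."""
    if xs.size < 2:
        return 0.0
    coef = np.polyfit(xs, ys, 1)
    return float(np.sum((np.polyval(coef, xs) - ys) ** 2))

# Refinement step: try candidate literals "x <= v" and keep the best.
best_v, best_cost = None, np.inf
for v in np.quantile(x, np.linspace(0.05, 0.95, 50)):
    left, right = x <= v, x > v
    cost = sse_of_fit(x[left], y[left]) + sse_of_fit(x[right], y[right])
    if cost < best_cost:
        best_v, best_cost = v, cost
```

The chosen threshold should land near the true breakpoint at 5, and the split fit should beat a single global line.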
1-hop neighbor's text information: Evolving networks: Using the genetic algorithm with connectionist learning. :
1-hop neighbor's text information: Automatic design of cellular neural networks by means of genetic algorithms: finding a feature detector, : This paper aims to examine the use of genetic algorithms to optimize subsystems of cellular neural network architectures. The application at hand is character recognition: the aim is to evolve an optimal feature detector in order to aid a conventional classifier network to generalize across different fonts. To this end, a performance function and a genetic encoding for a feature detector are presented. An experiment is described where an optimal feature detector is indeed found by the genetic algorithm. We are interested in the application of cellular neural networks in computer vision. Genetic algorithms (GA's) [1-3] can serve to optimize the design of cellular neural networks. Although the design of the global architecture of the system could still be done by human insight, we propose that specific sub-modules of the system are best optimized using one or other optimization method. GAs are a good candidate to fulfill this optimization role, as they are well suited to problems where the objective function is a complex function of many parameters. The specific problem we want to investigate is one of character recognition. More specifically, we would like to use the GA to find optimal feature detectors to be used in the recognition of digits .
1-hop neighbor's text information: Efficient reinforcement learning through symbiotic evolution. : This article presents a new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and discourages convergence to suboptimal solutions. In the inverted pendulum problem, SANE formed effective networks 9 to 16 times faster than the Adaptive Heuristic Critic and 2 times faster than Q-learning and the GENITOR neuro-evolution approach without loss of generalization. Such efficient learning, combined with few domain assumptions, make SANE a promising approach to a broad range of reinforcement learning problems, including many real-world applications.
Target text information: [1] R.K. Belew, J. McInerney, and N. Schraudolph, Evolving networks: using the genetic algorithm with connectionist learning, in Artificial Life II, SFI Studies in the Science of Complexity, C.G. Langton, C. Taylor, J.D. Farmer, S. Rasmussen Eds., vol. 10, Addison-Wesley, 1991. [2] M. McInerney and A.P. Dhawan, Use of genetic algorithms with back propagation in training of feed-forward neural networks, in IEEE International Conference on Neural Networks, vol. 1, pp. 203-208, 1993. [3] F.Z. Brill, D.E. Brown, and W.N. Martin, Fast genetic selection of features for neural network classifiers, IEEE Transactions on Neural Networks, vol. 3, no. 2, pp. 324-328, 1992. [4] F. Dellaert and J. Vandewalle, Automatic design of cellular neural networks by means of genetic algorithms: finding a feature detector, in The Third IEEE International Workshop on Cellular Neural Networks and Their Applications, IEEE, New Jersey, pp. 189-194, 1994. [5] D.E. Moriarty and R. Miikkulainen, Efficient reinforcement learning through symbiotic evolution, Machine Learning, vol. 22, pp. 11-33, 1996. [6] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991. [7] D. Whitley, The GENITOR algorithm and selective pressure, in Proceedings of the Third International Conference on Genetic Algorithms, J.D. Schaffer Ed., Morgan Kaufmann, San Mateo, CA, 1989, pp. 116-121. [8] D. van Camp, T. Plate, and G.E. Hinton, The Xerion Neural Network Simulator and Documentation, Department of Computer Science, University of Toronto, Toronto, 1992.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 842 | test |
1-hop neighbor's text information: a multiple instruction stream computer. : This paper describes a single chip Multiple Instruction Stream Computer (MISC) capable of extracting instruction level parallelism from a broad spectrum of programs. The MISC architecture uses multiple asynchronous processing elements to separate a program into streams that can be executed in parallel, and integrates a conflict-free message passing system into the lowest level of the processor design to facilitate low latency intra-MISC communication. This approach allows for increased machine parallelism with minimal code expansion, and provides an alternative approach to single instruction stream multi-issue machines such as SuperScalar and VLIW.
1-hop neighbor's text information: Simultaneous Multithreading: A Platform for Next-Generation Processors. : A version of this paper will appear in ACM Transactions on Computer Systems, August 1997. Abstract: To achieve high performance, contemporary computer systems rely on two forms of parallelism: instruction-level parallelism (ILP) and thread-level parallelism (TLP). Wide-issue superscalar processors exploit ILP by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit TLP by executing different threads in parallel on different processors. Unfortunately, both parallel-processing styles statically partition processor resources, thus preventing them from adapting to dynamically-changing levels of ILP and TLP in a program. With insufficient TLP, processors in an MP will be idle; with insufficient ILP, multiple-issue hardware on a superscalar is wasted. This paper explores parallel processing on an alternative architecture, simultaneous multithreading (SMT), which allows multiple threads to compete for and share all of the processor's resources every cycle. The most compelling reason for running parallel applications on an SMT processor is its ability to use thread-level parallelism and instruction-level parallelism interchangeably. By permitting multiple threads to share the processor's functional units simultaneously, the processor can use both ILP and TLP to accommodate variations in parallelism.
When a program has only a single thread, all of the SMT processors resources can be dedicated to that thread; when more TLP exists, this parallelism can compensate for a lack of
1-hop neighbor's text information: Limits of Instruction-Level Parallelism, : This paper examines the limits to instruction level parallelism that can be found in programs, in particular the SPEC95 benchmark suite. Apart from using a more recent version of the SPEC benchmark suite, it differs from earlier studies in removing non-essential true dependencies that occur as a result of the compiler employing a stack for subroutine linkage. This is a subtle limitation to parallelism that is not readily evident as it appears as a true dependency on the stack pointer. Other methods can be used that do not employ a stack to remove this dependency. In this paper we show that its removal exposes far more parallelism than has been seen previously. We refer to this type of parallelism as "parallelism at a distance" because it requires impossibly large instruction windows for detection. We conclude with two observations: 1) that a single instruction window characteristic of superscalar machines is inadequate for detecting parallelism at a distance; and 2) in order to take advantage of this parallelism the compiler must be involved, or separate threads must be explicitly programmed.
Target text information: Techniques for extracting instruction level parallelism on MIMD architectures. : Extensive research has been done on extracting parallelism from single instruction stream processors. This paper presents some results of our investigation into ways to modify MIMD architectures to allow them to extract the instruction level parallelism achieved by current superscalar and VLIW machines. A new architecture is proposed which utilizes the advantages of a multiple instruction stream design while addressing some of the limitations that have prevented MIMD architectures from performing ILP operation. A new code scheduling mechanism is described to support this new architecture by partitioning instructions across multiple processing elements in order to exploit this level of parallelism.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 0 | Rule Learning | cora | 1,373 | train |
1-hop neighbor's text information: Simple selection of utile control rules in speedup learning. : Many recent approaches to avoiding the utility problem in speedup learning rely on sophisticated utility measures and significant numbers of training data to accurately estimate the utility of control knowledge. Empirical results presented here and elsewhere indicate that a simple selection strategy of retaining all control rules derived from a training problem explanation quickly defines an efficient set of control knowledge from few training problems. This simple selection strategy provides a low-cost alternative to example-intensive approaches for improving the speed of a problem solver.
1-hop neighbor's text information: A heuristic approach to the discovery of macro-operators. : The negative effect is naturally more significant in the more complex domain. The graph for the simple domain crosses the 0 line earlier than the complex domain. That means that learning starts to be useful with weight greater than 0.6 for the simple domain and 0.7 for the complex domain. As we relax the optimality requirement more significantly (with W = 0.8), macro usage in the more complex domain becomes more advantageous. The purpose of the research described in this paper is to identify the parameters that affect deductive learning and to perform experiments systematically in order to understand the nature of those effects. The goal of this paper is to demonstrate the methodology of performing a parametric experimental study of deductive learning. The examples here include the study of two parameters: the point on the satisficing-optimizing scale that is used during the search carried out during problem solving and during learning. We showed that A*, which looks for optimal solutions, cannot benefit from macro learning, but as the strategy comes closer to best-first (satisficing search), the utility of macros increases. We also demonstrated that deductive learners that learn offline by solving training problems are sensitive to the type of search used during learning. We showed that in general optimizing search is best for learning. It generates macros that increase the quality of solutions regardless of the search method used during problem solving. It also improves efficiency for problem solvers that require a high level of optimality. The only drawback in using optimizing search is the increase in learning resources spent. We are aware of the fact that the results described here are not very surprising. 
The goal of the parametric study is not necessarily to find exciting results, but to obtain results, sometimes even previously known, in a controlled experimental environment. The work described here is only part of our research plan. We are currently in the process of extensive experimentation with all the parameters described here and also with others. We also intend to test the validity of the conclusions reached during the study by repeating some of the tests in several of the commonly known search problems. We hope that such systematic experimentation will help the research community to better understand the process of deductive learning and will serve as a demonstration of the experimental methodology that should be used in machine learning research.
1-hop neighbor's text information: Learning approximate control rules of high utility. : One of the difficult problems in the area of explanation based learning is the utility problem; learning too many rules of low utility can lead to swamping, or degradation of performance. This paper introduces two new techniques for improving the utility of learned rules. The first technique is to combine EBL with inductive learning techniques to learn a better set of control rules; the second technique is to use these inductive techniques to learn approximate control rules. The two techniques are synthesized in an algorithm called approximating abductive explanation based learning (AxA-EBL). AxA-EBL is shown to improve substantially over standard EBL in several domains.
Target text information: Utilization Filtering: a method for reducing the inherent harmfulness of deductively learned knowledge, : This paper highlights a phenomenon that causes deductively learned knowledge to be harmful when used for problem solving. The problem occurs when deductive problem solvers encounter a failure branch of the search tree. The backtracking mechanism of such problem solvers will force the program to traverse the whole subtree thus visiting many nodes twice - once by using the deductively learned rule and once by using the rules that generated the learned rule in the first place. We suggest an approach called utilization filtering to solve that problem. Learners that use this approach submit to the problem solver a filter function together with the knowledge that was acquired. The function decides for each problem whether to use the learned knowledge and what part of it to use. We have tested the idea in the context of a lemma learning system, where the filter uses the probability of a subgoal failing to decide whether to turn lemma usage off. Experiments show an improvement of performance by a factor of 3. This paper is concerned with a particular type of harmful redundancy that occurs in deductive problem solvers that employ backtracking in their search procedure, and use deductively learned knowledge to accelerate the search. The problem is that in failure branches of the search tree, the backtracking mechanism of the problem solver forces exploration of the whole subtree. Thus, the search procedure will visit many states twice - once by using the deductively learned rule, and once by using the search path that produced the rule in the first place.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 5 | Reinforcement Learning | cora | 2,288 | val |
1-hop neighbor's text information: Asking questions to minimize errors. : A number of efficient learning algorithms achieve exact identification of an unknown function from some class using membership and equivalence queries. Using a standard transformation such algorithms can easily be converted to on-line learning algorithms that use membership queries. Under such a transformation the number of equivalence queries made by the query algorithm directly corresponds to the number of mistakes made by the on-line algorithm. In this paper we consider several of the natural classes known to be learnable in this setting, and investigate the minimum number of equivalence queries with accompanying counterexamples (or equivalently the minimum number of mistakes in the on-line model) that can be made by a learning algorithm that makes a polynomial number of membership queries and uses polynomial computation time. We are able both to reduce the number of equivalence queries used by the previous algorithms and often to prove matching lower bounds. As an example, consider the class of DNF formulas over n variables with at most k = O(log n) terms. Previously, the algorithm of Blum and Rudich [BR92] provided the best known upper bound of 2^{O(k)} log n for the minimum number of equivalence queries needed for exact identification. We greatly improve on this upper bound, showing that exactly k counterexamples are needed if the learner knows k a priori and exactly k + 1 counterexamples are needed if the learner does not know k a priori. This exactly matches known lower bounds [BC92]. For many of our results we obtain a complete characterization of the tradeoff between the number of membership and equivalence queries needed for exact identification. The classes we consider here are monotone DNF formulas, Horn sentences, O(log n)-term DNF formulas, read-k sat-j DNF formulas, read-once formulas over various bases, and deterministic finite automata.
1-hop neighbor's text information: Learning conjunctions of Horn clauses. :
1-hop neighbor's text information: Exact identification of circuits using fixed points of amplification functions. : In this paper we describe a new technique for exactly identifying certain classes of read-once Boolean formulas. The method is based on sampling the input-output behavior of the target formula on a probability distribution which is determined by the fixed point of the formula's amplification function (defined as the probability that a 1 is output by the formula when each input bit is 1 independently with probability p). By performing various statistical tests on easily sampled variants of the fixed-point distribution, we are able to efficiently infer all structural information about any logarithmic-depth formula (with high probability). We apply our results to prove the existence of short universal identification sequences for large classes of formulas. We also describe extensions of our algorithms to handle high rates of noise, and to learn formulas of unbounded depth in Valiant's model with respect to specific distributions. * Most of this research was carried out while all three authors were at M.I.T. Laboratory for Computer Science. Support was provided by NSF Grant CCR-88914428, ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, and a grant from the Siemens Corporation. An extended abstract of this paper appeared in the proceedings of the 31st Annual Symposium on Foundations of Computer Science. † Supported in part by a G.E. Foundation Junior Faculty Grant. ‡ Supported by AFOSR Grant AFOSR-89-0506.
Target text information: Learning arithmetic read-once formulas. : We present a membership query (i.e. interpolation) algorithm for exactly identifying the class of read-once formulas over the basis of boolean threshold functions. Using a generic transformation from [Angluin, Hellerstein, Karpinski 89], this gives an algorithm using membership and equivalence queries for exactly identifying the class of read-once formulas over the basis of boolean threshold functions and negation. We also present a series of generic transformations that can be used to convert an algorithm in one learning model into an algorithm in a different model.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 2,670 | test |
1-hop neighbor's text information: "Robot Juggling: An Implementation of Memory-Based Learning," : This paper explores issues involved in implementing robot learning for a challenging dynamic task, using a case study from robot juggling. We use a memory-based local modeling approach (locally weighted regression) to represent a learned model of the task to be performed. Statistical tests are given to examine the uncertainty of a model, to optimize its prediction quality, and to deal with noisy and corrupted data. We develop an exploration algorithm that explicitly deals with prediction accuracy requirements during exploration. Using all these ingredients in combination with methods from optimal control, our robot achieves fast real-time learning of the task within 40 to 100 trials. * Address of both authors: Massachusetts Institute of Technology, The Artificial Intelligence Laboratory & The Department of Brain and Cognitive Sciences, 545 Technology Square, Cambridge, MA 02139, USA. Email: ss-chaal@ai.mit.edu, cga@ai.mit.edu. Support was provided by the Air Force Office of Scientific Research and by Siemens Corporation. Support for the first author was provided by the German Scholarship Foundation and the Alexander von Humboldt Foundation. Support for the second author was provided by a National Science Foundation Presidential Young Investigator Award. We thank Gideon Stein for implementing the first version of LWR on the i860 microprocessor, and Gerrie van Zyl for building the devil stick robot and implementing the first version of devil stick learning.
1-hop neighbor's text information: On Reasoning from Data:
1-hop neighbor's text information: Issues in using function approximation for reinforcement learning. : Reinforcement learning techniques address the problem of learning to select actions in unknown, dynamic environments. It is widely acknowledged that to be of use in complex domains, reinforcement learning techniques must be combined with generalizing function approximation methods such as artificial neural networks. Little, however, is understood about the theoretical properties of such combinations, and many researchers have encountered failures in practice. In this paper we identify a prime source of such failures, namely, a systematic overestimation of utility values. Using Watkins' Q-Learning [18] as an example, we give a theoretical account of the phenomenon, deriving conditions under which one may expect it to cause learning to fail. Employing some of the most popular function approximators, we present experimental results which support the theoretical findings.
Target text information: Memory-based learning for control. : Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 5 | Reinforcement Learning | cora | 1,526 | test |
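The locally weighted (lazy) learning surveyed in the record above stores all experiences and fits a local model only at query time. A minimal one-dimensional sketch of locally weighted regression with a Gaussian kernel (the function names and the bandwidth choice are illustrative assumptions, not code from the cited papers):

```python
import numpy as np

def lwr_predict(X, y, x_query, tau=0.5):
    """Locally weighted regression: fit a weighted linear model at x_query.

    X: (n,) stored training inputs, y: (n,) targets, tau: kernel bandwidth.
    """
    # Gaussian kernel weights centered on the query point
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))
    # Design matrix with an intercept column
    A = np.stack([np.ones_like(X), X], axis=1)
    W = np.diag(w)
    # Weighted normal equations: (A^T W A) beta = A^T W y
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0] + beta[1] * x_query

# Memorize samples of y = 2x, then answer a query locally
X = np.linspace(0.0, 1.0, 50)
y = 2 * X
pred = lwr_predict(X, y, 0.5)
```

Because the fit is recomputed per query against the stored data, incorporating new experience is just appending to X and y, which is the sense in which such learners "remember all previous experiences".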
1-hop neighbor's text information: Nonsmooth dynamic simulation with linear programming based methods. : Process simulation has emerged as a valuable tool for process design, analysis and operation. In this work, we extend the capabilities of iterated linear programming (LP) for dealing with problems encountered in dynamic nonsmooth process simulation. A previously developed LP method is refined with the addition of a new descent strategy which combines line search with a trust region approach. This adds more stability and efficiency to the method. The LP method has the advantage of naturally dealing with profile bounds as well. This is demonstrated to avoid the computational difficulties which arise from the iterates going into physically unrealistic regions. A new method for the treatment of discontinuities occurring in dynamic simulation problems is also presented in this paper. The method ensures that any event which has occurred within the time interval in consideration is detected and if more than one event occurs, the detected one is indeed the earliest one. A specific class of implicitly discontinuous process simulation problems, phase equilibrium calculations, is also looked at. A new formulation is introduced to solve multiphase problems. * To whom all correspondence should be addressed. email: biegler@cmu.edu
Target text information: A successive linear programming approach to consistent initialization and reinitialization after discontinuities of differential algebraic equations. : Determination of consistent initial conditions is an important aspect of the solution of differential algebraic equations (DAEs). Specification of inconsistent initial conditions, even if they are slightly inconsistent, often leads to a failure in the initialization problem. In this paper, we present a Successive Linear Programming (SLP) approach for the solution of the DAE derivative array equations for the initialization problem. The SLP formulation handles roundoff errors and inconsistent user specifications among others and allows for reliable convergence strategies that incorporate variable bounds and trust region concepts. A new consistent set of initial conditions is obtained by minimizing the deviation of the variable values from the specified ones. For problems with discontinuities caused by a step change in the input functions, a new criterion is presented for identifying the subset of variables which are continuous across the discontinuity. The LP formulation is then applied to determine a consistent set of initial conditions for further solution of the problem in the domain after the discontinuity. Numerous example problems are solved to illustrate these concepts.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,605 | test |
1-hop neighbor's text information: Improved uniform test error bounds, : We derive distribution-free uniform test error bounds that improve on VC-type bounds for validation. We show how to use knowledge of test inputs to improve the bounds. The bounds are sharp, but they require intense computation. We introduce a method to trade sharpness for speed of computation. Also, we compute the bounds for several test cases.
1-hop neighbor's text information: Improved Hoeffding-Style Performance Guarantees for Accurate Classifiers: We extend Hoeffding bounds to develop superior probabilistic performance guarantees for accurate classifiers. The original Hoeffding bounds on classifier accuracy depend on the accuracy itself as a parameter. Since the accuracy is not known a priori, the parameter value that gives the weakest bounds is used. We present a method that loosely bounds the accuracy using the old method and uses the loose bound as an improved parameter value for tighter bounds. We show how to use the bounds in practice, and we generalize the bounds for individual classifiers to form uniform bounds over multiple classifiers.
1-hop neighbor's text information: Similar classifiers and VC error bounds. : We improve error bounds based on VC analysis for classes with sets of similar classifiers. We apply the new error bounds to separating planes and artificial neural networks. Key words: machine learning, learning theory, generalization, Vapnik-Chervonenkis, separating planes, neural networks.
Target text information: Partition-based uniform error bounds, : This paper develops probabilistic bounds on out-of-sample error rates for several classifiers using a single set of in-sample data. The bounds are based on probabilities over partitions of the union of in-sample and out-of-sample data into in-sample and out-of-sample data sets. The bounds apply when in-sample and out-of-sample data are drawn from the same distribution. Partition-based bounds are stronger than VC-type bounds, but they require more computation.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 2,519 | train |
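The records above contrast Hoeffding-style and VC/partition-based guarantees. As the baseline they all improve on, the plain two-sided Hoeffding bound on out-of-sample error can be computed directly (a generic sketch of the standard inequality, not the papers' sharper bounds; numbers are illustrative):

```python
import math

def hoeffding_radius(n, delta):
    """Two-sided Hoeffding bound: with probability >= 1 - delta, the true
    error rate lies within this radius of the error observed on n i.i.d.
    held-out samples."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Observed 8% error on 1000 held-out samples, 95% confidence
test_error = 0.08
radius = hoeffding_radius(1000, 0.05)
upper_bound = test_error + radius
```

The radius shrinks as O(1/sqrt(n)) and is distribution-free, which is exactly why the sharper classifier-specific and partition-based bounds in these papers are worth the extra computation.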
1-hop neighbor's text information: On the computational utility of consciousness. : We propose a computational framework for understanding and modeling human consciousness. This framework integrates many existing theoretical perspectives, yet is sufficiently concrete to allow simulation experiments. We do not attempt to explain qualia (subjective experience), but instead ask what differences exist within the cognitive information processing system when a person is conscious of mentally-represented information versus when that information is unconscious. The central idea we explore is that the contents of consciousness correspond to temporally persistent states in a network of computational modules. Three simulations are described illustrating that the behavior of persistent states in the models corresponds roughly to the behavior of conscious states people experience when performing similar tasks. Our simulations show that periodic settling to persistent (i.e., conscious) states improves performance by cleaning up inaccuracies and noise, forcing decisions, and helping keep the system on track toward a solution.
Target text information: In Search Of Articulated Attractors: Recurrent attractor networks offer many advantages over feed-forward networks for the modeling of psychological phenomena. Their dynamic nature allows them to capture the time course of cognitive processing, and their learned weights may often be easily interpreted as soft constraints between representational components. Perhaps the most significant feature of such networks, however, is their ability to facilitate generalization by enforcing well-formedness constraints on intermediate and output representations. Attractor networks which learn the systematic regularities of well-formed representations by exposure to a small number of examples are said to possess articulated attractors. This paper investigates the conditions under which articulated attractors arise in recurrent networks trained using variants of backpropagation. The results of computational experiments demonstrate that such structured attractors can spontaneously appear in an emergence of systematicity, if an appropriate error signal is presented directly to the recurrent processing elements. We show, however, that distal error signals, backpropagated through intervening weights, pose serious problems for networks of this kind. We present simulation results, discuss the reasons for this difficulty, and suggest some directions for future attempts to surmount it.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 276 | test |
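The "temporally persistent states" and attractor settling described in these records can be illustrated with a minimal Hopfield-style network: Hebbian storage of one pattern, then asynchronous updates settle a corrupted input back onto the stored attractor (a toy sketch only; the target paper studies recurrent networks trained by backpropagation, not Hebbian storage):

```python
def hebbian_weights(patterns):
    """Hopfield-style Hebbian weights: W[i][j] = sum over patterns of s_i * s_j."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def settle(W, state, n_sweeps=10):
    """Asynchronous +/-1 updates until the network reaches a persistent
    (attractor) state; a few sweeps suffice for this toy example."""
    state = list(state)
    n = len(state)
    for _ in range(n_sweeps):
        for i in range(n):
            h = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
W = hebbian_weights([pattern])
corrupted = pattern[:]
corrupted[0] = -1  # flip one bit of the stored pattern
recovered = settle(W, corrupted)
```

Settling to the stored pattern is the "cleanup" role of persistent states mentioned in the first record: inaccuracies in the input are removed as the state falls into the attractor.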
1-hop neighbor's text information: Supervised and unsupervised discretization of continuous features. : Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify defining characteristics of the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised discretization method, to entropy-based and purity-based methods, which are supervised algorithms. We found that the performance of the Naive-Bayes algorithm significantly improved when features were discretized using an entropy-based method. In fact, over the 16 tested datasets, the discretized version of Naive-Bayes slightly outperformed C4.5 on average. We also show that in some cases, the performance of the C4.5 induction algorithm significantly improved if features were discretized in advance; in our experiments, the performance never significantly degraded, an interesting phenomenon considering the fact that C4.5 is capable of locally discretizing features.
1-hop neighbor's text information: Beyond independence: Conditions for the optimality of the simple bayesian classifier. : The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed so far. In this paper we show that the SBC does not in fact assume attribute independence, and can be optimal even when this assumption is violated by a wide margin. The key to this finding lies in the distinction between classification and probability estimation: correct classification can be achieved even when the probability estimates used contain large errors. We show that the previously-assumed region of optimality of the SBC is a second-order infinitesimal fraction of the actual one. This is followed by the derivation of several necessary and several sufficient conditions for the optimality of the SBC. For example, the SBC is optimal for learning arbitrary conjunctions and disjunctions, even though they violate the independence assumption. The paper also reports empirical evidence of the SBC's competitive performance in domains containing substantial degrees of attribute dependence.
Target text information: NAIVE BAYESIAN LEARNING Adapted from:
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 661 | test |
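The simple Bayesian classifier discussed in these records can be sketched in a few lines for binary features, with Laplace smoothing (an illustrative toy implementation, not the cited papers' experimental setup; all names are assumptions):

```python
import math

def train_nb(X, y):
    """Naive Bayes with Laplace smoothing for binary features and labels.

    Returns class priors and per-class P(feature_j = 1 | class)."""
    classes = sorted(set(y))
    n_features = len(X[0])
    prior, p1 = {}, {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        prior[c] = len(rows) / len(y)
        # Laplace smoothing: (count + 1) / (n_c + 2) keeps estimates off 0 and 1
        p1[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(n_features)]
    return classes, prior, p1

def predict_nb(model, x):
    """Pick the class maximizing log prior plus per-feature log likelihoods."""
    classes, prior, p1 = model
    def score(c):
        s = math.log(prior[c])
        for j, xj in enumerate(x):
            s += math.log(p1[c][j] if xj else 1 - p1[c][j])
        return s
    return max(classes, key=score)

# Toy data: the class equals the first feature
X = [(1, 0), (1, 1), (0, 0), (0, 1)]
y = [1, 1, 0, 0]
model = train_nb(X, y)
label = predict_nb(model, (1, 0))
```

Note that only the argmax matters for classification, which is the distinction the second record draws between classification and probability estimation: the probabilities can be badly calibrated while the predicted class stays correct.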
1-hop neighbor's text information: An Adverse Interaction between the Crossover Operator and a Restriction on Tree Depth of Crossover: The Crossover operator is common to most implementations of Genetic Programming (GP). Another, usually unavoidable, factor is some form of restriction on the size of trees in the GP population. This paper concentrates on the interaction between the Crossover operator and a restriction on tree depth demonstrated by the MAX problem, which involves returning the largest possible value for given function and terminal sets.
1-hop neighbor's text information: The MAX Problem for Genetic Programming Highlighting an Adverse Interaction between the Crossover Operator and: The Crossover operator is common to most implementations of Genetic Programming (GP). Another, usually unavoidable, factor is some form of restriction on the size of trees in the GP population. This paper concentrates on the interaction between the Crossover operator and a restriction on tree depth demonstrated by the MAX problem, which involves returning the largest possible value for given function and terminal sets. Some characteristics and inadequacies of Crossover in `normal' use are highlighted and discussed. Subtree discovery and movement takes place mostly near the leaf nodes, with nodes near the root left untouched. Diversity drops quickly to zero near the root node in the tree population. GP is then unable to create `fitter' trees via the crossover operator, leaving a Mutation operator as the only common, but ineffective, route to discovery of `fitter' trees.
Target text information: AN ANALYSIS OF HIERARCHICAL GENETIC PROGRAMMING. : Hierarchical genetic programming (HGP) approaches rely on the discovery, modification, and use of new functions to accelerate evolution. This paper provides a qualitative explanation of the improved behavior of HGP, based on an analysis of the evolution process from the dual perspective of diversity and causality. From a static point of view, the use of an HGP approach enables the manipulation of a population of higher diversity programs. Higher diversity increases the exploratory ability of the genetic search process, as demonstrated by theoretical and experimental fitness distributions and expanded structural complexity of individuals. From a dynamic point of view, this report analyzes the causality of the crossover operator. Causality relates changes in the structure of an object with the effect of such changes, i.e. changes in the properties or behavior of the object. The analyses of crossover causality suggests that HGP discovers and exploits useful structures in a bottom-up, hierarchical manner. Diversity and causality are complementary, affecting exploration and exploitation in genetic search. Unlike other machine learning techniques that need extra machinery to control the tradeoff between them, HGP automatically trades off exploration and exploitation.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 2,095 | test |
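The adverse interaction described in the MAX-problem abstract above, subtree crossover operating under a tree-depth restriction, can be sketched minimally. This is an illustrative toy, not taken from the cited papers: the nested-tuple tree encoding, the retry policy, and all function names are assumptions.

```python
import random

# Trees are nested tuples: ('+', left, right) for internal nodes, plain
# values for terminals. Illustrative encoding only.

def depth(tree):
    if not isinstance(tree, tuple):
        return 1
    return 1 + max(depth(c) for c in tree[1:])

def subtrees(tree, path=()):
    """Yield (path, subtree) pairs; a path is a sequence of child-slot indices."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    """Return a copy of tree with the subtree at path replaced by new."""
    if not path:
        return new
    children = list(tree)
    children[path[0]] = replace(children[path[0]], path[1:], new)
    return tuple(children)

def crossover(p1, p2, max_depth=6, rng=random):
    """Graft a random subtree of p2 into a random site of p1; retry when
    the offspring breaks the depth cap, falling back to the parent."""
    for _ in range(50):
        path, _ = rng.choice(list(subtrees(p1)))
        _, donor = rng.choice(list(subtrees(p2)))
        child = replace(p1, path, donor)
        if depth(child) <= max_depth:
            return child
    return p1  # a common (assumed) fallback when no legal offspring is found
```

Because deep grafts near the leaves are rejected while shallow ones succeed, repeated application tends to leave material near the root untouched, which is the effect the abstract highlights.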
1-hop neighbor's text information: Self-targeting candidates for Hastings-Metropolis algorithms. : The Metropolis-Hastings algorithm for estimating a distribution π is based on choosing a candidate Markov chain and then accepting or rejecting moves of the candidate to produce a chain known to have π as the invariant measure. The traditional methods use candidates essentially unconnected to π. Based on diffusions for which π is invariant, we develop for one-dimensional distributions a class of candidate distributions that "self-target" towards the high density areas of π. These produce Metropolis-Hastings algorithms with convergence rates that appear to be considerably better than those known for the traditional candidate choices, such as random walk. In particular, for wide classes of π these choices may effectively help reduce the "burn-in" problem. We illustrate this behaviour for examples with exponential and polynomial tails, and for a logistic regression model using a Gibbs sampling algorithm.
1-hop neighbor's text information: Exponential convergence of Langevin diffusions and their discrete approximations. : In this paper we consider a continuous time method of approximating a given distribution π using the Langevin diffusion dL_t = dW_t + (1/2) ∇log π(L_t) dt. We find conditions under which this diffusion converges exponentially quickly to π or does not: in one dimension, these are essentially that for distributions with exponential tails of the form π(x) ∝ exp(−γ|x|^β), β > 0, exponential convergence occurs if and only if β ≥ 1. We then consider conditions under which the discrete approximations to the diffusion converge. We first show that even when the diffusion itself converges, naive discretisations need not do so. We then consider a "Metropolis-adjusted" version of the algorithm, and find conditions under which this also converges at an exponential rate: perhaps surprisingly, even the Metropolised version need not converge exponentially fast even if the diffusion does. We briefly discuss a truncated form of the algorithm which, in practice, should avoid the difficulties of the other forms.
1-hop neighbor's text information: Rates of convergence of the Hastings and Metropolis algorithms. : We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution π. In the independence case (in R^k) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of π; in the symmetric case (in R only) we show geometric convergence essentially occurs if and only if π has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective.
Target text information: Geometric and subgeometric convergence of diffusions with given stationary distributions. : We describe algorithms for estimating a given measure π known up to a constant of proportionality, based on a large class of diffusions (extending the Langevin model) for which π is invariant. We show that under weak conditions one can choose from this class in such a way that the diffusions converge at exponential rate to π, and one can even ensure that convergence is independent of the starting point of the algorithm. When convergence is less than exponential we show that it is often polynomial at known rates. We then consider methods of discretizing the diffusion in time, and find methods which inherit the convergence rates of the continuous time process. These contrast with the behaviour of the naive or Euler discretization, which can behave badly even in simple cases.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 2,297 | test |
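The Metropolis-adjusted Langevin idea running through this block, an Euler discretisation of the diffusion with π invariant, corrected by an accept/reject step so that π stays the stationary distribution, can be sketched in one dimension. This is a minimal sketch with illustrative names, not the cited authors' implementations; the step size and test target are assumptions.

```python
import math
import random

def mala_step(x, log_pi, grad_log_pi, h, rng=random):
    """One Metropolis-adjusted Langevin (MALA) step targeting pi, known
    only up to a constant. The proposal is the Euler discretisation of
    dL_t = dW_t + (1/2) grad log pi(L_t) dt with step h."""
    y = x + 0.5 * h * grad_log_pi(x) + math.sqrt(h) * rng.gauss(0.0, 1.0)

    def log_q(b, a):
        # Log proposal density of b given a, up to a constant shared by
        # both directions (same h), so normalisers cancel in the ratio.
        mean = a + 0.5 * h * grad_log_pi(a)
        return -((b - mean) ** 2) / (2 * h)

    # Hastings ratio: unadjusted Euler steps would not leave pi invariant.
    log_alpha = log_pi(y) - log_pi(x) + log_q(x, y) - log_q(y, x)
    if math.log(rng.random()) < log_alpha:
        return y
    return x
```

Run against a standard normal (log π(x) = −x²/2, ∇log π(x) = −x), the chain's sample mean and variance settle near 0 and 1, which is the invariance the Metropolis correction buys over the naive discretisation.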
1-hop neighbor's text information: Self-organized formation of topologically correct feature maps. : [2] D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning Internal Representations by Error Propagation", in D. E. Rumelhart and J. L. McClelland (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Vol. 1), MIT Press (1986).
1-hop neighbor's text information: A Vector Microprocessor System. : We report on our development of a high-performance system for neural network and other signal processing applications. We have designed and implemented a vector microprocessor and packaged it as an attached processor for a conventional workstation. We present performance comparisons with commercial workstations on neural network backpropagation training. The SPERT-II system demonstrates significant speedups over extensively hand-optimized code running on the workstations.
Target text information: A fast Kohonen net implementation for spert-ii. : We present an implementation of Kohonen Self-Organizing Feature Maps for the Spert-II vector microprocessor system. The implementation supports arbitrary neural map topologies and arbitrary neighborhood functions. For small networks, as used in real-world tasks, a single Spert-II board is measured to run Kohonen net classification at up to 208 million connections per second (MCPS). On a speech coding benchmark task, Spert-II performs on-line Kohonen net training at over 100 million connection updates per second (MCUPS). This represents almost a factor of 10 improvement compared to previously reported implementations. The asymptotic peak speed of the system is 213 MCPS and 213 MCUPS.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 2,656 | test |
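The Kohonen self-organizing feature map that the SPERT-II papers above accelerate can be sketched in plain scalar Python; the vectorized hardware implementations are of course far faster. The decay schedules, parameter values, and function names here are assumptions, not details of the cited systems.

```python
import math
import random

def train_som(data, n_units=10, epochs=20, lr0=0.5, sigma0=3.0, rng=random):
    """Train a 1-D Kohonen map on a list of float vectors.

    Classic rule: find the best-matching unit (BMU) for each input, then
    pull the BMU and its map-topology neighbours toward the input, with a
    learning rate and neighbourhood radius that decay over training."""
    dim = len(data[0])
    w = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_units)]
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            lr = lr0 * (1 - t / n_steps)
            sigma = max(0.5, sigma0 * (1 - t / n_steps))
            bmu = min(range(n_units),
                      key=lambda j: sum((x[d] - w[j][d]) ** 2 for d in range(dim)))
            for j in range(n_units):
                h = math.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
                for d in range(dim):
                    w[j][d] += lr * h * (x[d] - w[j][d])
            t += 1
    return w
```

Classification (the MCPS figure in the abstract) is just the BMU search; training (the MCUPS figure) is the BMU search plus the neighbourhood update, which is why the two rates are reported separately.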
1-hop neighbor's text information: Mining and Model Simplicity: A Case Study in Diagnosis: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD), 1996. The official version of this paper has been published by the American Association for Artificial Intelligence (http://www.aaai.org). © 1996, American Association for Artificial Intelligence. All rights reserved. Abstract We describe the results of performing data mining on a challenging medical diagnosis domain, acute abdominal pain. This domain is well known to be difficult, yielding little more than 60% predictive accuracy for most human and machine diagnosticians. Moreover, many researchers argue that one of the simplest approaches, the naive Bayesian classifier, is optimal. By comparing the performance of the naive Bayesian classifier to its more general cousin, the Bayesian network classifier, and to selective Bayesian classifiers with just 10% of the total attributes, we show that the simplest models perform at least as well as the more complex models. We argue that simple models like the selective naive Bayesian classifier will perform as well as more complicated models for similarly complex domains with relatively small data sets, thereby calling into question the extra expense necessary to induce more complex models.
1-hop neighbor's text information: Construction of Bayesian Network Structures from Data: a Brief Survey and an Efficient Algorithm. : Previous algorithms for the recovery of Bayesian belief network structures from data have been either highly dependent on conditional independence (CI) tests, or have required an ordering on the nodes to be supplied by the user. We present an algorithm that integrates these two approaches - CI tests are used to generate an ordering on the nodes from the database which is then used to recover the underlying Bayesian network structure using a non-CI-test-based method. Results of the evaluation of the algorithm on a number of databases (e.g. ALARM, LED and SOYBEAN) are presented. We also discuss some algorithm performance issues and open problems.
1-hop neighbor's text information: Toward optimal feature selection. : In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with large numbers of features.
Target text information: Efficient learning of selective Bayesian network classifiers. : In this paper, we present a computationally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic metrics that are extensions of metrics used extensively in decision tree learning, namely Quinlan's gain and gain ratio metrics and Mantaras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 6 | Probabilistic Methods | cora | 1,764 | val |
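The attribute-ranking step shared by the selective classifiers in this block, scoring each attribute with an information-theoretic metric such as Quinlan's gain and keeping only the best few, can be sketched as follows. The function names and the discrete data layout are illustrative assumptions; the cited papers also use gain ratio and Mantaras's distance, which this sketch omits.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Class entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(xs, ys, attr):
    """Quinlan-style information gain of discrete attribute `attr`:
    class entropy minus the expected entropy after partitioning on it."""
    base = entropy(ys)
    by_val = defaultdict(list)
    for x, y in zip(xs, ys):
        by_val[x[attr]].append(y)
    rem = sum(len(sub) / len(ys) * entropy(sub) for sub in by_val.values())
    return base - rem

def select_attributes(xs, ys, k):
    """Keep the k highest-gain attributes; a naive Bayes classifier would
    then be learned over this subset alone."""
    attrs = range(len(xs[0]))
    return sorted(attrs, key=lambda a: info_gain(xs, ys, a), reverse=True)[:k]
```

Because each attribute is scored independently in one pass, this subset-selection phase is polynomial in the number of attributes, which is the complexity advantage the target abstract claims over K2-AS's search.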
1-hop neighbor's text information: Fitness causes bloat: Mutation. : In many cases program lengths increase (known as "bloat", "fluff" and increasing "structural complexity") during artificial evolution. We show bloat is not specific to genetic programming and suggest it is inherent in search techniques with discrete variable length representations using simple static evaluation functions. We investigate the bloating characteristics of three non-population and one population based search techniques using a novel mutation operator. An artificial ant following the Santa Fe trail problem is solved by simulated annealing, hill climbing, strict hill climbing and population based search using two variants of the new subtree-based mutation operator. As predicted, bloat is observed when using unbiased mutation and is absent in simulated annealing and both hill climbers when using the length-neutral mutation; however, bloat occurs with both mutations when using a population. We conclude that there are two causes of bloat.
1-hop neighbor's text information: Data Structures and Genetic Programming, : It is established good software engineering practice to ensure that programs use memory via abstract data structures such as stacks, queues and lists. These provide an interface between the program and memory, freeing the program of memory management details which are left to the data structures to implement. The main result presented herein is that GP can automatically generate stacks and queues. Typically abstract data structures support multiple operations, such as put and get. We show that GP can simultaneously evolve all the operations of a data structure by implementing each such operation with its own independent program tree. That is, the chromosome consists of a fixed number of independent program trees. Moreover, crossover only mixes genetic material of program trees that implement the same operation. Program trees interact with each other only via shared memory and shared "Automatically Defined Functions" (ADFs).
1-hop neighbor's text information: An Analysis of Genetic Programming, : In this paper we carefully formulate a Schema Theorem for Genetic Programming (GP) using a schema definition that accounts for the variable length and the non-homologous nature of GP's representation. In a manner similar to early GA research, we use interpretations of our GP Schema Theorem to obtain a GP Building Block definition and to state a "classical" Building Block Hypothesis (BBH): that GP searches by hierarchically combining building blocks. We report that this approach is not convincing for several reasons: it is difficult to find support for the promotion and combination of building blocks solely by rigorous interpretation of a GP Schema Theorem; even if there were such support for a BBH, it is empirically questionable whether building blocks always exist because partial solutions of consistently above average fitness and resilience to disruption are not assured; also, a BBH constitutes a narrow and imprecise account of GP search behavior.
Target text information: Why ants are hard. : The problem of programming an artificial ant to follow the Santa Fe trail is used as an example program search space. Previously reported genetic programming, simulated annealing and hill climbing performance is shown not to be much better than random search on the Ant problem. Analysis of the program search space in terms of fixed length schema suggests it is highly deceptive and that for the simplest solutions large building blocks must be assembled before they have above average fitness. In some cases we show solutions cannot be assembled using a fixed representation from small building blocks of above average fitness. This suggests the Ant problem is difficult for Genetic Algorithms.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 3 | Genetic Algorithms | cora | 2,117 | test |
1-hop neighbor's text information: Generalization by Controlled Expansion of Examples. : SG (Specific to General) is a network for supervised inductive learning from examples that uses ideas from neural networks and symbolic inductive learning to gain benefits of both methods. The network is built of many simple nodes that learn important features in the input space and then monitor the ability of the features to predict output values. The network avoids the exponential nature of the number of features by creating specific features for each example and then expanding those features; making them more general. Expansion of a feature terminates when it encounters another feature with contradicting outputs. Empirical evaluation of the model on real-world data has shown that the network provides good generalization performance. Convergence is accomplished within a small number of training passes. The network provides these benefits while automatically allocating and deleting nodes and without requiring user adjustment of any parameters. The network learns incrementally and operates in a parallel fashion. This paper describes a network architecture for supervised learning that combines techniques used in neural networks [1,7,8] with symbolic machine learning [3,4,6] to gain advantages of both approaches. In supervised learning the network is given a training set containing examples. Each example gives an input pattern along with the corresponding output that the network should produce when presented with the input. The task of the network is not only to converge to a representation that contains the information given by the training set, but to generalize that information so that the network will respond well to inputs that it has not been trained on. One approach to generalization is to look for important features in the input space. A feature is some subset of network inputs along with their associated values. 
A feature is matched when the values on the network inputs that are part of the feature are equal to the values for those inputs as given in the feature. Inputs that are not part of the feature can be any value. A feature that predicts an output with high probability is an important feature. The number of inputs contained in a feature is the order of the feature and determines the generality of the feature. A feature with few inputs is a general feature, while a feature with many inputs is a specific feature. It is impractical to monitor all possible input features because the number of features is exponential in the number of inputs. This paper proposes SG (Specific to General), a network that creates specific input features and then generalizes those features. One way SG generalizes is by combining similar specific features. If two features are similar, they are close to each other in the input space. Combining the two features by dropping inputs that are not common between the features creates a new feature that encompasses both of the original features. The new feature is general; it matches points in the input space that have not been defined by any example. This section presents an overview of the model while later sections provide detail about the system. The network is made up of many simple nodes. Each node contains the input feature that it monitors. During training, the node gathers statistics giving the discrete conditional probability of each possible output value given the input feature.
Target text information: Efficient Construction of Networks for Learned Representations with General to Specific Relationships: Machine learning systems often represent concepts or rules as sets of attribute-value pairs. Many learning algorithms generalize or specialize these concept representations by removing or adding pairs. Thus concepts are created that have general to specific relationships. This paper presents algorithms to connect concepts into a network based on their general to specific relationships. Since any concept can access related concepts quickly, the resulting structure allows increased efficiency in learning and reasoning. The time complexity of one set of learning models improves from O(n log n) to O(log n) (where n is the number of nodes) when using the general to specific structure.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 4 | Theory | cora | 314 | test |
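The specific-to-general operations described above, combining two similar features by dropping the input-value pairs they do not share, matching a feature against an example, and the general-to-specific relation used to link concepts into a network, can be sketched directly. The dict encoding of a feature (input index mapped to required value) is an assumption for illustration.

```python
def combine(f1, f2):
    """Generalize two features by keeping only the input-value pairs they
    share; the result matches every example either original matched.
    A feature is a dict mapping input index -> required value."""
    return {k: v for k, v in f1.items() if k in f2 and f2[k] == v}

def matches(feature, example):
    """A feature matches when every constrained input has the required
    value; unconstrained inputs may take any value."""
    return all(example[k] == v for k, v in feature.items())

def more_general(f1, f2):
    """True if f1 is equal to or more general than f2, i.e. f1's
    constraints are a subset of f2's -- the general-to-specific relation
    the target paper uses to connect concepts into a network."""
    return all(k in f2 and f2[k] == v for k, v in f1.items())
```

Dropping a pair lowers the feature's order, so `combine` always moves toward the general end of the lattice, and `more_general` gives the partial order along which such concepts can be linked for fast retrieval.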
1-hop neighbor's text information: Learning Analytically and Inductively. : Learning is a fundamental component of intelligence, and a key consideration in designing cognitive architectures such as Soar [Laird et al., 1986]. This chapter considers the question of what constitutes an appropriate general-purpose learning mechanism. We are interested in mechanisms that might explain and reproduce the rich variety of learning capabilities of humans, ranging from learning perceptual-motor skills such as how to ride a bicycle, to learning highly cognitive tasks such as how to play chess. Research on learning in fields such as cognitive science, artificial intelligence, neurobiology, and statistics has led to the identification of two distinct classes of learning methods: inductive and analytic. Inductive methods, such as neural network Backpropagation, learn general laws by finding statistical correlations and regularities among a large set of training examples. In contrast, analytical methods, such as Explanation-Based Learning, acquire general laws from many fewer training examples. They rely instead on prior knowledge to analyze individual training examples in detail, then use this analysis to distinguish relevant example features from the irrelevant. The question considered in this chapter is how to best combine inductive and analytical learning in an architecture that seeks to cover the range of learning exhibited by intelligent systems such as humans. We present a specific learning mechanism, Explanation Based Neural Network learning (EBNN), that blends these two types of learning, and present experimental results demonstrating its ability to learn control strategies for a mobile robot using
1-hop neighbor's text information: (192). The Utility of Knowledge in Inductive Learning. :
Target text information: Integrating learning in a neural network. : The use of previously learned knowledge during learning has been shown to reduce the number of examples required for good generalization, and to increase robustness to noise in the examples. In reviewing various means of using learned knowledge from a domain to guide further learning in the same domain, two underlying classes are discerned. Methods which use previous knowledge to initialize a learner (as an initialization bias), and those that use previous knowledge to constrain a learner (as a search bias). We show such methods in fact exploit the same domain knowledge differently, and can complement each other. This is shown by presenting a combined approach which both initializes and constrains a learner. This combined approach is seen to outperform the individual methods under the conditions that accurate previously learned domain knowledge is available, and that there are irrelevant features in the domain representation.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 1 | Neural Networks | cora | 1,466 | test |
1-hop neighbor's text information: Machine learning in blood group determination of Danish Jersey cattle (causal probabilistic network). The obtained networks: In the following paper we approach the problem with different machine learning algorithms and show that they can be compared with causal probabilistic networks in terms of performance and comprehensibility.
1-hop neighbor's text information: 'Machine learning in prognosis of the femoral neck fracture recovery', : We compare the performance of several machine learning algorithms in the problem of prognostics of the femoral neck fracture recovery: the K-nearest neighbours algorithm, the semi-naive Bayesian classifier, backpropagation with weight elimination learning of the multilayered neural networks, the LFC (lookahead feature construction) algorithm, and the Assistant-I and Assistant-R algorithms for top down induction of decision trees using information gain and RELIEFF as search heuristics, respectively. We compare the prognostic accuracy and the explanation ability of different classifiers. Among the different algorithms the semi-naive Bayesian classifier and Assistant-R seem to be the most appropriate. We analyze the combination of decisions of several classifiers for solving prediction problems and show that the combined classifier improves both performance and the explanation ability.
1-hop neighbor's text information: (1995) Induction of decision trees using RELIEFF. : In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies between them. Greedy search prevents current inductive machine learning algorithms to detect significant dependencies between the attributes. Recently, Kira and Rendell developed the RELIEF algorithm for estimating the quality of attributes that is able to detect dependencies between attributes. We show strong relation between RELIEF's estimates and impurity functions, that are usually used for heuristic guidance of inductive learning algorithms. We propose to use RELIEFF, an extended version of RELIEF, instead of myopic impurity functions. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems. Results show the advantage of the presented approach to inductive learning and open a wide range of possibilities for using RELIEFF.
Target text information: Estimating attributes: Analysis and extension of relief. : In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node. | 0 | Rule Learning | cora | 1,761 | test |
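The weight-update loop of the original two-class RELIEF that this block analyses and extends can be sketched as follows: for each sampled instance, each attribute's weight is moved down by its difference to the nearest same-class instance (the hit) and up by its difference to the nearest other-class instance (the miss). The Euclidean nearest-neighbour search and the sampling policy here are simplifying assumptions.

```python
import random

def relief(xs, ys, m=None, rng=random):
    """Sketch of two-class RELIEF attribute estimation.

    Relevant attributes separate classes (large miss difference, small hit
    difference) and so accumulate positive weight; irrelevant ones drift
    to zero or below."""
    n_attr = len(xs[0])
    w = [0.0] * n_attr
    m = m or len(xs)

    def dist(a, b):
        return sum((a[d] - b[d]) ** 2 for d in range(n_attr))

    for _ in range(m):
        i = rng.randrange(len(xs))
        x, y = xs[i], ys[i]
        hits = [xs[j] for j in range(len(xs)) if j != i and ys[j] == y]
        misses = [xs[j] for j in range(len(xs)) if ys[j] != y]
        hit = min(hits, key=lambda h: dist(x, h))
        miss = min(misses, key=lambda s: dist(x, s))
        for d in range(n_attr):
            w[d] += (abs(x[d] - miss[d]) - abs(x[d] - hit[d])) / m
    return w
```

The RELIEFF extension the target paper develops replaces the single nearest hit and miss with averages over k nearest neighbours per class (making the estimate robust to noise) and generalizes the miss term to multi-class problems; neither refinement is shown here.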
1-hop neighbor's text information: A model of similarity-based retrieval. : We present a model of similarity-based retrieval which attempts to capture three psychological phenomena: (1) people are extremely good at judging similarity and analogy when given items to compare. (2) Superficial remindings are much more frequent than structural remindings. (3) People sometimes experience and use purely structural analogical remindings. Our model, called MAC/FAC (for "many are called but few are chosen") consists of two stages. The first stage (MAC) uses a computationally cheap, non-structural matcher to filter candidates from a pool of memory items. That is, we redundantly encode structured representations as content vectors, whose dot product yields an estimate of how well the corresponding structural representations will match. The second stage (FAC) uses SME to compute a true structural match between the probe and output from the first stage. MAC/FAC has been fully implemented, and we show that it is capable of modeling patterns of access found in psychological data.
1-hop neighbor's text information: Towards formalizations in case-based reasoning for synthesis. : This paper presents the formalization of a novel approach to structural similarity assessment and adaptation in case-based reasoning (Cbr) for synthesis. The approach has been informally presented, exemplified, and implemented for the domain of industrial building design (Borner 1993). By relating the approach to existing theories we provide the foundation of its systematic evaluation and appropriate usage. Cases, the primary repository of knowledge, are represented structurally using an algebraic approach. Similarity relations provide structure preserving case modifications modulo the underlying algebra and an equational theory over the algebra (so available). This representation of a modeled universe of discourse enables theory-based inference of adapted solutions. The approach enables us to incorporate formally generalization, abstraction, geometrical transformation, and their combinations into Cbr.
1-hop neighbor's text information: Structural similarity as guidance in case-based design. : This paper presents a novel approach to determine structural similarity as guidance for adaptation in case-based reasoning (Cbr). We advance structural similarity assessment which provides not only a single numeric value but the most specific structure two cases have in common, inclusive of the modification rules needed to obtain this structure from the two cases. Our approach treats retrieval, matching and adaptation as a group of dependent processes. This guarantees the retrieval and matching of not only similar but adaptable cases. Both together enlarge the overall problem solving performance of Cbr and the explainability of case selection and adaptation considerably. Although our approach is more theoretical in nature and not restricted to a specific domain, we will give an example taken from the domain of industrial building design. Additionally, we will sketch two prototypical implementations of this approach.
Target text information: Task-oriented Knowledge Acquisition and Reasoning for Design Support Systems. : We present a framework for task-driven knowledge acquisition in the development of design support systems. Different types of knowledge that enter the knowledge base of a design support system are defined and illustrated both from a formal and from a knowledge acquisition vantage point. Special emphasis is placed on the task-structure, which is used to guide both acquisition and application of knowledge. Starting with knowledge for planning steps in design and augmenting this with problem-solving knowledge that supports design, a formal integrated model of knowledge for design is constructed. Based on the notion of knowledge acquisition as an incremental process we give an account of possibilities for problem solving depending on the knowledge that is at the disposal of the system. Finally, we depict how different kinds of knowledge interact in a design support system. ? This research was supported by the German Ministry for Research and Technology (BMFT) within the joint project FABEL under contract no. 413-4001-01IW104. Project partners in FABEL are German National Research Center of Computer Science (GMD), Sankt Augustin, BSR Consulting GmbH, Munchen, Technical University of Dresden, HTWK Leipzig, University of Freiburg, and University of Karlsruhe.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 2 | category: Case Based | dataset: cora | node_id: 1343 | split: train
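The retrieval step these case-based design abstracts describe can be illustrated with a minimal nearest-case sketch. The features, weights, and cases below are purely hypothetical, not taken from the FABEL project or any real design support system:

```python
# Minimal case-based retrieval sketch: return the stored case whose
# feature values best match the query under a weighted overlap measure.
# All cases, features, and weights are illustrative only.

def similarity(query, case, weights):
    """Weighted fraction of feature values shared by query and case."""
    total = sum(weights.values())
    score = sum(w for f, w in weights.items() if query.get(f) == case.get(f))
    return score / total

def retrieve(query, case_base, weights):
    """Return the most similar stored case and its similarity score."""
    best = max(case_base, key=lambda c: similarity(query, c, weights))
    return best, similarity(query, best, weights)

case_base = [
    {"rooms": 4, "style": "open-plan", "floors": 1},
    {"rooms": 6, "style": "cellular", "floors": 2},
]
weights = {"rooms": 1.0, "style": 2.0, "floors": 1.0}
query = {"rooms": 4, "style": "cellular", "floors": 2}
best, score = retrieve(query, case_base, weights)
```

A real CBR system would of course use a structural rather than a flat attribute-value similarity, as the abstract above argues.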
1-hop neighbor's text information: Generating Declarative Language Bias for Top-Down ILP Algorithms: Many of today's algorithms for Inductive Logic Programming (ILP) put a heavy burden and responsibility on the user, because their declarative bias have to be defined in a rather low-level fashion. To address this issue, we developed a method for generating declarative language bias for top-down ILP systems from high-level declarations. The key feature of our approach is the distinction between a user level and an expert level of language bias declarations. The expert provides abstract meta-declarations, and the user declares the relationship between the meta-level and the given database to obtain a low-level declarative language bias. The suggested languages allow for compact and abstract specifications of the declarative language bias for top-down ILP systems using schemata. We verified several properties of the translation algorithm that generates schemata, and applied it successfully to a few chemical domains. As a consequence, we propose to use a two-level approach to generate declarative language bias.
1-hop neighbor's text information: Lookahead and discretization in ILP. : We present and evaluate two methods for improving the performance of ILP systems. One of them is discretization of numerical attributes, based on Fayyad and Irani's text [9], but adapted and extended in such a way that it can cope with some aspects of discretization that only occur in relational learning problems (when indeterminate literals occur). The second technique is lookahead. It is a well-known problem in ILP that a learner cannot always assess the quality of a refinement without knowing which refinements will be enabled afterwards, i.e. without looking ahead in the refinement lattice. We present a simple method for specifying when lookahead is to be used, and what kind of lookahead is interesting. Both the discretization and lookahead techniques are evaluated experimentally. The results show that both techniques improve the quality of the induced theory, while computational costs are acceptable.
Target text information: Top-down induction of logical decision trees, : Top-down induction of decision trees (TDIDT) is a very popular machine learning technique. Until now, it has mainly been used for propositional learning, but seldom for relational learning or inductive logic programming. The main contribution of this paper is the introduction of logical decision trees, which make it possible to use TDIDT in inductive logic programming. An implementation of this top-down induction of logical decision trees, the
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 0 | category: Rule Learning | dataset: cora | node_id: 1305 | split: test
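The propositional TDIDT procedure that this record's target abstract lifts to the relational setting can be sketched in a few lines: greedily choose the attribute with the highest information gain. The toy weather dataset is illustrative only:

```python
import math

# Minimal propositional TDIDT step: information-gain attribute selection.

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(rows, labels, attr):
    """Reduction in entropy from splitting on attr."""
    remainder = 0.0
    for v in set(r[attr] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr] == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return entropy(labels) - remainder

rows = [
    {"outlook": "sunny", "windy": False},
    {"outlook": "sunny", "windy": True},
    {"outlook": "rain",  "windy": False},
    {"outlook": "rain",  "windy": True},
]
labels = ["no", "no", "yes", "yes"]
best_attr = max(rows[0], key=lambda a: info_gain(rows, labels, a))
```

In a logical decision tree the tests in the internal nodes would be first-order literals rather than attribute-value comparisons, but the greedy gain-driven loop is the same.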
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: "Using DNA to solve NP-Complete Problems", : A strategy for using Genetic Algorithms (GAs) to solve NP-complete problems is presented. The key aspect of the approach taken is to exploit the observation that, although all NP-complete problems are equally difficult in a general computational sense, some have much better GA representations than others, leading to much more successful use of GAs on some NP-complete problems than on others. Since any NP-complete problem can be mapped into any other one in polynomial time, the strategy described here consists of identifying a canonical NP-complete problem on which GAs work well, and solving other NP-complete problems indirectly by mapping them onto the canonical problem. Initial empirical results are presented which support the claim that the Boolean Satisfiability Problem (SAT) is a GA-effective canonical problem, and that other NP-complete problems with poor GA representations can be solved efficiently by mapping them first onto SAT problems.
Target text information: Vector Quantizer Design Using Genetic Algorithms: A Genetic Algorithmic (GA) approach to vector quantizer design that combines the conventional Generalized Lloyd Algorithm (GLA) [6] is presented. We refer to this hybrid as the Genetic Generalized Lloyd Algorithm (GGLA). Briefly, it works as follows: A finite number of codebooks, called chromosomes, are selected. Each codebook undergoes iterative cycles of reproduction. We perform experiments with various alternative design choices using Gaussian-Markov processes, speech, and image as source data and signal-to-noise ratio (SNR) as the performance measure. In most cases, the GGLA showed performance improvements with respect to the GLA. We also compare our results with the Zador-Gersho formula [2, 9].
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
label: 3 | category: Genetic Algorithms | dataset: cora | node_id: 691 | split: test
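The GLA component that the GGLA abstract above hybridises with a genetic search over codebooks is Lloyd's alternation of nearest-codeword assignment and centroid update (k-means). A minimal one-dimensional sketch, with illustrative training data and a two-level codebook:

```python
# Minimal 1-D Generalized Lloyd Algorithm (GLA) sketch.
# Data and initial codebook are illustrative only; a GGLA-style hybrid
# would evolve a population of such codebooks with GA operators.

def nearest(codebook, x):
    """Index of the codeword closest to x (squared-error distortion)."""
    return min(range(len(codebook)), key=lambda i: (x - codebook[i]) ** 2)

def gla(data, codebook, iterations=10):
    """Alternate nearest-codeword assignment and centroid update."""
    codebook = list(codebook)
    for _ in range(iterations):
        cells = [[] for _ in codebook]
        for x in data:
            cells[nearest(codebook, x)].append(x)
        for i, cell in enumerate(cells):
            if cell:  # leave codewords of empty cells unchanged
                codebook[i] = sum(cell) / len(cell)
    return codebook

data = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]
codebook = gla(data, [0.0, 1.0])
```

GLA only guarantees a locally optimal codebook; seeding and recombining codebooks genetically, as the GGLA paper proposes, is one way to escape poor local optima.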