Columns:
content: string (lengths 633 to 9.91k)
label: string (7 classes)
category: string (7 classes)
dataset: string (1 class)
node_id: int64 (range 0 to 2.71k)
split: string (3 classes)
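Each row below carries these six fields, with the content field packing the target abstract, its 1-hop neighbor abstracts, and the classification question into one string. A minimal sketch of loading and inspecting such rows, assuming the dump has been exported to a JSONL file (the file name and the field coercions are assumptions, not part of this dump):

```python
import json
from dataclasses import dataclass

@dataclass
class Row:
    content: str    # target + neighbor abstracts + question prompt
    label: int      # category ID, 0..6 (stored as a string in the dump)
    category: str   # human-readable label, e.g. "Neural Networks"
    dataset: str    # always "cora" here
    node_id: int    # 0..2710 (rendered with thousands separators below)
    split: str      # "train", "val", or "test"

def load_rows(path: str) -> list[Row]:
    rows = []
    with open(path) as f:
        for line in f:
            r = json.loads(line)
            rows.append(Row(r["content"], int(r["label"]), r["category"],
                            r["dataset"], int(str(r["node_id"]).replace(",", "")),
                            r["split"]))
    return rows

# Hypothetical usage:
# rows = load_rows("cora_node_classification.jsonl")
# print(rows[0].category, rows[0].split)
```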
1-hop neighbor's text information: Exponentially many local minima for single neurons. : We show that for a single neuron with the logistic function as the transfer function the number of local minima of the error function based on the square loss can grow exponentially in the dimension. 1-hop neighbor's text information: "Critical points for least-squares problems involving certain analytic functions, with applications to sigmoidal nets," : This paper deals with nonlinear least-squares problems involving the fitting to data of parameterized analytic functions. For generic regression data, a general result establishes the countability, and under stronger assumptions finiteness, of the set of functions giving rise to critical points of the quadratic loss function. In the special case of what are usually called "single-hidden layer neural networks," which are built upon the standard sigmoidal activation tanh(x) (or equivalently the logistic function (1 + e^{-x})^{-1}), a rough upper bound for this cardinality is provided as well. 1-hop neighbor's text information: Backpropagation Separates when Perceptrons Do, : Feedforward nets with sigmoidal activation functions are often designed by minimizing a cost criterion. It has been pointed out before that this technique may be outperformed by the classical perceptron learning rule, at least on some problems. In this paper, we show that no such pathologies can arise if the error criterion is of a threshold LMS type, i.e., is zero for values "beyond" the desired target values. More precisely, we show that if the data are linearly separable, and one considers nets with no hidden neurons, then an error function as above cannot have any local minima that are not global. Simulations of networks with hidden units are consistent with these results, in that often data which can be classified when minimizing a threshold LMS criterion may fail to be classified when using instead a simple LMS cost. In addition, the proof gives the following stronger result, under the stated hypotheses: the continuous gradient adjustment procedure is such that from any initial weight configuration a separating set of weights is obtained in finite time. This is a precise analogue of the Perceptron Learning Theorem. The results are then compared with the more classical pattern recognition problem of threshold LMS with linear activations, where no spurious local minima exist even for nonseparable data: here it is shown that even if using the threshold criterion, such bad local minima may occur, if the data are not separable and sigmoids are used. Target text information: Backpropagation can give rise to spurious local minima even for networks without hidden layers. : We give an example of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum which is not a global minimum. The example consists of a set of 125 training instances, with four weights and a threshold to be learnt. We do not know if substantially smaller binary examples exist. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,921
test
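The papers in this row concern local minima of the squared error for a single sigmoid neuron. As a toy illustration only (random binary data and made-up hyperparameters, nothing taken from the cited constructions), one can probe such a surface by running gradient descent from many starts and counting distinct limiting loss values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(125, 5)).astype(float)
X = np.hstack([X, np.ones((125, 1))])          # fold the threshold into the weights
y = rng.integers(0, 2, size=125).astype(float)

def loss_and_grad(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))           # logistic transfer function
    r = p - y
    return float(r @ r), 2.0 * X.T @ (r * p * (1.0 - p))

limits = set()
for _ in range(20):                            # gradient descent from random starts
    w = rng.normal(size=6)
    for _ in range(4000):
        w -= 0.05 * loss_and_grad(w)[1]
    limits.add(round(loss_and_grad(w)[0], 3))
print(sorted(limits))    # more than one value suggests multiple local minima
```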
1-hop neighbor's text information: Quicknet on MultiSpert: Fast Parallel Neural Network Training: The MultiSpert parallel system is a straightforward extension of the Spert workstation accelerator, which is predominantly used in speech recognition research at ICSI. In order to deliver high performance for Artificial Neural Network training without requiring changes to the user interfaces, the existing Quicknet ANN library was modified to run on MultiSpert. In this report, we present the algorithms used in the parallelization of the Quicknet code and analyse their communication and computation requirements. The resulting performance model yields a better understanding of system speed-ups and potential bottlenecks. Experimental results from actual training runs validate the model and demonstrate the achieved performance levels. 1-hop neighbor's text information: A fast Kohonen net implementation for spert-ii. : We present an implementation of Kohonen Self-Organizing Feature Maps for the Spert-II vector microprocessor system. The implementation supports arbitrary neural map topologies and arbitrary neighborhood functions. For small networks, as used in real-world tasks, a single Spert-II board is measured to run Kohonen net classification at up to 208 million connections per second (MCPS). On a speech coding benchmark task, Spert-II performs on-line Kohonen net training at over 100 million connection updates per second (MCUPS). This represents almost a factor of 10 improvement compared to previously reported implementations. The asymptotic peak speed of the system is 213 MCPS and 213 MCUPS. Target text information: A Vector Microprocessor System. : We report on our development of a high-performance system for neural network and other signal processing applications. We have designed and implemented a vector microprocessor and packaged it as an attached processor for a conventional workstation. We present performance comparisons with commercial workstations on neural network backpropagation training. The SPERT-II system demonstrates significant speedups over extensively hand-optimized code running on the workstations. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,468
test
1-hop neighbor's text information: (1996) A compression algorithm for probability transition matrices. : This paper describes a compression algorithm for probability transition matrices. The compressed matrix is itself a probability transition matrix. In general the compression is not error-free, but the error appears to be small even for high levels of compression. 1-hop neighbor's text information: Island Model Genetic Algorithms and Linearly Separable Problems: Parallel Genetic Algorithms have often been reported to yield better performance than Genetic Algorithms which use a single large panmictic population. In the case of the Island Model Genetic Algorithm, it has been informally argued that having multiple subpopulations helps to preserve genetic diversity, since each island can potentially follow a different search trajectory through the search space. On the other hand, linearly separable functions have often been used to test Island Model Genetic Algorithms; it is possible that Island Models are particularly well suited to separable problems. We look at how Island Models can track multiple search trajectories using the infinite population models of the simple genetic algorithm. We also introduce a simple model for better understanding when Island Model Genetic Algorithms may have an advantage when processing linearly separable problems. 1-hop neighbor's text information: Analyzing GAs using Markov models with semantically ordered and lumped states. : At the previous FOGA workshop, we presented some initial results on using Markov models to analyze the transient behavior of genetic algorithms (GAs) being used as function optimizers (GAFOs). In that paper, the states of the Markov model were ordered via a simple and mathematically convenient lexicographic ordering used initially by Nix and Vose. In this paper, we explore alternative orderings of states based on interesting semantic properties such as average fitness, degree of homogeneity, average attractive force, etc. We also explore lumping techniques for reducing the size of the state space. Analysis of these reordered and lumped Markov models provides new insights into the transient behavior of GAs in general and GAFOs in particular. Target text information: Using Markov chains to analyze GAFOs. : Our theoretical understanding of the properties of genetic algorithms (GAs) being used for function optimization (GAFOs) is not as strong as we would like. Traditional schema analysis provides some first order insights, but doesn't capture the non-linear dynamics of the GA search process very well. Markov chain theory has been used primarily for steady state analysis of GAs. In this paper we explore the use of transient Markov chain analysis to model and understand the behavior of finite population GAFOs observed while in transition to steady states. This approach appears to provide new insights into the circumstances under which GAFOs will (will not) perform well. Some preliminary results are presented and an initial evaluation of the merits of this approach is provided. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,004
train
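The row above revolves around transient Markov-chain analysis of GAs. A minimal sketch of that kind of computation, with a 3-state transition matrix invented purely for illustration:

```python
import numpy as np

P = np.array([[0.80, 0.15, 0.05],    # toy states, e.g. coarse fitness levels;
              [0.10, 0.70, 0.20],    # each row sums to 1
              [0.00, 0.05, 0.95]])

d = np.array([1.0, 0.0, 0.0])        # all probability mass starts in state 0
for t in range(1, 11):
    d = d @ P                        # transient distribution after t generations
    print(t, np.round(d, 3))

w, v = np.linalg.eig(P.T)            # steady state: left eigenvector for eigenvalue 1
pi = np.real(v[:, np.argmax(np.real(w))])
print("steady state:", np.round(pi / pi.sum(), 3))
```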
1-hop neighbor's text information: Problem Formulation, Program Synthesis and Program Transformation Techniques for Simulation, Optimization and Constraint Satisfaction (Research Statement): 1-hop neighbor's text information: Program Synthesis and Transformation Techniques for Simulation, Optimization and Constraint Satisfaction Deductive Synthesis of Numerical: Scientists and engineers face recurring problems of constructing, testing and modifying numerical simulation programs. The process of coding and revising such simulators is extremely time-consuming, because they are almost always written in conventional programming languages. Scientists and engineers can therefore benefit from software that facilitates construction of programs for simulating physical systems. Our research adapts the methodology of deductive program synthesis to the problem of constructing numerical simulation codes. We have focused on simulators that can be represented as second order functional programs composed of numerical integration and root extraction routines. We have developed a system that uses first order Horn logic to synthesize numerical simulators built from these components. Our approach is based on two ideas: First, we axiomatize only the relationship between integration and differentiation. We neither attempt nor require a complete axiomatization of mathematical analysis. Second, our system uses a representation in which functions are reified as objects. Function objects are encoded as lambda expressions. Our knowledge base includes an axiomatization of term equality in the lambda calculus. It also includes axioms defining the semantics of numerical integration and root extraction routines. We use depth bounded SLD resolution to construct proofs and synthesize programs. Our system has successfully constructed numerical simulators for computational design of jet engine nozzles and sailing yachts, among others. Our results demonstrate that deductive synthesis techniques can be used to construct numerical simulation programs for realistic applications (Ellman and Murata 1998). Automatic design optimization is highly sensitive to problem formulation. The choice of objective function, constraints and design parameters can dramatically impact the computational cost of optimization and the quality of the resulting design. The best formulation varies from one application to another. A design engineer will usually not know the best formulation in advance. In order to address this problem, we have developed a system that supports interactive formulation, testing and reformulation of design optimization strategies. Our system includes an executable, data-flow language for representing optimization strategies. The language allows an engineer to define multiple stages of optimization, each using different approximations of the objective and constraints or different abstractions of the design space. We have also developed a set of transformations that reformulate strategies represented in our language. The transformations can approximate objective and constraint functions, abstract or reparameterize search spaces, or divide an optimization process into multiple stages. The system is applicable in principle to any design problem that can be expressed in terms of constrained optimization. 1-hop neighbor's text information: Ellman. Knowledge-based re-engineering of legacy programs for robustness in automated design.
: Systems for automated design optimization of complex real-world objects can, in principle, be constructed by combining domain-independent numerical routines with existing domain-specific analysis and simulation programs. Unfortunately, such legacy analysis codes are frequently unsuitable for use in automated design. They may crash for large classes of input, be numerically unstable or locally non-smooth, or be highly sensitive to control parameters. To be useful, analysis programs must be modified to reduce or eliminate only the undesired behaviors, without altering the desired computation. To do this by direct modification of the programs is labor-intensive, and necessitates costly revalidation. We have implemented a high-level language and run-time environment that allow failure-handling strategies to be incorporated into existing Fortran and C analysis programs while preserving their computational integrity. Our approach relies on globally managing the execution of these programs at the level of discretely callable functions so that the computation is only affected when problems are detected. Problem handling procedures are constructed from a knowledge base of generic problem management strategies. We show that our approach is effective in improving analysis program robustness and design optimization performance in the domain of conceptual design of jet engine nozzles. Target text information: A transformation system for interactive reformulation of design optimization strategies. Fully accepted to Research in Engineering Design, : Automatic design optimization is highly sensitive to problem formulation. The choice of objective function, constraints and design parameters can dramatically impact the computational cost of optimization and the quality of the resulting design. The best formulation varies from one application to another. A design engineer will usually not know the best formulation in advance. In order to address this problem, we have developed a system that supports interactive formulation, testing and reformulation of design optimization strategies. Our system includes an executable, data-flow language for representing optimization strategies. The language allows an engineer to define multiple stages of optimization, each using different approximations of the objective and constraints or different abstractions of the design space. We have also developed a set of transformations that reformulate strategies represented in our language. The transformations can approximate objective and constraint functions, abstract or reparameterize search spaces, or divide an optimization process into multiple stages. The system is applicable in principle to any design problem that can be expressed in terms of constrained optimization; however, we expect the system to be most useful when the design artifact is governed by algebraic and ordinary differential equations. We have tested the system on problems of racing yacht design and jet engine nozzle design. We report experimental results demonstrating that our reformulation techniques can significantly improve the performance of automatic design optimization. Our research demonstrates the viability of a reformulation methodology that combines symbolic program transformation with numerical experimentation. It is an important first step in a research program aimed at automating the entire strategy formulation process. I provide the content of the target node and its neighbors' information.
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,614
test
1-hop neighbor's text information: "Characterizing the input-to-state stability property for set stability," : We show that the well-known Lyapunov sufficient condition for "input-to-state stability" is also necessary, settling positively an open question raised by several authors during the past few years. Additional characterizations of the ISS property, including one in terms of nonlinear stability margins, are also provided. 1-hop neighbor's text information: Some canonical properties of nonlinear systems, in Robust Control of Linear Systems and Nonlinear Control, M.A. Kaashoek, : This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. 1-hop neighbor's text information: A Characterization of Integral Input to State Stability: Just as input to state stability (iss) generalizes the idea of finite gains with respect to supremum norms, the new notion of integral input to state stability (iiss) generalizes the concept of finite gain when using an integral norm on inputs. In this paper, we obtain a necessary and sufficient characterization of the iiss property, expressed in terms of dissipation inequalities. Target text information: "Further facts about input to state stabilization," : Report SYCON-88-15 ABSTRACT Previous results about input to state stabilizability are shown to hold even for systems which are not linear in controls, provided that a more general type of feedback be allowed. Applications to certain stabilization problems and coprime factorizations, as well as comparisons to other results on input to state stability, are also briefly discussed. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,190
test
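For reference, the ISS property that all four abstracts in this row revolve around is the standard estimate below (a textbook statement, not a quotation from these papers): there exist a class-KL function β and a class-K function γ such that every trajectory satisfies

```latex
\[
  |x(t)| \;\le\; \beta\bigl(|x(0)|,\, t\bigr)
         \;+\; \gamma\!\Bigl(\sup_{0 \le s \le t} |u(s)|\Bigr)
  \qquad \text{for all } t \ge 0 .
\]
```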
1-hop neighbor's text information: Distributed Collective Adaptation Applied to a Hard Combinatorial Optimization Problem: We utilize collective memory to integrate weak and strong search heuristics to find cliques in FC, a family of graphs. We construct FC such that pruning of partial solutions will be ineffective. Each weak heuristic maintains a local cache of the collective memory. We examine the impact on the distributed search from the various characteristics of the distribution of the collective memory, the search algorithms, and our family of graphs. We find the distributed search performs better than the individuals, even though the space of partial solutions is combinatorially explosive. Target text information: Collective Adaptation: The Sharing of Building Blocks. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,066
test
1-hop neighbor's text information: Simultaneous evolution of programs and their control structures. : 1-hop neighbor's text information: "The Evolution of Agents that Build Mental Models and Create Simple Plans Using Genetic Programming," : An essential component of an intelligent agent is the ability to notice, encode, store, and utilize information about its environment. Traditional approaches to program induction have focused on evolving functional or reactive programs. This paper presents MAPMAKER, an approach to the automatic generation of agents that discover information about their environment, encode this information for later use, and create simple plans utilizing the stored mental models. In this approach, agents are multipart computer programs that communicate through a shared memory. Both the programs and the representation scheme are evolved using genetic programming. An illustrative problem of 'gold' collection is used to demonstrate the approach in which one part of a program makes a map of the world and stores it in memory, and the other part uses this map to find the gold. The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans. Target text information: Cultural transmission of information in genetic programming. : This paper shows how the performance of a genetic programming system can be improved through the addition of mechanisms for non-genetic transmission of information between individuals (culture). Teller has previously shown how genetic programming systems can be enhanced through the addition of memory mechanisms for individual programs [Teller 1994]; in this paper we show how Teller's memory mechanism can be changed to allow for communication between individuals within and across generations. We show the effects of indexed memory and culture on the performance of a genetic programming system on a symbolic regression problem, on Koza's Lawnmower problem, and on Wumpus world agent problems. We show that culture can reduce the computational effort required to solve all of these problems. We conclude with a discussion of possible improvements. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,132
test
1-hop neighbor's text information: Strongly typed genetic programming in evolving cooperation strategies. : 1-hop neighbor's text information: Type inheritance in strongly typed genetic programming. : This paper appears as chapter 18 of Kenneth E. Kinnear, Jr. and Peter J. Angeline, editors Advances in Genetic Programming 2, MIT Press, 1996. Abstract Genetic Programming (GP) is an automatic method for generating computer programs, which are stored as data structures and manipulated to evolve better programs. An extension restricting the search space is Strongly Typed Genetic Programming (STGP), which has, as a basic premise, the removal of closure by typing both the arguments and return values of functions, and by also typing the terminal set. A restriction of STGP is that there are only two levels of typing. We extend STGP by allowing a type hierarchy, which allows more than two levels of typing. 1-hop neighbor's text information: Duplication of coding segments in genetic programming. : Research into the utility of non-coding segments, or introns, in genetic-based encodings has shown that they expedite the evolution of solutions in domains by protecting building blocks against destructive crossover. We consider a genetic programming system where non-coding segments can be removed, and the resultant chromosomes returned into the population. This parsimonious repair leads to premature convergence, since as we remove the naturally occurring non-coding segments, we strip away their protective backup feature. We then duplicate the coding segments in the repaired chromosomes, and place the modified chromosomes into the population. The duplication method significantly improves the learning rate in the domain we have considered. We also show that this method can be applied to other domains. Target text information: Modeling Distributed Search via Social Insects: Complex group behavior arises in social insects colonies as the integration of the actions of simple and redundant individual insects [Adler and Gordon, 1992, Oster and Wilson, 1978]. Furthermore, the colony can act as an information center to expedite foraging [Brown, 1989]. We apply these lessons from natural systems to model collective action and memory in a computational agent society. Collective action can expedite search in combinatorial optimization problems [Dorigo et al., 1996]. Collective memory can improve learning in multi-agent systems [Garland and Alterman, 1996]. Our collective adaptation integrates the simplicity of collective action with the pattern detection of collective memory to significantly improve both the gathering and processing of knowledge. As a test of the role of the society as an information center, we examine the ability of the society to distribute task allocation without any omnipotent centralized control. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
511
test
1-hop neighbor's text information: (1997b) Applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses. : Technical Report No. 670 December, 1997 1-hop neighbor's text information: Markov chain Monte Carlo in practice: A roundtable discussion. : Markov chain Monte Carlo (MCMC) methods make possible the use of flexible Bayesian models that would otherwise be computationally infeasible. In recent years, a great variety of such applications have been described in the literature. Applied statisticians who are new to these methods may have several questions and concerns, however: How much effort and expertise are needed to design and use a Markov chain sampler? How much confidence can one have in the answers that MCMC produces? How does the use of MCMC affect the rest of the model-building process? At the Joint Statistical Meetings in August, 1996, a panel of experienced MCMC users discussed these and other issues, as well as various "tricks of the trade". This paper is an edited recreation of that discussion. Its purpose is to offer advice and guidance to novice users of MCMC - and to not-so-novice users as well. Topics include building confidence in simulation results, methods for speeding and assessing convergence, estimating standard errors, identification of models for which good MCMC algorithms exist, and the current state of software development. 1-hop neighbor's text information: Convergence of Gibbs sampler for a model related to James-Stein estimators. : Summary. We analyze a hierarchical Bayes model which is related to the usual empirical Bayes formulation of James-Stein estimators. We consider running a Gibbs sampler on this model. Using previous results about convergence rates of Markov chains, we provide rigorous, numerical, reasonable bounds on the running time of the Gibbs sampler, for a suitable range of prior distributions. We apply these results to baseball data from Efron and Morris (1975). For a different range of prior distributions, we prove that the Gibbs sampler will fail to converge, and use this information to prove that in this case the associated posterior distribution is non-normalizable. Acknowledgements. I am very grateful to Jun Liu for suggesting this project, and to Neal Madras for suggesting the use of the Submartingale Convergence Theorem herein. I thank Kate Cowles and Richard Tweedie for helpful conversations, and thank the referees for useful comments. Target text information: Markov chain Monte Carlo convergence diagnostics: A comparative review. : A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. 
We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution. Mary Kathryn Cowles is Assistant Professor of Biostatistics, Harvard School of Public Health, Boston, MA 02115. Bradley P. Carlin is Associate Professor, Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455. Much of the work was done while the first author was a graduate student in the Divison of Biostatistics at the University of Minnesota and then Assistant Professor, Biostatistics Section, Department of Preventive and Societal Medicine, University of Nebraska Medical Center, Omaha, NE 68198. The work of both authors was supported in part by National Institute of Allergy and Infectious Diseases FIRST Award 1-R29-AI33466. The authors thank the developers of the diagnostics studied here for sharing their insights, experiences, and software, and Drs. Thomas Louis and Luke Tierney for helpful discussions and suggestions which greatly improved the manuscript. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,149
test
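As a concrete instance of the diagnostics this row's target surveys, here is a sketch of the Gelman-Rubin potential scale reduction factor on synthetic chains (the survey reviews this among thirteen diagnostics; the data below is invented):

```python
import numpy as np

def gelman_rubin(chains):
    """chains: shape (m, n), m parallel chains of length n for one scalar."""
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return float(np.sqrt(var_hat / W))        # R-hat: near 1 suggests convergence

rng = np.random.default_rng(1)
chains = rng.normal(size=(4, 1000))           # four well-mixed chains
print(round(gelman_rubin(chains), 3))         # close to 1
chains[0] += 3.0                              # one chain stuck elsewhere
print(round(gelman_rubin(chains), 3))         # clearly above 1
```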
1-hop neighbor's text information: Neal (1997). Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification. : Technical Report No. 9702, Department of Statistics, University of Toronto Abstract. Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple nonparametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution and logistic or probit models for classification applications can be implemented by sampling as well for latent values underlying the observations. Software is now available that implements these methods using covariance functions with hierarchical parameterizations. Models defined in this way can discover high-level properties of the data, such as which inputs are relevant to predicting the response. Target text information: MacKay (1997b). Variational Gaussian Process Classifiers. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,055
test
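A minimal numpy sketch of the exact Gaussian-process regression posterior these papers build on (the classification case additionally needs sampling or variational treatment of latent values, as both abstracts note); the kernel and data here are illustrative:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(2)
X = np.linspace(0.0, 5.0, 20)
y = np.sin(X) + 0.1 * rng.normal(size=20)     # noisy training targets
Xs = np.linspace(0.0, 5.0, 7)                 # test inputs

K = rbf(X, X) + 0.1**2 * np.eye(20)           # prior covariance plus noise
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)             # posterior mean at Xs
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
print(np.round(mean, 2))
print(np.round(np.sqrt(np.diag(cov)), 2))     # posterior standard deviations
```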
1-hop neighbor's text information: A hybrid projection proximal point algorithm. : We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the problem. This step is then followed by a projection of the current iterate onto the separating hyperplane. All information required for this projection operation is readily available at the end of the approximate proximal step, and therefore this projection entails no additional computational cost. The new algorithm allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems, which yields a more practical framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. Additionally, presented analysis yields an alternative proof of convergence for the exact proximal point method, which allows a nice geometric interpretation, and is somewhat more intuitive than the classical proof. Target text information: Globally convergent inexact Newton methods, : We propose an algorithm for solving systems of monotone equations which combines Newton, proximal point, and projection methodologies. An important property of the algorithm is that the whole sequence of iterates is always globally convergent to a solution of the system without any additional regularity assumptions. Moreover, under standard assumptions the local superlinear rate of convergence is achieved. As opposed to classical globalization strategies for Newton methods, for computing the stepsize we do not use line-search aimed at decreasing the value of some merit function. Instead, linesearch in the approximate Newton direction is used to construct an appropriate hyperplane which separates the current iterate from the solution set. This step is followed by projecting the current iterate onto this hyperplane, which ensures global convergence of the algorithm. Computational cost of each iteration of our method is of the same order as that of the classical damped Newton method. The crucial advantage is that our method is truly globally convergent. In particular, it cannot get trapped in a stationary point of a merit function. The presented algorithm is motivated by the hybrid projection-proximal point method proposed in [25]. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,147
val
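Both abstracts in this row hinge on one geometric step: project the current iterate onto a hyperplane, built from a trial point y, that separates the iterate from the solution set of a monotone system F(x) = 0. A toy sketch of that step (the affine F, the step size, and the plain forward step standing in for the inner proximal/Newton step are all invented):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])               # symmetric positive definite => F monotone
b = np.array([-2.0, 1.0])
F = lambda x: A @ x + b                  # we seek F(x) = 0, i.e. x* = (1, -2)

x = np.array([5.0, 5.0])
for _ in range(200):
    y = x - 0.3 * F(x)                   # cheap inner step producing the trial point
    Fy = F(y)
    if np.linalg.norm(Fy) < 1e-12:
        x = y
        break
    # Monotonicity puts the solution set on the far side of the hyperplane
    # {z : <F(y), z - y> = 0}; projecting x onto it moves us closer to x*.
    x = x - (Fy @ (x - y)) / (Fy @ Fy) * Fy
print(np.round(x, 4))                    # approx. [ 1. -2.]
```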
1-hop neighbor's text information: A formal analysis of the role of multi-point crossover in genetic algorithms. : On the basis of early theoretical and empirical studies, genetic algorithms have typically used 1- and 2-point crossover operators as the standard mechanisms for implementing recombination. However, there have been a number of recent studies, primarily empirical in nature, which have shown the benefits of crossover operators involving a higher number of crossover points. From a traditional theoretical point of view, the most surprising of these new results relate to uniform crossover, which involves on the average L/2 crossover points for strings of length L. In this paper we extend the existing theoretical results in an attempt to provide a broader explanatory and predictive theory of the role of multi-point crossover in genetic algorithms. In particular, we extend the traditional disruption analysis to include two general forms of multi-point crossover: n-point crossover and uniform crossover. We also analyze two other aspects of multi-point crossover operators, namely, their recombination potential and exploratory power. The results of this analysis provide a much clearer view of the role of multi-point crossover in genetic algorithms. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested. 1-hop neighbor's text information: A summary of research on parallel genetic algorithms. : IlliGAL Report No. 97003 May 1997 1-hop neighbor's text information: A Comparative Study of Genetic Search: We present a comparative study of genetic algorithms and their search properties when treated as a combinatorial optimization technique. This is done in the context of the NP-hard problem MAX-SAT, the comparison being relative to the Metropolis process, and by extension, simulated annealing. Our contribution is two-fold. First, we show that for large and difficult MAX-SAT instances, the contribution of crossover to the search process is marginal. Little is lost if it is dispensed altogether, running mutation and selection as an enlarged Metropolis process. Second, we show that for these problem instances, genetic search consistently performs worse than simulated annealing when subject to similar resource bounds. The correspondence between the two algorithms is made more precise via a decomposition argument, and provides a framework for interpreting our results. Target text information: A genetic algorithm for the set partitioning problem. : Genetic algorithms are stochastic search and optimization techniques which can be used for a wide range of applications. This paper addresses the application of genetic algorithms to the graph partitioning problem. Standard genetic algorithms with large populations suffer from lack of efficiency (quite high execution time). A massively parallel genetic algorithm is proposed, an implementation on a SuperNode of Transputers and results of various benchmarks are given. A comparative analysis of our approach with hill-climbing algorithms and simulated annealing is also presented. The experimental measures show that our algorithm gives better results concerning both the quality of the solution and the time needed to reach it. I provide the content of the target node and its neighbors' information.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,666
test
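A minimal generational GA sketch using the operators this row debates, 1-point crossover plus bit-flip mutation, with binary tournament selection; the all-ones fitness function is a stand-in, not the set/graph partitioning objective:

```python
import numpy as np

rng = np.random.default_rng(3)
POP, LEN, GENS, PC, PM = 40, 32, 60, 0.7, 1.0 / 32

pop = rng.integers(0, 2, size=(POP, LEN))
for _ in range(GENS):
    f = pop.sum(axis=1)                       # toy fitness: number of 1-bits
    i, j = rng.integers(0, POP, size=(2, POP))
    parents = pop[np.where(f[i] >= f[j], i, j)]   # binary tournament selection
    children = parents.copy()
    for k in range(0, POP, 2):                # 1-point crossover on pairs
        if rng.random() < PC:
            cut = rng.integers(1, LEN)
            children[k, cut:] = parents[k + 1, cut:]
            children[k + 1, cut:] = parents[k, cut:]
    pop = children ^ (rng.random(children.shape) < PM)   # bit-flip mutation
print(pop.sum(axis=1).max(), "of", LEN)
```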
1-hop neighbor's text information: Learning in hybrid noise environments using statistical queries. : We consider formal models of learning from noisy data. Specifically, we focus on learning in the probably approximately correct model as defined by Valiant. Two of the most widely studied models of noise in this setting have been classification noise and malicious errors. However, a more realistic model combining the two types of noise has not been formalized. We define a learning environment based on a natural combination of these two noise models. We first show that hypothesis testing is possible in this model. We next describe a simple technique for learning in this model, and then describe a more powerful technique based on statistical query learning. We show that the noise tolerance of this improved technique is roughly optimal with respect to the desired learning accuracy and that it provides a smooth tradeoff between the tolerable amounts of the two types of noise. Finally, we show that statistical query simulation yields learning algorithms for other combinations of noise models, thus demonstrating that statistical query specification truly captures the generic fault tolerance of a learning algorithm. An important goal of research in machine learning is to determine which tasks can be automated, and for those which can, to determine their information and computation requirements. One way to answer these questions is through the development and investigation of formal models of machine learning which capture the task of learning under plausible assumptions. In this work, we consider the formal model of learning from examples called "probably approximately correct" (PAC) learning as defined by Valiant [Val84]. In this setting, a learner attempts to approximate an unknown target concept simply by viewing positive and negative examples of the concept. An adversary chooses, from some specified function class, a hidden {0, 1}-valued target function defined over some specified domain of examples and chooses a probability distribution over this domain. The goal of the learner is to output in both polynomial time and with high probability, an hypothesis which is "close" to the target function with respect to the distribution of examples. The learner gains information about the target function and distribution by interacting with an example oracle. At each request by the learner, this oracle draws an example randomly according to the hidden distribution, labels it according to the hidden target function, and returns the labelled example to the learner. A class of functions F is said to be PAC learnable if such a learner exists for every target function in F and every distribution over the example domain. 1-hop neighbor's text information: On Learning from Noisy and Incomplete Examples. : We investigate learnability in the PAC model when the data used for learning, attributes and labels, is either corrupted or incomplete. In order to prove our main results, we define a new complexity measure on statistical query (SQ) learning algorithms. The view of an SQ algorithm is the maximum over all queries in the algorithm, of the number of input bits on which the query depends. We show that a restricted view SQ algorithm for a class is a general sufficient condition for learnability in both the models of attribute noise and covered (or missing) attributes. We further show that since the algorithms in question are statistical, they can also simultaneously tolerate classification noise.
Classes for which these results hold, and can therefore be learned with simultaneous attribute noise and classification noise, include k-DNF, k-term-DNF by DNF representations, conjunctions with few relevant variables, and over the uniform distribution, decision lists. These noise models are the first PAC models in which all training data, attributes and labels, may be corrupted by a random process. Previous researchers had shown that the class of k-DNF is learnable with attribute noise if the attribute noise rate is known exactly. We show that all of our attribute noise learnability results, either with or without classification noise, also hold when the exact noise rate is not known, provided that the learner instead has a polynomially good approximation of the noise rate. In addition, we show that the results also hold when there is not just one noise rate, but a distinct noise rate for each attribute. Our results for learning with random covering do not require the learner to be told even an approximation of the covering rate and in addition hold in the setting with distinct covering rates for each attribute. Finally, we give lower bounds on the number of examples required for learning in the presence of attribute noise or covering. Target text information: Improved noise-tolerant learning and generalized statistical queries. : The statistical query learning model can be viewed as a tool for creating (or demonstrating the existence of) noise-tolerant learning algorithms in the PAC model. The complexity of a statistical query algorithm, in conjunction with the complexity of simulating SQ algorithms in the PAC model with noise, determine the complexity of the noise-tolerant PAC algorithms produced. Although roughly optimal upper bounds have been shown for the complexity of statistical query learning, the corresponding noise-tolerant PAC algorithms are not optimal due to inefficient simulations. In this paper we provide both improved simulations and a new variant of the statistical query model in order to overcome these inefficiencies. We improve the time complexity of the classification noise simulation of statistical query algorithms. Our new simulation has a roughly optimal dependence on the noise rate. We also derive a simpler proof that statistical queries can be simulated in the presence of classification noise. This proof makes fewer assumptions on the queries themselves and therefore allows one to simulate more general types of queries. We also define a new variant of the statistical query model based on relative error, and we show that this variant is more natural and strictly more powerful than the standard additive error model. We demonstrate efficient PAC simulations for algorithms in this new model and give general upper bounds on both learning with relative error statistical queries and PAC simulation. We show that any statistical query algorithm can be simulated in the PAC model with malicious errors in such a way that the resultant PAC algorithm has a roughly optimal tolerable malicious error rate and sample complexity. Finally, we generalize the types of queries allowed in the statistical query model. We discuss the advantages of allowing these generalized queries and show that our results on improved simulations also hold for these queries. This paper is available from the Center for Research in Computing Technology, Division of Applied Sciences, Harvard University as technical report TR-17-94. I provide the content of the target node and its neighbors' information.
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
2,262
test
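The mechanics behind these abstracts' simulation of statistical queries under classification noise reduce to a simple inversion when the noise rate eta is known. A synthetic sketch (the query chi and the data are made up; the correction formula is the standard one):

```python
import numpy as np

rng = np.random.default_rng(4)
n, eta = 100_000, 0.2
x = rng.normal(size=n)
f = (x > 0).astype(int)                       # hidden target concept f(x)
labels = f ^ (rng.random(n) < eta)            # each label flipped w.p. eta

chi = lambda u, l: ((u > 0.5) & (l == 1)).astype(float)   # an example query

A = chi(x, labels).mean()                     # E[chi] on noisy examples
B = chi(x, 1 - labels).mean()                 # E[chi] with labels flipped
est = ((1 - eta) * A - eta * B) / (1 - 2 * eta)   # unbiased for E[chi(x, f(x))]
print(round(est, 4), "vs noise-free", round(chi(x, f).mean(), 4))
```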
1-hop neighbor's text information: A connectionist architecture with inherent systematicity. : For connectionist networks to be adequate for higher level cognitive activities such as natural language interpretation, they have to generalize in a way that is appropriate given the regularities of the domain. Fodor and Pylyshyn (1988) identified an important pattern of regularities in such domains, which they called systematicity. Several attempts have been made to show that connectionist networks can generalize in accordance with these regularities, but not to the satisfaction of the critics. To address this challenge, this paper starts by establishing the implications of systematicity for connectionist solutions to the variable binding problem. Based on the work of Hadley (1994a), we argue that the network must generalize information it learns in one variable binding to other variable bindings. We then show that temporal synchrony variable binding (Shastri and Ajjanagadde, 1993) inherently generalizes in this way. Thereby we show that temporal synchrony variable binding is a connectionist architecture that accounts for systematicity. This is an important step in showing that connectionism can be an adequate architecture for higher level cognition. 1-hop neighbor's text information: A connectionist architecture for learning to parse. : We present a connectionist architecture and demonstrate that it can learn syntactic parsing from a corpus of parsed text. The architecture can represent syntactic constituents, and can learn generalizations over syntactic constituents, thereby addressing the sparse data problems of previous connectionist architectures. We apply these Simple Synchrony Networks to mapping sequences of word tags to parse trees. After training on parsed samples of the Brown Corpus, the networks achieve precision and recall on constituents that approaches that of statistical methods for this task. Target text information: Simple Synchrony Networks Learning to Parse Natural Language with Temporal Synchrony Variable Binding: The Simple Synchrony Network (SSN) is a new connectionist architecture, incorporating the insights of Temporal Synchrony Variable Binding (TSVB) into Simple Recurrent Networks. The use of TSVB means SSNs can output representations of structures, and can learn generalisations over the constituents of these structures (as required by systematicity). This paper describes the SSN and an associated training algorithm, and demonstrates SSNs' generalisation abilities through results from training. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
607
test
1-hop neighbor's text information: A Theory of Learning Classification Rules. : 1-hop neighbor's text information: A Parallel Learning Algorithm for Bayesian Inference Networks: We present a new parallel algorithm for learning Bayesian inference networks from data. Our learning algorithm exploits both properties of the MDL-based score metric, and a distributed, asynchronous, adaptive search technique called nagging. Nagging is intrinsically fault tolerant, has dynamic load balancing features, and scales well. We demonstrate the viability, effectiveness, and scalability of our approach empirically with several experiments using on the order of 20 machines. More specifically, we show that our distributed algorithm can provide optimal solutions for larger problems as well as good solutions for Bayesian networks of up to 150 variables. Target text information: Theory refinement on Bayesian networks. : Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories which is able to be interrogated by the domain expert and able to be incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by "partial theory", "alternative theory representation", etc. The algorithms are an incremental variant of batch learning algorithms from the literature so can work well in batch and incremental mode. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,228
val
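In the Bayesian-network setting of this row, incremental refinement of a single conditional-probability entry reduces to Dirichlet counting; a toy sketch (the variables, prior, and cases are invented for illustration):

```python
import numpy as np

alpha = np.ones(2)                    # Dirichlet(1, 1) prior for P(Rain | Cloudy=1)

def refine(alpha, cases):
    """cases: observed Rain values (0/1) on records where Cloudy=1."""
    return alpha + np.bincount(cases, minlength=2)   # posterior counts

alpha = refine(alpha, np.array([1, 1, 0, 1]))   # expert-primed theory, first batch
alpha = refine(alpha, np.array([1, 0]))         # later batch refines it incrementally
print("P(Rain=1 | Cloudy=1) =", alpha[1] / alpha.sum())   # posterior mean
```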
1-hop neighbor's text information: Hidden Markov models in computational biology: Applications to protein modeling. : Hidden Markov Models (HMMs) are applied to the problems of statistical modeling, database searching and multiple sequence alignment of protein families and protein domains. These methods are demonstrated on the globin family, the protein kinase catalytic domain, and the EF-hand calcium binding motif. In each case the parameters of an HMM are estimated from a training set of unaligned sequences. After the HMM is built, it is used to obtain a multiple alignment of all the training sequences. It is also used to search the SWISS-PROT 22 database for other sequences that are members of the given protein family, or contain the given domain. The HMM produces multiple alignments of good quality that agree closely with the alignments produced by programs that incorporate three-dimensional structural information. When employed in discrimination tests (by examining how closely the sequences in a database fit the globin, kinase and EF-hand HMMs), the HMM is able to distinguish members of these families from non-members with a high degree of accuracy. Both the HMM and PROFILESEARCH (a technique used to search for relationships between a protein sequence and multiply aligned sequences) perform better in these tests than PROSITE (a dictionary of sites and patterns in proteins). The HMM appears to have a slight advantage 1-hop neighbor's text information: On the Approximability of Numerical Taxonomy. : DIMACS Technical Report 95-46 Target text information: A New Look at Tree Models for Multiple Sequence Alignment: Evolutionary trees are frequently used as the underlying model in the design of algorithms, optimization criteria and software packages for multiple sequence alignment (MSA). In this paper, we reexamine the suitability of trees as a universal model for MSA in light of the broad range of biological questions that MSA's are used to address. A tree model consists of a tree topology and a model of accepted mutations along the branches. After surveying the major applications of MSA, examples from the molecular biology literature are used to illustrate situations in which this tree model fails. This occurs when the relationship between residues in a column cannot be described by a tree; for example, in some structural and functional applications of MSA. It also occurs in situations, such as lateral gene transfer, where an entire gene cannot be modeled by a unique tree. In cases of nonparsimonious data or convergent evolution, it may be difficult to find a consistent mutational model. We hope that this survey will promote dialogue between biologists and computer scientists, leading to more biologically realistic research on MSA. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
561
test
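The database-search scores in this row's HMM paper rest on the forward algorithm; here is a scaled toy version for a 2-state, 2-symbol model (all parameters invented):

```python
import numpy as np

pi = np.array([0.6, 0.4])                  # initial state distribution
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # transition probabilities
E = np.array([[0.7, 0.3],
              [0.1, 0.9]])                 # emission probabilities over 2 symbols

def forward_loglik(obs):
    a = pi * E[:, obs[0]]
    loglik = np.log(a.sum()); a = a / a.sum()
    for o in obs[1:]:
        a = (a @ T) * E[:, o]              # propagate, then weight by emission
        c = a.sum()
        loglik += np.log(c); a = a / c     # rescale to avoid underflow
    return loglik

print(forward_loglik([0, 0, 1, 1, 1]))     # log P(sequence | model)
```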
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. 1-hop neighbor's text information: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. : The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be accurately controlled by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information. 1-hop neighbor's text information: Redish. Beyond the Cognitive Map: Contributions to a Computational Neuroscience Theory of Rodent Navigation. : Target text information: Separating hippocampal maps Spatial Functions of the Hippocampal Formation and the: The place fields of hippocampal cells in old animals sometimes change when an animal is removed from and then returned to an environment [Barnes et al., 1997]. The ensemble correlation between two sequential visits to the same environment shows a strong bimodality for old animals (near 0, indicative of remapping, and greater than 0.7, indicative of a similar representation between experiences), but a strong unimodality for young animals (greater than 0.7, indicative of a similar representation between experiences). One explanation for this is the multi-map hypothesis in which multiple maps are encoded in the hippocampus: old animals may sometimes be returning to the wrong map.
A theory proposed by Samsonovich and McNaughton (1997) suggests that the Barnes et al. experiment implies that the maps are pre-wired in the CA3 region of hippocampus. Here, we offer an alternative explanation in which orthogonalization properties in the dentate gyrus (DG) region of hippocampus interact with errors in self-localization (reset of the path integrator on re-entry into the environment) to produce the bimodality. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
151
test
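A minimal numerical sketch of the shift mechanism described in the head-direction record above: a rate-based ring network whose even (cosine) weight component sustains a stable activity bump and whose odd (sine) component rotates it. The network size, gain g, and time step are illustrative assumptions, not values from the paper:

import numpy as np

N = 120
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
diff = theta[:, None] - theta[None, :]

def simulate(odd_gain=0.0, steps=400, dt=0.1, g=6.0):
    # Even (cosine) kernel sustains a bump; the odd (sine) part shifts it.
    W = (g / N) * (np.cos(diff) + odd_gain * np.sin(diff))
    r = np.cos(theta)                     # seed an activity bump at angle 0
    for _ in range(steps):
        r = r + dt * (-r + np.tanh(W @ r))  # simple relaxation dynamics
    return np.degrees(theta[np.argmax(r)])

print(simulate(odd_gain=0.0))   # symmetric weights: bump stays near 0 degrees
print(simulate(odd_gain=0.1))   # odd component: bump drifts around the ring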
1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. 1-hop neighbor's text information: Recurrent Neural Networks for Missing or Asynchronous Data: In this paper we propose recurrent neural networks with feedback into the input units for handling two types of data analysis problems. On the one hand, this scheme can be used for static data when some of the input variables are missing. On the other hand, it can also be used for sequential data, when some of the input variables are missing or are available at different frequencies. Unlike in the case of probabilistic models (e.g. Gaussian) of the missing variables, the network does not attempt to model the distribution of the missing variables given the observed variables. Instead it is a more "discriminant" approach that fills in the missing variables for the sole purpose of minimizing a learning criterion (e.g., to minimize an output error). 1-hop neighbor's text information: Using Temporal-Difference Reinforcement Learning to Improve Decision-Theoretic Utilities for Diagnosis: Probability theory represents and manipulates uncertainties, but cannot tell us how to behave. For that we need utility theory which assigns values to the usefulness of different states, and decision theory which concerns optimal rational decisions. There are many methods for probability modeling, but few for learning utility and decision models. We use reinforcement learning to find the optimal sequence of questions in a diagnosis situation while maintaining a high accuracy. Automated diagnosis on a heart-disease domain is used to demonstrate that temporal-difference learning can improve diagnosis. On the Cleveland heart-disease database our results are better than those reported from all previous methods. Target text information: Supervised learning from incomplete data via an EM approach. : Real-world learning tasks may involve high-dimensional data sets with arbitrary patterns of missing data. In this paper we present a framework based on maximum likelihood density estimation for learning from such data sets.
We use mixture models for the density estimates and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977) in deriving a learning algorithm: EM is used both for the estimation of mixture components and for coping with missing data. The resulting algorithm is applicable to a wide range of supervised as well as unsupervised learning problems. Results from a classification benchmark, the iris data set, are presented. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,833
test
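A minimal sketch of the EM recipe behind the target abstract above, assuming a one-dimensional two-component Gaussian mixture with fully observed data; the paper's second use of EM (imputing missing input dimensions) is omitted for brevity:

import numpy as np

def em_gmm_1d(x, n_iter=50):
    # Crude initialization: split the data around its quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E step: posterior responsibility of each component for each point.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: responsibility-weighted maximum-likelihood updates.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))  # recovers the two components approximately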
1-hop neighbor's text information: Bias-driven revision of logical domain theories. : The theory revision problem is the problem of how best to go about revising a deficient domain theory using information contained in examples that expose inaccuracies. In this paper we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the "flow" of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair flawed elements of the theory. PTR is proved to converge to a theory which correctly classifies all examples, and shown experimentally to be fast and accurate even for deep theories. 1-hop neighbor's text information: Theory refinement combining analytical and empirical methods. : This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis. 1-hop neighbor's text information: 'Multistrategy learning and theory revision', : This paper presents the system WHY, which learns and updates a diagnostic knowledge base using domain knowledge and a set of examples. The a-priori knowledge consists of a causal model of the domain, stating the relationships among basic phenomena, and a body of phenomenological theory, describing the links between abstract concepts and their possible manifestations in the world. The phenomenological knowledge is used deductively, the causal model is used abductively and the examples are used inductively. The problems of imperfection and intractability of the theory are handled by allowing the system to make assumptions during its reasoning. In this way, robust knowledge can be learned with limited complexity and limited number of examples. The system works in a first order logic environment and has been applied in a real domain. Target text information: Tractability of Theory Patching: In this paper we consider the problem of theory patching, in which we are given a domain theory, some of whose components are indicated to be possibly flawed, and a set of labeled training examples for the domain concept. The theory patching problem is to revise only the indicated components of the theory, such that the resulting theory correctly classifies all the training examples. Theory patching is thus a type of theory revision in which revisions are made to individual components of the theory. Our concern in this paper is to determine for which classes of logical domain theories the theory patching problem is tractable.
We consider both propositional and first-order domain theories, and show that the theory patching problem is equivalent to that of determining what information contained in a theory is stable regardless of what revisions might be performed to the theory. We show that determining stability is tractable if the input theory satisfies two conditions: that revisions to each theory component have monotonic effects on the classification of examples, and that theory components act independently in the classification of examples in the theory. We also show how the concepts introduced can be used to determine the soundness and completeness of particular theory patching algorithms. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
381
test
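For concreteness, a sketch of the classification machinery that theory patching revises: forward chaining over a propositional Horn theory. The tiny example theory is invented, not taken from the paper:

def classify(theory, facts):
    """theory: list of (body_set, head) Horn clauses; returns derived atoms."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in theory:
            # Fire a clause when its whole body is already derived.
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

theory = [({"a", "b"}, "c"), ({"c"}, "positive")]
print("positive" in classify(theory, {"a", "b"}))  # -> True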
1-hop neighbor's text information: "Measures for performance evaluation of genetic algorithms," : This paper proposes four performance measures of a genetic algorithm (GA) which enable us to compare different GAs for an op timization problem and different choices of their parameters' values. The performance measures are defined in terms of observations in simulation, such as the frequency of optimal solutions, fitness values, the frequency of evolution leaps, and the number of generations needed to reach an optimal solution. We present a case study in which parameters of a GA for robot path planning was tuned and its performance was optimized through performance evaluation by using the measures. Especially, one of the performance measures is used to demonstrate the adaptivity of the GA for robot path planning. We also propose a process of systematic tuning based on techniques for the design of experiments. 1-hop neighbor's text information: An overview of genetic algorithms: Part 1, fundamentals. : 1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. Target text information: A genetic algorithm for 3-D path planning of a mobile robots, : This paper proposes genetic algorithms (GAs) for path planning and trajectory planning of an autonomous mobile robot. Our GA-based approach has an advantage of adaptivity such that the GAs work even if an environment is time-varying or unknown. Therefore, it is suitable for both off-line and on-line motion planning. We first presents a GA for path planning in a 2D terrain. Simulation results on the performance and adaptivity of the GA on randomly generated terrains are shown. Then, we discuss extensions of the GA for solving both path planning and trajectory planning simultaneously. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,302
train
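A generic GA loop of the kind the target paper applies to path planning, with tournament selection, one-point crossover, and bit-flip mutation; the "count the ones" fitness is a stand-in for the paper's terrain-based path cost:

import random

def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.02):
    fitness = lambda ind: sum(ind)  # toy objective: maximize the ones
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Binary tournament selection.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        child_pop = []
        while len(child_pop) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]                   # bit-flip mutation
            child_pop.append(child)
        pop = child_pop
    return max(pop, key=fitness)

print(evolve())  # typically converges to the all-ones string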
1-hop neighbor's text information: Regularization theory and neural networks architectures. : We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one hidden layer. This paper will appear in Neural Computation, vol. 7, pages 219-269, 1995. An earlier version of 1-hop neighbor's text information: A new view of the EM algorithm that justifies incremental and other variants. : The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. 1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's).
Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. *We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. Target text information: State Reconstruction for Determining Predictability in Driven Nonlinear Acoustical Systems: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
731
test
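A sketch of the Radial Basis Functions scheme that the regularization-network abstract above derives from smoothness priors, implemented as ridge-regularized least squares over Gaussian basis functions; the centers, width, and regularization strength are arbitrary choices:

import numpy as np

def rbf_fit(x, y, centers, width=0.5, reg=1e-3):
    # Design matrix of Gaussian bumps, one column per center.
    Phi = np.exp(-(x[:, None] - centers) ** 2 / (2 * width ** 2))
    # Ridge-regularized least squares for the output weights.
    A = Phi.T @ Phi + reg * np.eye(len(centers))
    return np.linalg.solve(A, Phi.T @ y)

x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)
centers = np.linspace(0, 2 * np.pi, 10)
w = rbf_fit(x, y, centers)
Phi = np.exp(-(x[:, None] - centers) ** 2 / (2 * 0.5 ** 2))
print(np.max(np.abs(Phi @ w - y)))  # small residual on the training grid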
1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990. 1-hop neighbor's text information: In defense of C4.5: Notes on learning one-level decision trees, : We discuss the implications of Holte's recently-published article, which demonstrated that on the most commonly used data very simple classification rules are almost as accurate as decision trees produced by Quinlan's C4.5. We consider, in particular, what is the significance of Holte's results for the future of top-down induction of decision trees. To an extent, Holte questioned the sense of further research on multilevel decision tree learning. We go in detail through all the parts of Holte's study. We try to put the results into perspective. We argue that the (in absolute terms) small difference in accuracy between 1R and C4.5 that was witnessed by Holte is still significant. We claim that C4.5 possesses additional accuracy-related advantages over 1R. In addition we discuss the representativeness of the databases used by Holte. We compare empirically the optimal accuracies of multilevel and one-level decision trees and observe some significant differences. We point out several deficiencies of limited-complexity classifiers. 1-hop neighbor's text information: "An analysis of bayesian classifiers," : In this paper we present an average-case analysis of the Bayesian classifier, a simple probabilistic induction algorithm that fares remarkably well on many learning tasks. Our analysis assumes a monotone conjunctive target concept, Boolean attributes that are independent of each other and that follow a single distribution, and the absence of attribute noise. We first calculate the probability that the algorithm will induce an arbitrary pair of concept descriptions; we then use this expression to compute the probability of correct classification over the space of instances. The analysis takes into account the number of training instances, the number of relevant and irrelevant attributes, the distribution of these attributes, and the level of class noise. In addition, we explore the behavioral implications of the analysis by presenting predicted learning curves for a number of artificial domains. We also give experimental results on these domains as a check on our reasoning. Finally, we discuss some unresolved questions about the behavior of Bayesian classifiers and outline directions for future research. Note: Without acknowledgements and references, this paper fits into 12 pages with dimensions 5.5 inches × 7.5 inches using 12 point LaTeX type. However, we find the current format more desirable. We have not submitted the paper to any other conference or journal. Target text information: Induction of one-level decision trees. : In recent years, researchers have made considerable progress on the worst-case analysis of inductive learning tasks, but for theoretical results to have impact on practice, they must deal with the average case. In this paper we present an average-case analysis of a simple algorithm that induces one-level decision trees for concepts defined by a single relevant attribute.
Given knowledge about the number of training instances, the number of irrelevant attributes, the amount of class and attribute noise, and the class and attribute distributions, we derive the expected classification accuracy over the entire instance space. We then examine the predictions of this analysis for different settings of these domain parameters, comparing them to experimental results to check our reasoning. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
2,692
test
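A one-level decision tree ("decision stump") of the kind analyzed in the target abstract, restricted to binary attributes for brevity; the four-instance training set is invented:

def best_stump(X, y):
    """Return (attribute index, branch predictions) minimizing training errors."""
    n_attrs = len(X[0])
    best = None
    for a in range(n_attrs):
        # Majority class in each branch of attribute a.
        preds = {}
        for v in (0, 1):
            labels = [yi for xi, yi in zip(X, y) if xi[a] == v]
            preds[v] = max(set(labels or [0]), key=(labels or [0]).count)
        errors = sum(preds[xi[a]] != yi for xi, yi in zip(X, y))
        if best is None or errors < best[0]:
            best = (errors, a, preds)
    return best[1], best[2]

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 1, 1]          # label equals the first attribute
print(best_stump(X, y))   # -> (0, {0: 0, 1: 1})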
1-hop neighbor's text information: Submitted to the Future Generation Computer Systems special issue on Data Mining. Using Neural Networks: Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural-network methods are not commonly used for data-mining tasks, however, because they often produce incomprehensible models and require long training times. In this article, we describe neural-network learning algorithms that are able to produce comprehensible models, and that do not require excessive training times. Specifically, we discuss two classes of approaches for data mining with neural networks. The first type of approach, often called rule extraction, involves extracting symbolic models from trained neural networks. The second approach is to directly learn simple, easy-to-understand networks. We argue that, given the current state of the art, neural-network methods deserve a place in the tool boxes of data-mining specialists. 1-hop neighbor's text information: Constructing Fuzzy Graphs from Examples: Methods to build function approximators from example data have gained considerable interest in the past. Especially methodologies that build models that allow an interpretation have attracted attention. Most existing algorithms, however, are either complicated to use or infeasible for high-dimensional problems. This article presents an efficient and easy to use algorithm to construct fuzzy graphs from example data. The resulting fuzzy graphs are based on locally independent fuzzy rules that operate solely on selected, important attributes. This enables the application of these fuzzy graphs also to problems in high dimensional spaces. Using illustrative examples and a real world data set it is demonstrated how the resulting fuzzy graphs offer quick insights into the structure of the example data, that is, the underlying model. 1-hop neighbor's text information: Using sampling and queries to extract rules from trained neural networks. : Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M -of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches. Target text information: "Extracting rules from artificial neural networks with distributed representations", : Although artificial neural networks have been applied in a variety of real-world scenarios with remarkable success, they have often been criticized for exhibiting a low degree of human comprehensibility. Techniques that compile compact sets of symbolic rules out of artificial neural networks offer a promising perspective to overcome this obvious deficiency of neural network representations. This paper presents an approach to the extraction of if-then rules from artificial neural networks. 
Its key mechanism is validity interval analysis, which is a generic tool for extracting symbolic knowledge by propagating rule-like knowledge through Backpropagation-style neural networks. Empirical studies in a robot arm domain illustrate the appropriateness of the proposed method for extracting rules from networks with real-valued and distributed representations. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,494
val
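A sketch in the spirit of the query-based extraction work cited in this record: testing whether a candidate conjunctive rule is valid by exhaustively querying a black-box classifier. The "network" here is a stand-in boolean function, not a trained net, and the exhaustive loop is only feasible for a few free inputs:

import itertools

def rule_holds(net, fixed, n_inputs):
    """Does fixing the literals in `fixed` force net(...) == 1 everywhere?"""
    free = [i for i in range(n_inputs) if i not in fixed]
    for bits in itertools.product([0, 1], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        if net([x[i] for i in range(n_inputs)]) != 1:
            return False
    return True

net = lambda x: int(x[0] and (x[1] or x[2]))  # stand-in "trained network"
print(rule_holds(net, {0: 1, 1: 1}, 3))       # x0 AND x1 -> 1 ? True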
1-hop neighbor's text information: Using Case-Based Reasoning for Mobile Robot Navigation: This paper presents an approach to mobile robot path planning using case-based reasoning together with map-based path planning. The map-based path planner is used to seed the case-base with innovative solutions. The case-base stores the paths and the information about their traversability. While planning the route, those paths are preferred that, according to former experience, are least risky. 1-hop neighbor's text information: Abstract: We describe an ongoing project to develop an adaptive training system (ATS) that dynamically models a student's learning processes and can provide specialized tutoring adapted to a student's knowledge state and learning style. The student modeling component of the ATS, ML-Modeler, uses machine learning (ML) techniques to emulate the student's novice-to-expert transition. ML-Modeler infers which learning methods the student has used to reach the current knowledge state by comparing the student's solution trace to an expert solution and generating plausible hypotheses about what misconceptions and errors the student has made. A case-based approach is used to generate hypotheses through incorrectly applying analogy, overgeneralization, and overspecialization. The student and expert models use a network-based representation that includes abstract concepts and relationships as well as strategies for problem solving. Fuzzy methods are used to represent the uncertainty in the student model. This paper describes the design of the ATS and ML-Modeler, and gives a detailed example of how the system would model and tutor the student in a typical session. The domain we use for this example is high-school level chemistry. 1-hop neighbor's text information: Abstract: We describe an ongoing project to develop an adaptive training system (ATS) that dynamically models a student's learning processes and can provide specialized tutoring adapted to a student's knowledge state and learning style. The student modeling component of the ATS, ML-Modeler, uses machine learning (ML) techniques to emulate the student's novice-to-expert transition. ML-Modeler infers which learning methods the student has used to reach the current knowledge state by comparing the student's solution trace to an expert solution and generating plausible hypotheses about what misconceptions and errors the student has made. A case-based approach is used to generate hypotheses through incorrectly applying analogy, overgeneralization, and overspecialization. The student and expert models use a network-based representation that includes abstract concepts and relationships as well as strategies for problem solving. Fuzzy methods are used to represent the uncertainty in the student model. This paper describes the design of the ATS and ML-Modeler, and gives a detailed example of how the system would model and tutor the student in a typical session. The domain we use for this example is high-school level chemistry. Target text information: D.B. Leake. Modeling Case-based Planning for Repairing Reasoning Failures. : One application of models of reasoning behavior is to allow a reasoner to introspectively detect and repair failures of its own reasoning process. We address the issues of the transferability of such models versus the specificity of the knowledge in them, the kinds of knowledge needed for self-modeling and how that knowledge is structured, and the evaluation of introspective reasoning systems.
We present the ROBBIE system which implements a model of its planning processes to improve the planner in response to reasoning failures. We show how ROBBIE's hierarchical model balances model generality with access to implementation-specific details, and discuss the qualitative and quantitative measures we have used for evaluating its introspective component. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2,467
train
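The core retrieval step shared by the case-based systems in this record, sketched as nearest-neighbor lookup over a flat case base; the feature encoding and similarity measure are invented for illustration:

def retrieve(case_base, query):
    # Similarity as simple feature overlap; real CBR systems use richer measures.
    sim = lambda a, b: sum(x == y for x, y in zip(a, b))
    return max(case_base, key=lambda case: sim(case["features"], query))

cases = [{"features": (1, 0, 1), "plan": "route-A"},
         {"features": (0, 1, 1), "plan": "route-B"}]
print(retrieve(cases, (1, 0, 0))["plan"])  # -> route-A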
1-hop neighbor's text information: A Neural Architecture for Content as well as Address-Based Storage and Recall: : 1-hop neighbor's text information: Toward Learning Systems That Integrate Different Strategies and Representations. In: Artificial Intelligence and Neural Networks: Steps toward Principled Integration. Honavar, : 1-hop neighbor's text information: A Neural Network Architecture for Syntax Analysis. : Target text information: A Neural Architecture for a High-Speed Database Query System. : Artificial neural networks (ANN), due to their inherent parallelism and potential fault tolerance, offer an attractive paradigm for robust and efficient implementations of large modern database and knowledge base systems. This paper explores a neural network model for efficient implementation of a database query system. The application of the proposed model to a high-speed library query system for retrieval of multiple items is based on partial match of the specified query criteria with the stored records. The performance of the ANN realization of the database query module is analyzed and compared with other techniques commonly used in current computer systems. The results of this analysis suggest that the proposed ANN design offers an attractive approach for the realization of query modules in large database and knowledge base systems, especially for retrieval based on partial matches. * This research was partially supported by the National Science Foundation through the grant IRI-9409580 to Vasant Honavar. A preliminary version of this paper [Chen and Honavar, 1995c] appears in the Proceedings of the 1995 World Congress on Neural Networks. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
23
val
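A brute-force reference implementation of the partial-match retrieval that the proposed neural query module computes in parallel; the record fields are invented:

def partial_match(records, query):
    """Return records agreeing with every specified (non-None) query field."""
    return [r for r in records
            if all(q is None or q == f for q, f in zip(query, r))]

books = [("Smith", 1990, "AI"), ("Jones", 1990, "DB"), ("Smith", 1985, "DB")]
print(partial_match(books, ("Smith", None, None)))  # both Smith entries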
1-hop neighbor's text information: Warmuth "How to use expert advice", : We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes. 1-hop neighbor's text information: "The Power of Self-Directed Learning", : This paper studies self-directed learning, a variant of the on-line learning model in which the learner selects the presentation order for the instances. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, k-term DNF formulas, and orthogonal rectangles in {0, 1, ..., n-1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then prove that the model of self-directed learning is more powerful than all other commonly used on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis dimension. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes. * Supported in part by a GE Foundation Junior Faculty Grant and NSF Grant CCR-9110108. Part of this research was conducted while the author was at the M.I.T. Laboratory for Computer Science and supported by NSF grant DCR-8607494 and a grant from the Siemens Corporation. Net address: [email protected]. 1-hop neighbor's text information: and P.M. Long. Apple tasting and nearly one-sided learning. : In the standard on-line model the learning algorithm tries to minimize the total number of mistakes made in a series of trials. On each trial the learner sees an instance, either accepts or rejects that instance, and then is told the appropriate response. We define a natural variant of this model ("apple tasting") where the learner gets feedback only when the instance is accepted. We use two transformations to relate the apple tasting model to an enhanced standard model where false acceptances are counted separately from false rejections. We present a strategy for trading between false acceptances and false rejections in the standard model. From one perspective this strategy is exactly optimal, including constants. We apply our results to obtain a good general purpose apple tasting algorithm as well as nearly optimal apple tasting algorithms for a variety of standard classes, such as conjunctions and disjunctions of n boolean variables.
We also present and analyze a simpler transformation useful when the instances are drawn at random rather than selected by an adversary. Target text information: Online Learning versus Offline Learning: We present an off-line variant of the mistake-bound model of learning. Just like in the well-studied on-line model, a learner in the off-line model has to learn an unknown concept from a sequence of elements of the instance space on which he makes "guess and test" trials. In both models, the aim of the learner is to make as few mistakes as possible. The difference between the models is that, while in the on-line model only the set of possible elements is known, in the off-line model the sequence of elements (i.e., the identity of the elements as well as the order in which they are to be presented) is known to the learner in advance. We give a combinatorial characterization of the number of mistakes in the off-line model. We apply this characterization to solve several natural questions that arise for the new model. First, we compare the mistake bounds of an off-line learner to those of a learner learning the same concept classes in the on-line scenario. We show that the number of mistakes in on-line learning is at most a log n factor more than in off-line learning, where n is the length of the sequence. In addition, we show that if there is an off-line algorithm that does not make more than a constant number of mistakes for each sequence then there is an online algorithm that also does not make more than a constant number of mistakes. The second issue we address is the effect of the ordering of the elements on the number of mistakes of an off-line learner. It turns out that there are sequences on which an off-line learner can guarantee at most one mistake, yet a permutation of the same sequence forces him to err on many elements. We prove, however, that the gap between the off-line mistake bounds on permutations of the same sequence of n-many elements cannot be larger than a multiplicative factor of log n, and we present examples that obtain such a gap. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
158
test
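The classic Weighted Majority scheme behind the expert-advice mistake bounds discussed in this record, run on a toy three-expert sequence; beta is the usual multiplicative penalty, and the specific trials are invented:

def weighted_majority(expert_preds, outcomes, beta=0.5):
    n = len(expert_preds[0])
    w = [1.0] * n
    mistakes = 0
    for preds, truth in zip(expert_preds, outcomes):
        # Weighted vote over the experts' binary predictions.
        vote_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        vote_0 = sum(wi for wi, p in zip(w, preds) if p == 0)
        guess = 1 if vote_1 >= vote_0 else 0
        mistakes += (guess != truth)
        # Penalize experts that erred on this trial.
        w = [wi * (beta if p != truth else 1.0) for wi, p in zip(w, preds)]
    return mistakes

trials = [((1, 0, 1), 1), ((0, 0, 1), 0), ((1, 1, 1), 1)]
print(weighted_majority([p for p, _ in trials], [t for _, t in trials]))  # 0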
1-hop neighbor's text information: Learning Singly Recursive Relations from Small Datasets. : The inductive logic programming system LOPSTER was created to demonstrate the advantage of basing induction on logical implication rather than θ-subsumption. LOPSTER's sub-unification procedures allow it to induce recursive relations using a minimum number of examples, whereas inductive logic programming algorithms based on θ-subsumption require many more examples to solve induction tasks. However, LOPSTER's input examples must be carefully chosen; they must be along the same inverse resolution path. We hypothesize that an extension of LOPSTER can efficiently induce recursive relations without this requirement. We introduce a generalization of LOPSTER named CRUSTACEAN that has this capability and empirically evaluate its ability to induce recursive relations. 1-hop neighbor's text information: Bottom-up induction of logic programs with more than one recursive clause: In this paper we present a bottom-up algorithm called MRI to induce logic programs from their examples. This method can induce programs with a base clause and more than one recursive clause from a very small number of examples. MRI is based on the analysis of saturations of examples. It first generates a path structure, which is an expression of a stream of values processed by predicates. The concept of path structure was originally introduced by Idestam-Almquist and used in TIM [Idestam-Almquist, 1996]. In this paper, we introduce the concepts of extension and difference of path structure. Recursive clauses can be expressed as a difference between a path structure and its extension. The paper presents the algorithm and shows experimental results obtained by the method. Target text information: "Inverting Implication with Small Training Sets", : We present an algorithm for inducing recursive clauses using inverse implication (rather than inverse resolution) as the underlying generalization method. Our approach applies to a class of logic programs similar to the class of primitive recursive functions. Induction is performed using a small number of positive examples that need not be along the same resolution path. Our algorithm, implemented in a system named CRUSTACEAN, locates matched lists of generating terms that determine the pattern of decomposition exhibited in the (target) recursive clause. Our theoretical analysis defines the class of logic programs for which our approach is complete, described in terms characteristic of other ILP approaches. Our current implementation is considerably faster than previously reported. We present evidence demonstrating that, given randomly selected inputs, increasing the number of positive examples increases accuracy and reduces the number of outputs. We relate our approach to similar recent work on inducing recursive clauses. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,777
test
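A building block common to ILP systems like those in this record: anti-unification, i.e. the least general generalization of two first-order terms, with terms represented as nested tuples and fresh variables as strings. This is generic ILP machinery, not the paper's inverse-implication algorithm:

def lgg(t1, t2, subst=None, counter=[0]):
    subst = {} if subst is None else subst
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: generalize argument-wise.
        return (t1[0],) + tuple(lgg(a, b, subst) for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in subst:        # the same mismatched pair -> same variable
        counter[0] += 1
        subst[(t1, t2)] = f"X{counter[0]}"
    return subst[(t1, t2)]

# lgg(f(a, g(a)), f(b, g(b))) = f(X1, g(X1))
print(lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))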
1-hop neighbor's text information: (1994) Automatic smoothing spline projection pursuit. : The standard PPR algorithm of Friedman and Stuetzle (1981) estimates the smooth functions f_j using the supersmoother nonparametric scatterplot smoother. Friedman's algorithm constructs a model with M_max linear combinations, then prunes back to a simpler model of size M ≤ M_max, where M and M_max are specified by the user. This paper discusses an alternative algorithm in which the smooth functions are estimated using smoothing splines. The direction coefficients α_j, the amount of smoothing in each direction, and 1-hop neighbor's text information: (1997) Simulation based Bayesian nonparametric regression methods. : 1-hop neighbor's text information: On Bayesian analysis of mixtures with an unknown number of components. : New methodology for fully Bayesian mixture analysis is developed, making use of reversible jump Markov chain Monte Carlo methods, that are capable of jumping between the parameter subspaces corresponding to different numbers of components in the mixture. A sample from the full joint distribution of all unknown variables is thereby generated, and this can be used as a basis for a thorough presentation of many aspects of the posterior distribution. The methodology is applied here to the analysis of univariate normal mixtures, using a hierarchical prior model that offers an approach to dealing with weak prior information while avoiding the mathematical pitfalls of using improper priors in the mixture context. Target text information: Bayesian MARS: A Bayesian approach to multivariate adaptive regression spline (MARS) fitting (Friedman, 1991) is proposed. This takes the form of a probability distribution over the space of possible MARS models which is explored using reversible jump Markov chain Monte Carlo methods (Green, 1995). The generated sample of MARS models produced is shown to have good predictive power when averaged and allows easy interpretation of the relative importance of predictors to the overall fit. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
861
test
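A plain Metropolis step of the kind that runs inside the reversible-jump samplers described in this record, shown on a fixed-dimension toy target; the trans-dimensional birth/death moves that make a sampler "reversible jump" are omitted:

import math, random

def metropolis(logpost, x0, step=0.5, n=5000):
    x, samples = x0, []
    for _ in range(n):
        prop = x + random.gauss(0, step)          # symmetric random-walk proposal
        if math.log(random.random()) < logpost(prop) - logpost(x):
            x = prop                               # accept with the MH probability
        samples.append(x)
    return samples

# Toy target: standard normal log-density (up to an additive constant).
samples = metropolis(lambda t: -0.5 * t * t, x0=0.0)
print(sum(samples) / len(samples))  # sample mean near 0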
1-hop neighbor's text information: Non-Deterministic, Constraint-Based Parsing of Human Genes: 1-hop neighbor's text information: "Adding Learning to the Cellular development of Neural Networks: Evolution and the Baldwin Effect," : This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms. Direct Encoding encodes the weights for an a priori fixed neural network architecture. Cellular Encoding encodes both weights and the architecture of the neural network. In previous studies, Direct Encoding and Cellular Encoding have been used to create neural networks for balancing 1 and 2 poles attached to a cart on a fixed track. The poles are balanced by a controller that pushes the cart to the left or the right. In some cases velocity information about the pole and cart is provided as an input; in other cases the network must learn to balance a single pole without velocity information. A careful study of the behavior of these systems suggests that it is possible to balance a single pole with velocity information as an input and without learning to compute the velocity. A new fitness function is introduced that forces the neural network to compute the velocity. By using this new fitness function and tuning the syntactic constraints used with cellular encoding, we achieve a tenfold speedup over our previous study and solve a more difficult problem: balancing two poles when no information about the velocity is provided as input. Target text information: Evolving deterministic finite automata using cellular encoding. : This paper presents a method for the evolution of deterministic finite automata from an initial single-state zygote. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,730
test
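For reference, the phenotype being evolved in the target paper above: a deterministic finite automaton, evaluated here on a hand-written two-state parity machine rather than an evolved one:

def run_dfa(delta, accept, start, string):
    state = start
    for symbol in string:
        state = delta[(state, symbol)]  # deterministic transition
    return state in accept

# Parity DFA: accept strings with an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_dfa(delta, {"even"}, "even", "1101"))  # three 1s -> False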
1-hop neighbor's text information: Generating accurate and diverse members of a neural-network ensemble. : Neural-network ensembles have been shown to be very accurate classification techniques. Previous work has shown that an effective ensemble should consist of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well. Most existing techniques, however, only indirectly address the problem of creating such a set of networks. In this paper we present a technique called Addemup that uses genetic algorithms to directly search for an accurate and diverse set of trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are as accurate as possible while disagreeing with each other as much as possible. Experiments on three DNA problems show that Addemup is able to generate a set of trained networks that is more accurate than several existing approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble. 1-hop neighbor's text information: "A framework of combining symbolic and neural learning," : The primary goal of inductive learning is to generalize well that is, induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima. We report experiments on two real-world datasets where combinations of networks initialized with our method generalize better than combina tions of networks initialized the traditional way. 1-hop neighbor's text information: A Decision-theoretic Generalization of On-line Learning and an Application to Boosting. : We consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth [10] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. 
We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R^n. Target text information: The Sources of Increased Accuracy for Two Proposed Boosting Algorithms: We introduce two boosting algorithms that aim to increase the generalization accuracy of a given classifier by incorporating it as a level-0 component in a stacked generalizer. Both algorithms construct a complementary level-0 classifier that can only generate coarse hypotheses for the training data. We show that the two algorithms boost generalization accuracy on a representative collection of data sets. The two algorithms are distinguished in that one of them modifies the class targets of selected training instances in order to train the complementary classifier. We show that the two algorithms achieve approximately equal generalization accuracy, but that they create complementary classifiers that display different degrees of accuracy and diversity. Our study provides evidence that it may be useful to investigate families of boosting algorithms that incorporate varying levels of accuracy and diversity, so as to achieve an appropriate mix for a given task and domain. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
331
test
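The Hedge-style multiplicative weight update from the Freund and Schapire paper cited in this record (the rule underlying AdaBoost); the target paper's two stacked-generalizer boosters are not reproduced here, and the toy losses are invented:

def hedge(losses_per_round, beta=0.7):
    n = len(losses_per_round[0])
    w = [1.0 / n] * n
    for losses in losses_per_round:
        # Multiplicatively shrink the weight of each option by its loss.
        w = [wi * beta ** li for wi, li in zip(w, losses)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w

# Three options over three rounds; option 2 loses least and ends heaviest.
print(hedge([(1, 0, 1), (1, 0, 0), (0, 0, 1)]))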
1-hop neighbor's text information: Using Recurrent Neural Networks to Learn the Structure of Interconnection Networks, : A modified Recurrent Neural Network (RNN) is used to learn a Self-Routing Interconnection Network (SRIN) from a set of routing examples. The RNN is modified so that it has several distinct initial states. This is equivalent to a single RNN learning multiple different synchronous sequential machines. We define such a sequential machine structure as augmented and show that a SRIN is essentially an Augmented Synchronous Sequential Machine (ASSM). As an example, we learn a small six-switch SRIN. After training we extract the network's internal representation of the ASSM and corresponding SRIN. * This paper is adapted from (Goudreau, 1993, Chapter 6). A shortened version of this paper was published in (Goudreau & Giles, 1993). 1-hop neighbor's text information: Routing in Optical Multistage Interconnection Networks: a Neural Network Solution, : There has been much interest in using optics to implement computer interconnection networks. However, there has been little discussion of any routing methodologies besides those already used in electronics. In this paper, a neural network routing methodology is proposed that can generate control bits for an optical multistage interconnection network (OMIN). Though we present no optical implementation of this methodology, we illustrate its control for an optical interconnection network. These OMINs may be used as communication media for shared memory, distributed computing systems. The routing methodology makes use of an Artificial Neural Network (ANN) that functions as a parallel computer for generating the routes. The neural network routing scheme may be applied to electrical as well as optical interconnection networks. However, since the ANN can be implemented using optics, this routing approach is especially appealing for an optical computing environment. The parallel nature of the ANN computation may make this routing scheme faster than conventional routing approaches, especially for OMINs that are irregular. Furthermore, the neural network routing scheme is fault-tolerant. Results are shown for generating routes in a 16 × 16, 3-stage OMIN. 1-hop neighbor's text information: D.M. Chiarulli, Predictive Control of Opto-Electronic Reconfigurable Interconnection Networks using Neural Networks, : Opto-electronic reconfigurable interconnection networks are limited by significant control latency when used in large multiprocessor systems. This latency is the time required to analyze the current traffic and reconfigure the network to establish the required paths. The goal of latency hiding is to minimize the effect of this control overhead. In this paper, we introduce a technique that performs latency hiding by learning the patterns of communication traffic and using that information to anticipate the need for communication paths. Hence, the network provides the required communication paths before a request for a path is made. In this study, the communication patterns (memory accesses) of a parallel program are used as input to a time delay neural network (TDNN) to perform on-line training and prediction. These predicted communication patterns are used by the interconnection network controller that provides routes for the memory requests.
Based on our experiments, the neural network was able to learn highly repetitive communication patterns, and was thus able to predict the allocation of communication paths, resulting in a reduction of communication latency. Target text information: D.M. Chiarulli, On-Line Prediction of Multiprocessor Memory Access Patterns, : Technical Report UMIACS-TR-96-59 and CS-TR-3676, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. Abstract: Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit. However, these INs are often plagued by undesirable reconfiguration time that is primarily due to control latency, the amount of time delay that the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PU's job is to anticipate and reduce control configuration time, the major component of the control latency. Three different on-line prediction techniques were tested to learn and predict repetitive memory access patterns for three typical parallel processing applications, the 2-D relaxation algorithm, matrix multiply and Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide needed memory access paths before they were requested. Three prediction techniques were used and tested: 1) a Markov predictor, 2) a linear predictor, and 3) a time delay neural network (TDNN) predictor. As expected, different predictors performed best on different applications; however, the TDNN produced the best overall results. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
902
test
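The record above compares a Markov predictor, a linear predictor, and a TDNN for anticipating memory accesses. A minimal sketch of the first of these, assuming the accesses form a discrete stream of identifiers; this is a generic table-based first-order Markov predictor, not the paper's exact implementation:

```python
from collections import defaultdict

class MarkovPredictor:
    """First-order Markov predictor: predicts the next memory access
    as the most frequent successor of the current access seen so far."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, access):
        if self.prev is not None:
            self.counts[self.prev][access] += 1
        self.prev = access

    def predict(self):
        if self.prev is None or not self.counts[self.prev]:
            return None                     # no history yet
        successors = self.counts[self.prev]
        return max(successors, key=successors.get)

# Toy usage on a repetitive access pattern (e.g., a relaxation sweep):
pattern = [0, 1, 2, 3] * 50
p, hits = MarkovPredictor(), 0
for a in pattern:
    if p.predict() == a:
        hits += 1
    p.observe(a)
print(f"hit rate: {hits / len(pattern):.2f}")
```

On the highly repetitive patterns the abstracts describe, such a table predictor approaches a perfect hit rate after a single pass through the pattern.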
1-hop neighbor's text information: Learning one-dimensional geometric patterns under one-sided random misclassification noise. : 1-hop neighbor's text information: "A General Lower Bound on the Number of Examples Needed for Learning," : We prove a lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)/ε) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters. This improves the previous best lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)), and comes close to the known general upper bound of O((1/ε) ln(1/δ) + (VCdim(C)/ε) ln(1/ε)) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. Target text information: PAC learning of one-dimensional patterns. : Developing the ability to recognize a landmark from a visual image of a robot's current location is a fundamental problem in robotics. We consider the problem of PAC-learning the concept class of geometric patterns where the target geometric pattern is a configuration of k points on the real line. Each instance is a configuration of n points on the real line, where it is labeled according to whether or not it visually resembles the target pattern. To capture the notion of visual resemblance we use the Hausdorff metric. Informally, two geometric patterns P and Q resemble each other under the Hausdorff metric, if every point on one pattern is "close" to some point on the other pattern. We relate the concept class of geometric patterns to the landmark recognition problem and then present a polynomial-time algorithm that PAC-learns the class of one-dimensional geometric patterns. We also present some experimental results on how our algorithm performs. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1177
train
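For reference, the sample-complexity bounds quoted in the neighbor abstract above read as follows in standard notation (accuracy ε, confidence δ); the reconstruction assumes the usual statement of the general lower bound:

```latex
\Omega\!\left(\tfrac{1}{\varepsilon}\ln\tfrac{1}{\delta} + \tfrac{\mathrm{VCdim}(C)}{\varepsilon}\right)
\ \text{(lower bound)}
\qquad
O\!\left(\tfrac{1}{\varepsilon}\ln\tfrac{1}{\delta} + \tfrac{\mathrm{VCdim}(C)}{\varepsilon}\ln\tfrac{1}{\varepsilon}\right)
\ \text{(upper bound, consistent algorithms)}
```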
1-hop neighbor's text information: Semilinear predictability minimization produces well-known feature detectors. : Predictability minimization (PM; Schmidhuber, 1992) exhibits various intuitive and theoretical advantages over many other methods for unsupervised redundancy reduction. So far, however, there were only toy applications of PM. In this paper, we apply semilinear PM to static real world images and find: without a teacher and without any significant preprocessing, the system automatically learns to generate distributed representations based on well-known feature detectors, such as orientation sensitive edge detectors and off-center-on-surround-like structures, thus extracting simple features related to those considered useful for image pre-processing and compression. 1-hop neighbor's text information: Learning complex, extended sequences using the principle of history compression. : Previous neural network learning algorithms for sequence processing are computationally expensive and perform poorly when it comes to long time lags. This paper first introduces a simple principle for reducing the descriptions of event sequences without loss of information. A consequence of this principle is that only unexpected inputs can be relevant. This insight leads to the construction of neural architectures that learn to `divide and conquer' by recursively decomposing sequences. I describe two architectures. The first functions as a self-organizing multi-level hierarchy of recurrent networks. The second, involving only two recurrent networks, tries to collapse a multi-level predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets. 1-hop neighbor's text information: An Information Maximization Approach to Blind Separation and Blind Deconvolution. : We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. * Please send comments to [email protected]. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. Target text information: Learning factorial codes by predictability minimization.
: I propose a novel general principle for unsupervised learning of distributed non-redundant internal representations of input patterns. The principle is based on two opposing forces. For each representational unit there is an adaptive predictor which tries to predict the unit from the remaining units. In turn, each unit tries to react to the environment such that it minimizes its predictability. This encourages each unit to filter `abstract concepts' out of the environmental input such that these concepts are statistically independent of those upon which the other units focus. I discuss various simple yet potentially powerful implementations of the principle which aim at finding binary factorial codes (Barlow et al., 1989), i.e. codes where the probability of the occurrence of a particular input is simply the product of the probabilities of the corresponding code symbols. Such codes are potentially relevant for (1) segmentation tasks, (2) speeding up supervised learning, (3) novelty detection. Methods for finding factorial codes automatically implement Occam's razor for finding codes using a minimal number of units. Unlike previous methods the novel principle has a potential for removing not only linear but also non-linear output redundancy. Illustrative experiments show that algorithms based on the principle of predictability minimization are practically feasible. The final part of this paper describes an entirely local algorithm that has a potential for learning unique representations of extended input sequences. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2588
test
1-hop neighbor's text information: A Comparison of Random Search versus Genetic Programming as Engines for Collective Adaptation: We have integrated the distributed search of genetic programming (GP) based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. Since the pure GP approach does not scale well with problem complexity, a natural question is which of the two components is actually contributing to the search process. We investigate a collective memory search which utilizes a random search engine and find that it significantly outperforms the GP based search engine. We examine the solution space and show that as problem complexity and search space grow, a collective adaptive system will perform better than a collective memory search employing random search as an engine. 1-hop neighbor's text information: Augmenting collective adaptation with a simple process agent. : We have integrated the distributed search of genetic programming based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. However, there is still considerable scope for improvement. In collective adaptation, search agents gather knowledge of their environment and deposit it in a central information repository. Process agents are then able to manipulate that focused knowledge, exploiting the exploration of the search agents. We examine the utility of increasing the capabilities of the centralized process agents. 1-hop neighbor's text information: Voting for Schemata: The schema theorem states that implicit parallel search is behind the power of the genetic algorithm. We contend that chromosomes can vote, proportionate to their fitness, for candidate schemata. We maintain a population of binary strings and ternary schemata. The string population not only works on solving its problem domain, but it supplies fitness for the schema population, which indirectly can solve the original problem. Target text information: Collective memory search. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2060
test
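The collective-adaptation records above combine a search engine (GP or random) with a shared archive of good solutions. A minimal sketch of the random-search variant with an elitist collective memory; the archive size, search bounds, and fitness function here are illustrative assumptions, not the papers' setup:

```python
import random

def collective_memory_search(fitness, dim, iters=2000, memory_size=10):
    """Collective memory + random search engine: search agents propose
    random candidates; a shared archive (the collective memory) keeps
    the best solutions found by any agent so far."""
    memory = []
    for _ in range(iters):
        candidate = [random.uniform(-5.0, 5.0) for _ in range(dim)]
        memory.append((fitness(candidate), candidate))
        memory.sort(key=lambda entry: entry[0])
        del memory[memory_size:]          # retain only the best entries
    return memory[0]

random.seed(0)
best_fit, best = collective_memory_search(lambda v: sum(x * x for x in v), dim=3)
print(round(best_fit, 3))
```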
1-hop neighbor's text information: "Theory-Guided Induction of Logic Programs by Inference of Regular Languages", : resent allowed sequences of resolution steps for the initial theory. There are, however, many characterizations of allowed sequences of resolution steps that cannot be expressed by a set of resolvents. One approach to this problem is presented, the system mer-lin, which is based on an earlier technique for learning finite-state automata that represent allowed sequences of resolution steps. merlin extends the previous technique in three ways: i) negative examples are considered in addition to positive examples, ii) a new strategy for performing generalization is used, and iii) a technique for converting the learned automaton to a logic program is included. Results from experiments are presented in which merlin outperforms both a system using the old strategy for performing generalization, and a traditional covering technique. The latter result can be explained by the limited expressiveness of hypotheses produced by covering and also by the fact that covering needs to produce the correct base clauses for a recursive definition before Target text information: Predicate Invention and Learning from Positive Examples Only: Previous bias shift approaches to predicate invention are not applicable to learning from positive examples only, if a complete hypothesis can be found in the given language, as negative examples are required to determine whether new predicates should be invented or not. One approach to this problem is presented, MERLIN 2.0, which is a successor of a system in which predicate invention is guided by sequences of input clauses in SLD-refutations of positive and negative examples w.r.t. an overly general theory. In contrast to its predecessor which searches for the minimal finite-state automaton that can generate all positive and no negative sequences, MERLIN 2.0 uses a technique for inducing Hidden Markov Models from positive sequences only. This enables the system to invent new predicates without being triggered by negative examples. Another advantage of using this induction technique is that it allows for incremental learning. Experimental results are presented comparing MERLIN 2.0 with the positive only learning framework of Progol 4.2 and comparing the original induction technique with a new version that produces deterministic Hidden Markov Models. The results show that predicate invention may indeed be both necessary and possible when learning from positive examples only as well as it can be beneficial to keep the induced model deterministic. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
444
test
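The MERLIN 2.0 record above turns on inducing Hidden Markov Models from positive sequences only. The scaled forward algorithm below computes the sequence likelihood such induction is built on; a minimal sketch with an illustrative two-state model, the induction procedure itself is not shown:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled HMM forward algorithm: log P(obs | pi, A, B).
    pi: initial state probs, A: transition matrix, B[s, o]: emission probs."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()                   # scaling factor avoids underflow
    logp = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        logp += np.log(c)
        alpha /= c
    return logp

# Toy 2-state model over a binary alphabet:
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_loglik([0, 0, 1, 1, 1], pi, A, B))
```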
1-hop neighbor's text information: "A Study in Program Response and the Negative Effects of Introns in Genetic Programming," : The standard method of obtaining a response in tree-based genetic programming is to take the value returned by the root node. In non-tree representations, alternate methods have been explored. One alternative is to treat a specific location in indexed memory as the response value when the program terminates. The purpose of this paper is to explore the applicability of this technique to tree-structured programs and to explore the intron effects that these studies bring to light. This paper's experimental results support the finding that this memory-based program response technique is an improvement for some, but not all, problems. In addition, this paper's experimental results support the finding that, contrary to past research and speculation, the addition or even facilitation of introns can seriously degrade the search performance of genetic programming. 1-hop neighbor's text information: : An investigation into the dynamics of Genetic Programming applied to chaotic time series prediction is reported. An interesting characteristic of adaptive search techniques is their ability to perform well in many problem domains while failing in others. Because of Genetic Programming's flexible tree structure, any particular problem can be represented in myriad forms. These representations have variegated effects on search performance. Therefore, an aspect of fundamental engineering significance is to find a representation which, when acted upon by Genetic Programming operators, optimizes search performance. We discover, in the case of chaotic time series prediction, that the representation commonly used in this domain does not yield optimal solutions. Instead, we find that the population converges onto one "accurately replicating" tree before other trees can be explored. To correct for this premature convergence we make a simple modification to the crossover operator. In this paper we review previous work with GP time series prediction, pointing out an anomalous result related to overlearning, and report the improvement effected by our modified crossover operator. 1-hop neighbor's text information: Evolving compact solutions in genetic programming: A case study. : Genetic programming (GP) is a variant of genetic algorithms where the data structures handled are trees. This makes GP especially useful for evolving functional relationships or computer programs, as both can be represented as trees. Symbolic regression is the determination of a function dependence y = g(x) that approximates a set of data points (x i ; y i ). In this paper the feasibility of symbolic regression with GP is demonstrated on two examples taken from different domains. Furthermore several suggested methods from literature are compared that are intended to improve GP performance and the readability of solutions by taking into account introns or redundancy that occurs in the trees and keeping the size of the trees small. The experiments show that GP is an elegant and useful tool to derive complex functional dependencies on numerical data. Target text information: Complexity Compression and Evolution. : Compression of information is an important concept in the theory of learning. We argue for the hypothesis that there is an inherent compression pressure towards short, elegant and general solutions in a genetic programming system and other variable length evolutionary algorithms. 
This pressure becomes visible if the size or complexity of solutions is measured without non-effective code segments called introns. The built-in parsimony pressure affects complex fitness functions, crossover probability, generality, maximum depth or length of solutions, explicit parsimony, granularity of fitness function, initialization depth or length, and modularization. Some of these effects are positive and some are negative. In this work we provide a basis for an analysis of these effects and suggestions to overcome the negative implications in order to obtain the balance needed for successful evolution. An empirical investigation that supports our hypothesis is also presented. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2654
test
1-hop neighbor's text information: Automated decomposition of model-based learning problems. : A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional, model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate. Target text information: Deriving monotonic function envelopes from observations. : Much work in qualitative physics involves constructing models of physical systems using functional descriptions such as "flow monotonically increases with pressure." Semiquantitative methods improve model precision by adding numerical envelopes to these monotonic functions. Ad hoc methods are normally used to determine these envelopes. This paper describes a systematic method for computing a bounding envelope of a multivariate monotonic function given a stream of data. The derived envelope is computed by determining a simultaneous confidence band for a special neural network which is guaranteed to produce only monotonic functions. By composing these envelopes, more complex systems can be simulated using semiquantitative methods. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,886
test
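The target abstract above relies on a network that is "guaranteed to produce only monotonic functions." One standard way to get that guarantee is to exp-parameterize every weight so all weights are positive; a minimal sketch of that construction (fitting and the simultaneous confidence band are omitted, and the architecture here is an assumption, not the paper's exact network):

```python
import numpy as np

def monotone_net(x, theta_w, b, theta_v, c):
    """One-hidden-layer network that is monotonically increasing in x
    by construction: every weight is exp(.)-parameterized, hence positive,
    and the sigmoid hidden units are increasing."""
    h = 1.0 / (1.0 + np.exp(-(x[:, None] * np.exp(theta_w) + b)))  # (n, H)
    return h @ np.exp(theta_v) + c

rng = np.random.default_rng(0)
H = 4
x = np.linspace(-3.0, 3.0, 7)
y = monotone_net(x, rng.normal(size=H), rng.normal(size=H), rng.normal(size=H), 0.0)
print(np.all(np.diff(y) > 0))   # True: outputs increase with x
```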
1-hop neighbor's text information: Model selection and accounting for model uncertainty in graphical models using Occam's window. : We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used currently is a stepwise strategy guided by tests based on approximate asymptotic P-values leading to the selection of a single model; inference is then conditional on the selected model. The sampling properties of such a strategy are complex, and the failure to take account of model uncertainty leads to underestimation of uncertainty about quantities of interest. In principle, a panacea is provided by the standard Bayesian formalism which averages the posterior distributions of the quantity of interest under each of the models, weighted by their posterior model probabilities. Furthermore, this approach is optimal in the sense of maximising predictive ability. However, this has not been used in practice because computing the posterior model probabilities is hard and the number of models is very large (often greater than 10^11). We argue that the standard Bayesian formalism is unsatisfactory and we propose an alternative Bayesian approach that, we contend, takes full account of the true model uncertainty by averaging over a much smaller set of models. An efficient search algorithm is developed for finding these models. We consider two classes of graphical models that arise in expert systems: the recursive causal models and the decomposable * David Madigan is Assistant Professor of Statistics and Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. Madigan's research was partially supported by the Graduate School Research Fund, University of Washington and by the NSF. Raftery's research was supported by ONR Contract no. N-00014-91-J-1074. The authors are grateful to Gregory Cooper, Leo Goodman, Shelby Haberman, David Hinkley, Graham Upton, Jon Wellner, Nanny Wermuth, Jeremy York, Walter Zucchini and two anonymous referees for helpful comments and discussions, and to Michael R. Butler for providing the data for the scrotal swellings example. 1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. * We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research.
The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. 1-hop neighbor's text information: Tibshirani (1994) Combining Estimates in Regression and Classification, : We consider the problem of how to combine a collection of general regression fit vectors in order to obtain a better predictive model. The individual fits may be from subset linear regression, ridge regression, or something more complex like a neural network. We develop a general framework for this problem and examine a recent cross-validation-based proposal called "stacking" in this context. Combination methods based on the bootstrap and analytic methods are also derived and compared in a number of examples, including best subsets regression and regression trees. Finally, we apply these ideas to classification problems where the estimated combination weights can yield insight into the structure of the problem. Target text information: Stacked density estimation. : Technical Report No. 97-36, Information and Computer Science Department, University of California, Irvine I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1033
test
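Stacking, as discussed in the Tibshirani neighbor abstract, combines level-0 predictors with weights fit on cross-validated predictions. A minimal regression sketch, assuming numpy/scipy; the tiny Ridge learner, fold count, and nonnegativity constraint are illustrative choices, not the papers' exact procedure:

```python
import numpy as np
from scipy.optimize import nnls

class Ridge:
    """Tiny ridge regressor (a stand-in level-0 learner)."""
    def __init__(self, lam=1.0):
        self.lam = lam
    def fit(self, X, y):
        d = X.shape[1]
        self.w = np.linalg.solve(X.T @ X + self.lam * np.eye(d), X.T @ y)
        return self
    def predict(self, X):
        return X @ self.w

def stack_weights(makers, X, y, k=5, seed=0):
    """Fit stacking weights by NNLS on out-of-fold level-0 predictions."""
    rng = np.random.default_rng(seed)
    n = len(y)
    folds = np.array_split(rng.permutation(n), k)
    Z = np.zeros((n, len(makers)))
    for fold in folds:
        train = np.setdiff1d(np.arange(n), fold)
        for j, make in enumerate(makers):
            Z[fold, j] = make().fit(X[train], y[train]).predict(X[fold])
    w, _ = nnls(Z, y)   # nonnegativity guards against degenerate weights
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
print("stacking weights:", stack_weights([lambda: Ridge(0.1), lambda: Ridge(100.0)], X, y))
```

For stacked density estimation, the same idea applies with density models as the level-0 components and log-likelihood in place of squared error.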
1-hop neighbor's text information: Slonim. The power of team exploration: Two robots can learn unlabeled directed graphs. : We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by wandering actively through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs. 1-hop neighbor's text information: On the learnability and usage of acyclic probabilistic finite automata. : We propose and analyze a distribution learning algorithm for a subclass of Acyclic Probabilistic Finite Automata (APFA). This subclass is characterized by a certain distinguishability property of the automata's states. Though hardness results are known for learning distributions generated by general APFAs, we prove that our algorithm can indeed efficiently learn distributions generated by the subclass of APFAs we consider. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made small with high confidence in polynomial time. We present two applications of our algorithm. In the first, we show how to model cursively written letters. The resulting models are part of a complete cursive handwriting recognition system. In the second application we demonstrate how APFAs can be used to build multiple-pronunciation models for spoken words. We evaluate the APFA based pronunciation models on labeled speech data. The good performance (in terms of the log-likelihood obtained on test data) achieved by the APFAs and the incredibly small amount of time needed for learning suggests that the learning algorithm of APFAs might be a powerful alternative to commonly used probabilistic models. 1-hop neighbor's text information: The Power of a Pebble: Exploring and Mapping Directed Graphs: Exploring and mapping an unknown environment is a fundamental problem, which is studied in a variety of contexts. Many works have focused on finding efficient solutions to restricted versions of the problem. In this paper, we consider a model that makes very limited assumptions on the environment and solve the mapping problem in this general setting. We model the environment by an unknown directed graph G, and consider the problem of a robot exploring and mapping G. We do not assume that the vertices of G are labeled, and thus the robot has no hope of succeeding unless it is given some means of distinguishing between vertices. For this reason we provide the robot with a pebble, a device that it can place on a vertex and use to identify the vertex later.
In this paper we show: (1) If the robot knows an upper bound on the number of vertices then it can learn the graph efficiently with only one pebble. (2) If the robot does not know an upper bound on the number of vertices n, then Θ(log log n) pebbles are both necessary and sufficient. In both cases our algorithms are deterministic. Target text information: Efficient learning of typical finite automata from random walks. : This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting to model the "typical" labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton, along with (2) a learning model in which the learner is not provided with the means to experiment with the machine, but rather must learn solely by observing the automaton's output behavior on a random input sequence. The main contribution of this paper is in presenting the first efficient algorithms for learning non-trivial classes of automata in an entirely passive learning model. We adopt an on-line learning model in which the learner is asked to predict the output of the next state, given the next symbol of the random input sequence; the goal of the learner is to make as few prediction mistakes as possible. Assuming the learner has a means of resetting the target machine to a fixed start state, we first present an efficient algorithm that makes an expected polynomial number of mistakes in this model. Next, we show how this first algorithm can be used as a subroutine by a second algorithm that also makes a polynomial number of mistakes even in the absence of a reset. Along the way, we prove a number of combinatorial results for randomly labeled automata. We also show that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi-random. Finally, we discuss an extension of our results to a model in which automata are used to represent distributions over binary strings. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
923
test
1-hop neighbor's text information: Two is Better than One: a Diploid Genotype for Neural Networks. : In nature the genotype of many organisms exhibits diploidy, i.e., it includes two copies of every gene. In this paper we describe the results of simulations comparing the behavior of haploid and diploid populations of ecological neural networks living in both fixed and changing environments. We show that diploid genotypes create more variability in fitness in the population than haploid genotypes and buffer environmental change better; as a consequence, if one wants to obtain good results for both average and peak fitness in a single population one should choose a diploid population with an appropriate mutation rate. Some results of our simulations parallel biological findings. 1-hop neighbor's text information: How to evolve autonomous robots: : Target text information: Investigating the role of diploidy in simulated populations of evolving individuals: In most work applying genetic algorithms to populations of neural networks there is no real distinction between genotype and phenotype. In nature both the information contained in the genotype and the mapping of the genetic information into the phenotype are usually much more complex. The genotypes of many organisms exhibit diploidy, i.e., they include two copies of each gene: if the two copies are not identical in their sequences and therefore have a functional difference in their products (usually proteins), the expressed phenotypic feature is termed the dominant one, the other one recessive (not expressed). In this paper we review the literature on the use of diploidy and dominance operators in genetic algorithms; we present the new results we obtained with our own simulations in changing environments; finally, we discuss some results of our simulations that parallel biological findings. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
642
test
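A minimal sketch of the diploid encoding with a dominance rule that the diploidy records above discuss; binary genes and the fixed dominant allele here are illustrative assumptions, not the papers' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
L, POP = 16, 8

# Each individual carries two chromosomes of binary genes.
genotypes = rng.integers(0, 2, size=(POP, 2, L))

def express(geno, dominant=1):
    """Simple dominance map: where the two copies agree, that allele is
    expressed; where they disagree, the dominant allele wins."""
    a, b = geno
    return np.where(a == b, a, dominant)

def mutate(geno, rate=0.01):
    """Point mutation acts on the genotype; recessive alleles can
    accumulate unexpressed, which is what buffers environmental change."""
    flips = rng.random(geno.shape) < rate
    return np.where(flips, 1 - geno, geno)

phenotypes = np.array([express(g) for g in genotypes])
genotypes = np.array([mutate(g) for g in genotypes])
print(phenotypes[0])
```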
1-hop neighbor's text information: Mixtures of probabilistic principal component analysers. : Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition. 1-hop neighbor's text information: Bishop (1997a). Hierarchical models for data visualization. : Visualization has proven to be a powerful and widely-applicable tool for the analysis and interpretation of multi-variate data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach on a toy data set, and we then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines, and to data in 36 dimensions derived from satellite images. A Matlab software implementation of the algorithm is publicly available from the world-wide web. 1-hop neighbor's text information: Computation and Neural Systems, : I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally very efficient in space and time. It also naturally accommodates missing information. I also introduce a new variant of PCA called sensible principal component analysis (SPCA) which defines a proper density model in the data space. Learning for SPCA is also done with an EM algorithm. I report results on synthetic and real data showing that these EM algorithms correctly and efficiently find the leading eigenvectors of the covariance of datasets in a few iterations using up to hundreds of thousands of datapoints in thousands of dimensions. Target text information: Probabilistic principal component analysis. : Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model.
In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2390
test
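Tipping and Bishop's result gives the maximum-likelihood parameters of probabilistic PCA in closed form from the eigendecomposition of the sample covariance: the noise variance is the average discarded eigenvalue, and the loading matrix is built from the leading eigenvectors. A minimal numpy sketch (the EM algorithm the abstract describes converges to the same solution, up to rotation):

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form ML estimates for probabilistic PCA:
    sigma^2 = mean of the discarded eigenvalues, and
    W = U_q (Lambda_q - sigma^2 I)^{1/2} (up to an arbitrary rotation)."""
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(X)                  # sample covariance
    vals, vecs = np.linalg.eigh(S)          # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder to descending
    sigma2 = vals[q:].mean()                # lost variance -> noise level
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
    return W, sigma2

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))               # latent factors
A = rng.normal(size=(2, 5))
X = Z @ A + 0.1 * rng.normal(size=(500, 5)) # 5-d data near a 2-d subspace
W, sigma2 = ppca_ml(X, q=2)
print("estimated noise variance:", sigma2)  # close to 0.1**2 = 0.01
```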
1-hop neighbor's text information: Bayesian forecasting and dynamic models. : We discuss the development of dynamic factor models for multivariate financial time series, and the incorporation of stochastic volatility components for latent factor processes. Bayesian inference and computation is developed and explored in a study of the dynamic factor structure of daily spot exchange rates for a selection of international currencies. The models are direct generalisations of univariate stochastic volatility models, and represent specific varieties of models recently discussed in the growing multivariate stochastic volatility literature. We also discuss connections and comparisons with the much simpler method of dynamic variance discounting that, for over a decade, has been a standard approach in applied financial econometrics in the Bayesian forecasting world. We review empirical findings in applying these models to the exchange rate series, including aspects of model performance in dynamic portfolio allocation. We conclude with comments on the potential practical utility of structured factor models and future potential developments and model extensions. The authors acknowledge useful discussions with Jose M Quintana, Neil Shephard and Hong Chang, and partial support from NSF grants DMS-9704432 and DMS-9707914. This manuscript represents a preliminary draft report subject to revision. Before quoting or referencing, please check the authors' web site for a possible updated version. The latest version will be found as ISDS Discussion Paper 98-03 on the Duke web site, http://www.stat.duke.edu/papers/ 1-hop neighbor's text information: Hierarchical spatio-temporal mapping of disease rates. : Maps of regional morbidity and mortality rates are useful tools in determining spatial patterns of disease. Combined with socio-demographic census information, they also permit assessment of environmental justice, i.e., whether certain subgroups suffer disproportionately from certain diseases or other adverse effects of harmful environmental exposures. Bayes and empirical Bayes methods have proven useful in smoothing crude maps of disease risk, eliminating the instability of estimates in low-population areas while maintaining geographic resolution. In this paper we extend existing hierarchical spatial models to account for temporal effects and spatio-temporal interactions. Fitting the resulting highly-parametrized models requires careful implementation of Markov chain Monte Carlo (MCMC) methods, as well as novel techniques for model evaluation and selection. We illustrate our approach using a dataset of county-specific lung cancer rates in the state of Ohio during the period 1968-1988. 1-hop neighbor's text information: Bayesian Detection of Clusters and Discontinuities in Disease Maps: Target text information: Modelling risk from a disease in time and space, : This paper combines existing models for longitudinal and spatial data in a hierarchical Bayesian framework, with particular emphasis on the role of time- and space-varying covariate effects. Data analysis is implemented via Markov chain Monte Carlo methods. The methodology is illustrated by a tentative re-analysis of Ohio lung cancer data 1968-88. Two approaches that adjust for unmeasured spatial covariates, particularly tobacco consumption, are described. The first includes random effects in the model to account for unobserved heterogeneity; the second adds a simple urbanization measure as a surrogate for smoking behaviour. 
The Ohio dataset has been of particular interest because of the suggestion that a nuclear facility in the southwest of the state may have caused increased levels of lung cancer there. However, we contend here that the data are inadequate for a proper investigation of this issue. * Email: [email protected] I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2286
test
1-hop neighbor's text information: 'Multistrategy learning and theory revision', : This paper presents the system WHY, which learns and updates a diagnostic knowledge base using domain knowledge and a set of examples. The a-priori knowledge consists of a causal model of the domain, stating the relationships among basic phenomena, and a body of phenomenological theory, describing the links between abstract concepts and their possible manifestations in the world. The phenomenological knowledge is used deductively, the causal model is used abductively and the examples are used inductively. The problems of imperfection and intractability of the theory are handled by allowing the system to make assumptions during its reasoning. In this way, robust knowledge can be learned with limited complexity and a limited number of examples. The system works in a first order logic environment and has been applied in a real domain. 1-hop neighbor's text information: "Combining Connectionist and Symbolic Learning to Refine Certainty Factor Rule Bases," : Target text information: Knowledge Based Systems: Technical Report No. 95/2 I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
311
test
1-hop neighbor's text information: Warmuth "How to use expert advice", : We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes. Target text information: Large Margin Classification Using the Perceptron Algorithm: We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large margins. Compared to Vapnik's algorithm, however, ours is much simpler to implement, and much more efficient in terms of computation time. We also show that our algorithm can be efficiently used in very high dimensional spaces using kernel functions. We performed some experiments using our algorithm, and some variants of it, for classifying images of handwritten digits. The performance of our algorithm is close to, but not as good as, the performance of maximal-margin classifiers on the same problem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
351
val
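The classical perceptron update at the heart of the large-margin record above, as a minimal numpy sketch; the voted/averaged refinement and the kernelization discussed in the abstract are omitted, and the toy data are illustrative:

```python
import numpy as np

def perceptron(X, y, epochs=50):
    """Rosenblatt's perceptron: on each mistake, move the weight
    vector toward (or away from) the misclassified example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):           # labels y in {-1, +1}
            if yi * (xi @ w + b) <= 0:     # mistake (or on the boundary)
                w += yi * xi
                b += yi
                mistakes += 1
        if mistakes == 0:                  # converged: data separated
            break
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] > 0, 1, -1)
X[:, 0] += y * 1.0                          # push classes apart: margin >= 1
w, b = perceptron(X, y)
print("training errors:", int(np.sum(y * (X @ w + b) <= 0)))
```

On linearly separable data the mistake bound (R/γ)² guarantees convergence; the leave-one-out averaging the paper adds is what recovers large-margin-like behavior.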
1-hop neighbor's text information: Prior, stabilizers and basis functions : from regularization to radial, tensor and additive splines. : We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular we had discussed how standard smoothness functionals lead to a subclass of regularization networks, the well-known Radial Basis Functions approximation schemes. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same extension that leads from Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions and some forms of Projection Pursuit Regression. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we show the relation between activation functions of the Gaussian and sigmoidal type by considering the simple case of the kernel G(x) = |x|. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that generalize into Hyper Basis Functions, b) some tensor product splines, and c) additive splines that generalize into schemes of the type of ridge approximation, hinge functions and one-hidden-layer perceptrons. This paper describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory. This research is sponsored by grants from the Office of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041 (which includes funds from DARPA provided under the HPCC program); and by a grant from the National Institutes of Health under contract NIH 2-S07-RR07047. Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ONR contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at the Whitaker College, Massachusetts Institute of Technology.
© Massachusetts Institute of Technology, 1993. 1-hop neighbor's text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. © Massachusetts Institute of Technology, 1994. This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation. Support for the A. I. Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. Target text information: TABLE OF CONTENTS: 1 Learning and approximation: regularization techniques; 1.1 Introduction: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
548
test
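A regularization network in its simplest radial form, as the two abstracts above derive it: Gaussian kernel units centered on the data, with coefficients from a ridge-regularized linear solve. A minimal sketch; the kernel width and regularization strength are illustrative hyperparameters:

```python
import numpy as np

def rbf_fit(X, y, width=1.0, lam=1e-3):
    """RBF regression: f(x) = sum_i c_i G(||x - x_i||), with G Gaussian
    and c from the regularized least-squares solve (G + lam I) c = y."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-sq / (2 * width ** 2))
    return np.linalg.solve(G + lam * np.eye(len(X)), y)

def rbf_predict(Xtrain, c, Xnew, width=1.0):
    sq = ((Xnew[:, None, :] - Xtrain[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * width ** 2)) @ c

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=80)
c = rbf_fit(X, y)
print(np.abs(rbf_predict(X, c, np.array([[1.0]])) - np.sin(1.0)))  # small
```

In the probabilistic reading of these papers, the Gaussian kernel corresponds to one particular smoothness prior; swapping the kernel swaps the prior.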
1-hop neighbor's text information: A model of similarity-based retrieval. : We present a model of similarity-based retrieval which attempts to capture three psychological phenomena: (1) people are extremely good at judging similarity and analogy when given items to compare. (2) Superficial remindings are much more frequent than structural remindings. (3) People sometimes experience and use purely structural analogical remindings. Our model, called MAC/FAC (for "many are called but few are chosen") consists of two stages. The first stage (MAC) uses a computationally cheap, non-structural matcher to filter candidates from a pool of memory items. That is, we redundantly encode structured representations as content vectors, whose dot product yields an estimate of how well the corresponding structural representations will match. The second stage (FAC) uses SME to compute a true structural match between the probe and output from the first stage. MAC/FAC has been fully implemented, and we show that it is capable of modeling patterns of access found in psychological data. 1-hop neighbor's text information: The Structure-Mapping Engine: Algorithms and Examples. : This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a "tool kit" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O(N^2). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp 1-63. For more information, please contact [email protected] Target text information: Plate. Holographic Reduced Representations. : A solution to the problem of representing compositional structure using distributed representations is described. The method uses circular convolution to associate items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, frames, and reduced representations can be compressed into a fixed width vector. These representations are items in their own right, and can be used in constructing compositional structures. The noisy reconstructions given by convolution memories can be cleaned up by using a separate associative memory that has good reconstructive properties. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1594
test
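Plate's binding operation in the target abstract is circular convolution, computable in O(n log n) via the FFT, with circular correlation as the approximate inverse. A minimal sketch; the vector dimension and the cosine clean-up criterion are standard HRR practice rather than details from the abstract:

```python
import numpy as np

n = 1024
rng = np.random.default_rng(0)
# HRR items: random vectors with i.i.d. N(0, 1/n) elements.
role, filler = rng.normal(0, 1 / np.sqrt(n), size=(2, n))

def cconv(a, b):
    """Circular convolution (binding) via the FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def ccorr(a, b):
    """Circular correlation (approximate unbinding)."""
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(b), n=len(a))

trace = cconv(role, filler)      # bind role and filler into one fixed-width vector
decoded = ccorr(role, trace)     # noisy reconstruction of the filler
# A clean-up memory would return the stored item most similar to `decoded`.
sim = decoded @ filler / (np.linalg.norm(decoded) * np.linalg.norm(filler))
print(f"cosine to true filler: {sim:.2f}")   # well above chance
```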
1-hop neighbor's text information: "On Functional Relation between Recognition Error and Class-Selective Reject," : This report reviews various optimum decision rules for pattern recognition, namely, Bayes rule, Chow's rule (optimum error-reject tradeoff), and a recently proposed class-selective rejection rule. The latter provides an optimum tradeoff between the error rate and the average number of (selected) classes. A new general relation between the error rate and the average number of classes is presented. The error rate can directly be computed from the class-selective reject function, which in turn can be estimated from unlabelled patterns, by simply counting the rejects. Theoretical as well as practical implications are discussed and some future research directions are proposed. Target text information: "An Optimum Decision Rule for Pattern Recognition," : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,572
val
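The two reject rules contrasted in this record can be sketched in a few lines. This is an illustrative reading, with thresholds and names chosen by us rather than taken from the papers.

```python
import numpy as np

def chow_decide(posteriors, t=0.3):
    # Chow's rule: reject (return None) when the top posterior falls below 1 - t,
    # trading a lower error rate against a higher reject rate.
    k = int(np.argmax(posteriors))
    return k if posteriors[k] >= 1.0 - t else None

def class_selective(posteriors, t):
    # Class-selective rejection: keep every class whose posterior clears t,
    # trading error rate against the average number of selected classes.
    return [k for k, p in enumerate(posteriors) if p >= t]

p = np.array([0.48, 0.42, 0.10])
print(chow_decide(p))           # None: too ambiguous for a single-class decision
print(class_selective(p, 0.2))  # [0, 1]: a selected subset instead of a reject
```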
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction; that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. 1-hop neighbor's text information: Neuronlike adaptive elements that can solve difficult learning control problems. : Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universität München, Institut für Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department. 1-hop neighbor's text information: NEUROCONTROL BY REINFORCEMENT LEARNING: Reinforcement learning (RL) is a model-free tuning and adaptation method for control of dynamic systems. Contrary to supervised learning, which is usually based on gradient descent techniques, RL does not require any model or sensitivity function of the process. Hence, RL can be applied to systems that are poorly understood, uncertain, nonlinear or for other reasons intractable with conventional methods. In reinforcement learning, the overall controller performance is evaluated by a scalar measure, called reinforcement. Depending on the type of the control task, reinforcement may represent an evaluation of the most recent control action or, more often, of an entire sequence of past control moves. In the latter case, the RL system learns how to predict the outcome of each individual control action. This prediction is then used to adjust the parameters of the controller. The mathematical background of RL is closely related to optimal control and dynamic programming. This paper gives a comprehensive overview of the RL methods and presents an application to the attitude control of a satellite. Some well known applications from the literature are reviewed as well. Target text information: Optimal attitude control of satellites by artificial neural networks: a pilot study. : A pilot study is described on the practical application of artificial neural networks. The limit cycle of the attitude control of a satellite is selected as the test case. One of the sources of the limit cycle is a position dependent error in the observed attitude. 
A Reinforcement Learning method is selected, which is able to adapt a controller such that a cost function is optimised. An estimate of the cost function is learned by a neural 'critic'. In our approach, the estimated cost function is directly represented as a function of the parameters of a linear controller. The critic is implemented as a CMAC network. Results from simulations show that the method is able to find optimal parameters without unstable behaviour. In particular, in the case of large discontinuities in the attitude measurements, the method shows a clear improvement compared to the conventional approach: the RMS attitude error decreases by approximately 30%. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,209
test
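The temporal-difference idea in the first neighbor above, credit assigned from successive predictions rather than final outcomes, fits in a few lines of tabular TD(0). The chain environment and constants here are illustrative assumptions.

```python
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.95
V = np.zeros(n_states)  # predictions learned by the critic

for episode in range(2000):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                       # deterministic forward chain
        terminal = s_next == n_states - 1
        r = 1.0 if terminal else 0.0
        # TD(0): move V[s] toward the temporally successive prediction
        td_error = r + (0.0 if terminal else gamma * V[s_next]) - V[s]
        V[s] += alpha * td_error
        s = s_next

print(np.round(V, 3))  # approaches gamma**(steps-to-goal - 1) for each state
```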
1-hop neighbor's text information: A comparison of crossover and mutation in genetic programming. : This paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways. In this paper we present the results from a very large experimental data set, the equivalent of approximately 12,000 typical runs of a GP system, systematically exploring a range of parameter settings. The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of building blocks in GP. 1-hop neighbor's text information: Complexity Compression and Evolution. : Compression of information is an important concept in the theory of learning. We argue for the hypothesis that there is an inherent compression pressure towards short, elegant and general solutions in a genetic programming system and other variable length evolutionary algorithms. This pressure becomes visible if the size or complexity of solutions is measured without non-effective code segments called introns. The built-in parsimony pressure affects complex fitness functions, crossover probability, generality, maximum depth or length of solutions, explicit parsimony, granularity of fitness function, initialization depth or length, and modularization. Some of these effects are positive and some are negative. In this work we provide a basis for an analysis of these effects and suggestions to overcome the negative implications in order to obtain the balance needed for successful evolution. An empirical investigation that supports our hypothesis is also presented. 1-hop neighbor's text information: Data Structures and Genetic Programming, : It is established good software engineering practice to ensure that programs use memory via abstract data structures such as stacks, queues and lists. These provide an interface between the program and memory, freeing the program of memory management details which are left to the data structures to implement. The main result presented herein is that GP can automatically generate stacks and queues. Typically abstract data structures support multiple operations, such as put and get. We show that GP can simultaneously evolve all the operations of a data structure by implementing each such operation with its own independent program tree. That is, the chromosome consists of a fixed number of independent program trees. Moreover, crossover only mixes genetic material of program trees that implement the same operation. Program trees interact with each other only via shared memory and shared "Automatically Defined Functions" (ADFs). Target text information: "A Study in Program Response and the Negative Effects of Introns in Genetic Programming," : The standard method of obtaining a response in tree-based genetic programming is to take the value returned by the root node. In non-tree representations, alternate methods have been explored. One alternative is to treat a specific location in indexed memory as the response value when the program terminates. The purpose of this paper is to explore the applicability of this technique to tree-structured programs and to explore the intron effects that these studies bring to light. 
This paper's experimental results support the finding that this memory-based program response technique is an improvement for some, but not all, problems. They also indicate that, contrary to past research and speculation, the addition or even facilitation of introns can seriously degrade the search performance of genetic programming. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,193
test
1-hop neighbor's text information: Multi-parent Recombination: In this section we survey recombination operators that can apply more than two parents to create offspring. Some multi-parent recombination operators are defined for a fixed number of parents, e.g. have arity three, in some operators the number of parents is a random number that might be greater than two, and in yet other operators the arity is a parameter that can be set to an arbitrary integer number. We pay special attention to this latter type of operators and summarize results on the effect of operator arity on EA performance. 1-hop neighbor's text information: An Empirical Investigation of Multi-Parent Recombination Operators in Evolution Strategies: 1-hop neighbor's text information: : Eugenic Evolution for Combinatorial Optimization John William Prior Report AI98-268 May 1998 Target text information: Evolutionary programming and evolution strategies: Similarities and differences. : Evolutionary Programming and Evolution Strategies, rather similar representatives of a class of probabilistic optimization algorithms gleaned from the model of organic evolution, are discussed and compared to each other with respect to similarities and differences of their basic components as well as their performance in some experimental runs. Theoretical results on global convergence, step size control for a strictly convex, quadratic function and an extension of the convergence rate theory for Evolution Strategies are presented and discussed with respect to their implications on Evolutionary Programming. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,925
test
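One concrete instance of the Evolution Strategy side of this comparison is the classic (1+1)-ES with Rechenberg's 1/5-success step-size rule; the sphere objective and adaptation constants below are illustrative choices.

```python
import numpy as np

def sphere(x):
    return float(np.dot(x, x))  # strictly convex quadratic test function

rng = np.random.default_rng(0)
x = rng.normal(size=10)
sigma, f_x = 1.0, sphere(x)
successes = 0

for t in range(1, 2001):
    child = x + sigma * rng.normal(size=x.size)   # Gaussian mutation
    f_c = sphere(child)
    if f_c <= f_x:                                # selection: keep improvement
        x, f_x, successes = child, f_c, successes + 1
    if t % 50 == 0:                               # adapt step size so that
        rate = successes / 50                     # about 1/5 of mutations succeed
        sigma *= 1.22 if rate > 0.2 else 0.82
        successes = 0

print(f_x)  # should be very close to 0
```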
1-hop neighbor's text information: Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm. : This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification (EG2, CS-ID3, and IDX) and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET's search in bias space and discovers a way to improve the search. 1-hop neighbor's text information: Data Analysis Using Simulated Breeding and Inductive Learning Methods, : Marketing decision making tasks require the acquisition of efficient decision rules from noisy questionnaire data. Unlike popular learning-from-example methods, in such tasks, we must interpret the characteristics of the data without clear features of the data or pre-determined evaluation criteria. The problem is how domain experts get simple, easy-to-understand, and accurate knowledge from noisy data. This paper describes a novel method to acquire efficient decision rules from questionnaire data using both simulated breeding and inductive learning techniques. The basic ideas of the method are that simulated breeding is used to get the effective features from the questionnaire data and that inductive learning is used to acquire simple decision rules from the data. Simulated breeding is a Genetic Algorithm-based technique to subjectively or interactively evaluate the qualities of offspring generated by genetic operations. The proposed method has been qualitatively and quantitatively validated by a case study on consumer product questionnaire data: the acquired rules are simpler than the results from the direct application of inductive learning; a domain expert admits that they are easy to understand; and they are at the same level of accuracy as the other methods. 1-hop neighbor's text information: "Evolving Visual Routines," : Traditional machine vision assumes that the vision system recovers a complete, labeled description of the world [ Marr, 1982 ] . Recently, several researchers have criticized this model and proposed an alternative model which considers perception as a distributed collection of task-specific, task-driven visual routines [ Aloimonos, 1993, Ullman, 1987 ] . Some of these researchers have argued that in natural living systems these visual routines are the product of natural selection [ Ramachandran, 1985 ] . So far, researchers have hand-coded task-specific visual routines for actual implementations (e.g. [ Chapman, 1993 ] ). In this paper we propose an alternative approach in which visual routines for simple tasks are evolved using an artificial evolution approach. 
We present results from a series of runs on actual camera images, in which simple routines were evolved using Genetic Programming techniques [ Koza, 1992 ] . The results obtained are promising: the evolved routines are able to correctly classify up to 93% of the images, which is better than the best algorithm we were able to write by hand. Target text information: Evolution, Learning, and Instinct: 100 Years of the Baldwin Effect Using Learning to Facilitate the: This paper describes a hybrid methodology that integrates genetic algorithms and decision tree learning in order to evolve useful subsets of discriminatory features for recognizing complex visual concepts. A genetic algorithm (GA) is used to search the space of all possible subsets of a large set of candidate discrimination features. Candidate feature subsets are evaluated by using C4.5, a decision-tree learning algorithm, to produce a decision tree based on the given features using a limited amount of training data. The classification performance of the resulting decision tree on unseen testing data is used as the fitness of the underlying feature subset. Experimental results are presented to show how increasing the amount of learning significantly improves feature set evolution for difficult visual recognition problems involving satellite and facial image data. In addition, we also report on the extent to which other more subtle aspects of the Baldwin effect are exhibited by the system. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
306
test
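The GA-plus-learner loop that both the ICET record and the Baldwin-effect target describe can be sketched as a bitmask search scored by held-out accuracy. To keep the sketch self-contained, a nearest-centroid classifier stands in for the decision-tree learner; all data and constants below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d_useful = 300, 12, 3
X = rng.normal(size=(n, d))
y = (X[:, :d_useful].sum(axis=1) > 0).astype(int)   # only 3 features matter
X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

def fitness(mask):
    # Fitness of a feature subset = held-out accuracy of a learner trained
    # on just those features, minus a mild parsimony term.
    if not mask.any():
        return 0.0
    Z_tr, Z_te = X_tr[:, mask], X_te[:, mask]
    mu0, mu1 = Z_tr[y_tr == 0].mean(0), Z_tr[y_tr == 1].mean(0)
    pred = (np.linalg.norm(Z_te - mu1, axis=1)
            < np.linalg.norm(Z_te - mu0, axis=1)).astype(int)
    return (pred == y_te).mean() - 0.01 * mask.sum()

pop = rng.random((30, d)) < 0.5                      # bitmask chromosomes
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, d)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child ^= rng.random(d) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print(np.flatnonzero(best))  # ideally recovers features 0..2
```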
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Genetic algorithms, selection schemes, and the varying effects of noise. : IlliGAL Report No. 95006 July 1995 1-hop neighbor's text information: A comparison of selection schemes used in genetic algorithms. : TIK-Report Nr. 11, December 1995 Version 2 (2. Edition) Target text information: Determining Successful Negotiation Strategies: An Evolutionary Approach: To be successful in open, multi-agent environments, autonomous agents must be capable of adapting their negotiation strategies and tactics to their prevailing circumstances. To this end, we present an empirical study showing the relative success of different strategies against different types of opponent in different environments. In particular, we adopt an evolutionary approach in which strategies and tactics correspond to the genetic material in a genetic algorithm. We conduct a series of experiments to determine the most successful strategies and to see how and when these strategies evolve depending on the context and negotiation stance of the agent's opponent. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
287
test
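The selection-scheme comparisons cited in this record come down to how parents are sampled from fitness values; here is a small sketch contrasting fitness-proportionate (roulette) and tournament selection, with illustrative fitness values.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette(fitness, n):
    # Fitness-proportionate selection; assumes strictly positive fitness.
    p = fitness / fitness.sum()
    return rng.choice(len(fitness), size=n, p=p)

def tournament(fitness, n, k=2):
    # Pick k random contenders per slot; the fittest wins. Pressure grows
    # with k and, unlike roulette, is insensitive to fitness scaling.
    contenders = rng.integers(len(fitness), size=(n, k))
    return contenders[np.arange(n), fitness[contenders].argmax(axis=1)]

f = np.array([1.0, 2.0, 4.0, 8.0])
print(np.bincount(roulette(f, 10000), minlength=4) / 10000)
print(np.bincount(tournament(f, 10000), minlength=4) / 10000)
```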
1-hop neighbor's text information: Cardoso, "On the performance of source separation algorithms," in Proc.: Source separation consists in recovering a set of n independent signals from m ≥ n observed instantaneous mixtures of these signals, possibly corrupted by additive noise. Many source separation algorithms use second order information in a whitening operation which reduces the non trivial part of the separation to determining a unitary matrix. Most of them further show a kind of invariance property which can be exploited to predict some general results about their performance. Our first contribution is to exhibit a lower bound on the performance in terms of accuracy of the separation. This bound is independent of the algorithm and, in the i.i.d. case, of the distribution of the source signals. Second, we show that the performance of invariant algorithms depends on the mixing matrix and on the noise level in a specific way. A consequence is that at low noise levels, the performance does not depend on the mixture but only on the distribution of the sources, via a function which is characteristic of the given source separation algorithm. 1-hop neighbor's text information: "Adaptive source separation without prewhitening," : Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Closed-form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach. Target text information: Maximum likelihood source separation for discrete sources: This communication deals with the source separation problem which consists in the separation of a noisy mixture of independent sources without a priori knowledge of the mixture coefficients. In this paper, we consider the maximum likelihood (ML) approach for discrete source signals with known probability distributions. An important feature of the ML approach in Gaussian noise is that the covariance matrix of the additive noise can be treated as a parameter. Hence, it is not necessary to know or to model the spatial structure of the noise. Another striking feature offered in the case of discrete sources is that, under mild assumptions, it is possible to separate more sources than sensors. In this paper, we consider maximization of the likelihood via the Expectation-Maximization (EM) algorithm. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
694
test
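For the EASI neighbor above, the serial, equivariant update can be sketched as follows. The exact update form, nonlinearity, and step size here are our reading of that family of algorithms, not a verbatim reproduction of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, lam = 20000, 0.002
S = np.sign(rng.normal(size=(2, T)))        # two independent binary sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # unknown mixing matrix
X = A @ S

B = np.eye(2)                                # separating matrix estimate
for t in range(T):
    y = B @ X[:, t]
    g = y ** 3                               # odd nonlinearity; cubic is a common choice
    # Serial, equivariant update: a whitening term plus an independence term,
    # applied as a relative (multiplicative) correction to B.
    G = np.outer(y, y) - np.eye(2) + np.outer(g, y) - np.outer(y, g)
    B -= lam * G @ B

print(np.round(B @ A, 2))  # should approach a scaled permutation matrix
```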
1-hop neighbor's text information: PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. : We describe a polynomial-time algorithm for learning axis-aligned rectangles in Q^d with respect to product distributions from multiple-instance examples in the PAC model. Here, each example consists of n elements of Q^d together with a label indicating whether any of the n points is in the rectangle to be learned. We assume that there is an unknown product distribution D over Q^d such that all instances are independently drawn according to D. The accuracy of a hypothesis is measured by the probability that it would incorrectly predict whether one of n more points drawn from D was in the rectangle to be learned. Our algorithm achieves accuracy ε with probability 1 - δ. 1-hop neighbor's text information: Solving the multiple-instance problem with axis-parallel rectangles. : The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple-instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk-odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms. 1-hop neighbor's text information: "A General Lower Bound on the Number of Examples Needed for Learning," : We prove a lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)/ε) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters. This improves the previous best lower bound of Ω((1/ε) ln(1/δ) + VCdim(C)), and comes close to the known general upper bound of O((1/ε) ln(1/δ) + (VCdim(C)/ε) ln(1/ε)) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. Target text information: Approximating Hyper-Rectangles: Learning and Pseudo-random Sets: The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles have been actively studied recently because (i) they are a subproblem common to the derandomization of depth-2 (DNF) circuits and derandomizing Randomized Logspace, and (ii) they approximate the distribution of n independent multivalued random variables. We present improved upper bounds for a class of such problems of approximating high-dimensional rectangles that arise in PAC learning and pseudorandomness. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. 
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
308
test
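The reconstructed bounds in the lower-bound neighbor above can be turned into rough numbers. Constants are set to 1 purely for illustration (the real bounds hide constant factors); the VC dimension of axis-aligned rectangles in dimension d is 2d, which is a standard fact.

```python
import math

def lower_bound(eps, delta, vcdim):
    # Shape of the Omega lower bound, with the hidden constant set to 1.
    return (1 / eps) * math.log(1 / delta) + vcdim / eps

def upper_bound(eps, delta, vcdim):
    # Shape of the O upper bound for consistent algorithms, constant = 1.
    return (1 / eps) * math.log(1 / delta) + (vcdim / eps) * math.log(1 / eps)

d = 2 * 5  # axis-aligned rectangles in R^5: VC dimension 2d = 10
for eps in (0.1, 0.01):
    print(eps, round(lower_bound(eps, 0.05, d)), round(upper_bound(eps, 0.05, d)))
```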
1-hop neighbor's text information: "Linear systems with sign-observations," : This paper deals with systems that are obtained from linear time-invariant continuous-or discrete-time devices followed by a function that just provides the sign of each output. Such systems appear naturally in the study of quantized observations as well as in signal processing and neural network theory. Results are given on observability, minimal realizations, and other system-theoretic concepts. Certain major differences exist with the linear case, and other results generalize in a surprisingly straightforward manner. Target text information: Lemma 2.3 The system is reachable and observable and realizes the same input/output behavior as: Here we show a similar construction for multiple-output systems, with some modifications. Let = (A; B; C) s be a discrete-time sign-linear system with state space IR n and p outputs. Perform a change of ; where A 1 (n 1 fi n 1 ) is invertible and A 2 (n 2 fi n 2 ) is nilpotent. If (A; B) is a reachable pair and (A; C) is an observable pair, then is minimal in the sense that any other sign-linear system with the same input/output behavior has dimension at least n. But, if n 1 < n, then det A = 0 and is not observable and hence not canonical. Let us find another system ~ (necessarily not sign-linear) which has the same input/output behavior as , but is canonical. Let i be the relative degree of the ith row of the Markov sequence A, and = minf i : i = 1; : : : ; pg. Let the initial state be x. There is a difference between the case when the smallest relative degree is greater or equal to n 2 and the case when < n 2 . Roughly speaking, when n 2 the outputs of the sign-linear system give us information about sign (Cx), sign (CAx), : : : , sign (CA 1 x), which are the first outputs of the sys tem. After that, we can use the inputs and outputs to learn only about x 1 (the first n 1 components of x). When < n 2 , we may be able to use some controls to learn more about x 2 (the last n 2 components of x) before time n 2 when the nilpotency of A 2 has finally Lemma 2.4 Two states x and z are indistinguishable for if and only if (x) = (z). Proof. In the case n 2 , we have only the equations x 1 = z 1 and the equality of the 's. The first ` output terms for are exactly the terms of . So these equalities are satisfied if and only if the first ` output terms coincide for x and z, for any input. Equality of everything but the first n 1 components is equivalent to the first n 2 output terms coinciding for x and z, since the jth row of the qth output, for initial state x, for example, is either sign (c j A q x) if j > q, or sign (c j A q x + + A j j u q j +1 + ) if j q in which case we may use the control u q j +1 to identify c j A q x (using Remark 3.3 in [1]). I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
630
test
1-hop neighbor's text information: Modeling Invention by Analogy in ACT-R: We investigate some aspects of cognition involved in invention, more precisely in the invention of the telephone by Alexander Graham Bell. We propose the use of the Structure-Behavior-Function (SBF) language for the representation of invention knowledge; we claim that because SBF has been shown to support a wide range of reasoning about physical devices, it constitutes a plausible account of how an inventor might represent knowledge of an invention. We further propose the use of the ACT-R architecture for the implementation of this model. ACT-R has been shown to very precisely model a wide range of human cognition. We draw upon the architecture for execution of productions and matching of declarative knowledge through spreading activation. Thus we present a model which combines the well-established cognitive validity of ACT-R with the powerful, specialized model-based reasoning methods facilitated by SBF. Target text information: Creative design: reasoning and understanding. In Leake, D.B. : This paper investigates memory issues that influence long-term creative problem solving and design activity, taking a case-based reasoning perspective. Our exploration is based on a well-documented example: the invention of the telephone by Alexander Graham Bell. We abstract Bell's reasoning and understanding mechanisms that appear time and again in long-term creative design. We identify that the understanding mechanism is responsible for analogical anticipation of design constraints and analogical evaluation, besides case-based design. But an already understood design can satisfy opportunistically suspended design problems, still active in the background. The new mechanisms are integrated in a computational model, ALEC, that accounts for some creative behavior. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2,669
test
1-hop neighbor's text information: A distributed reinforcement learning scheme for network routing. : In this paper we describe a self-adjusting algorithm for packet routing in which a reinforcement learning method is embedded into each node of a network. Only local information is used at each node to keep accurate statistics on which routing policies lead to minimal routing times. In simple experiments involving a 36-node irregularly-connected network, this learning approach proves superior to routing based on precomputed shortest paths. Target text information: Predictive Q-routing: A memory-based reinforcement learning approach to adaptive traffic control. : In this paper, we propose a memory-based Q-learning algorithm called predictive Q-routing (PQ-routing) for adaptive traffic control. We attempt to address two problems encountered in Q-routing (Boyan & Littman, 1994), namely, the inability to fine-tune routing policies under low network load and the inability to learn new optimal policies under decreasing load conditions. Unlike other memory-based reinforcement learning algorithms in which memory is used to keep past experiences to increase learning speed, PQ-routing keeps the best experiences learned and reuses them by predicting the traffic trend. The effectiveness of PQ-routing has been verified under various network topologies and traffic conditions. Simulation results show that PQ-routing is superior to Q-routing. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
903
test
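The Q-routing baseline that PQ-routing extends is compact enough to sketch: each node x keeps Q_x(d, y), an estimated delivery time to destination d via neighbor y, updated from the chosen neighbor's own best estimate. The ring topology and constants are illustrative; PQ-routing would additionally memorize best-observed estimates and predict their recovery as load drops.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 6, 0.5
ring = {x: [(x - 1) % n, (x + 1) % n] for x in range(n)}   # ring topology
Q = {x: {d: {y: 0.0 for y in ring[x]} for d in range(n)} for x in range(n)}

def route(src, dst):
    x, hops = src, 0
    while x != dst and hops < 50:
        y = min(Q[x][dst], key=Q[x][dst].get)              # greedy neighbor choice
        # Transmission costs 1 step; the neighbor reports its remaining estimate.
        remaining = 0.0 if y == dst else min(Q[y][dst].values())
        Q[x][dst][y] += alpha * (1.0 + remaining - Q[x][dst][y])
        x, hops = y, hops + 1
    return hops

for _ in range(5000):                                      # learn from traffic
    s, d = rng.integers(n, size=2)
    if s != d:
        route(int(s), int(d))

print(route(0, 3))  # converges to the 3-hop shortest path on the ring
```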
1-hop neighbor's text information: Knowledge Discovery in International Conflict Databases: In the last decade research in Machine Learning has developed a variety of powerful tools for inductive learning and data analysis. On the other hand, research in International Relations has developed a variety of different conflict databases that are mostly analyzed with classical statistical methods. As these databases are in general of a symbolic nature, they provide an interesting domain for application of Machine Learning algorithms. This paper gives a short overview of available conflict databases and subsequently concentrates on the application of machine learning methods for the analysis and interpretation of such databases. 1-hop neighbor's text information: A Weighted Nearest Neighbor Algorithm for Learning with Symbolic Features. : In the past, nearest neighbor algorithms for learning from examples have worked best in domains in which all features had numeric values. In such domains, the examples can be treated as points and distance metrics can use standard definitions. In symbolic domains, a more sophisticated treatment of the feature space is required. We introduce a nearest neighbor algorithm for learning in domains with symbolic features. Our algorithm calculates distance tables that allow it to produce real-valued distances between instances, and attaches weights to the instances to further modify the structure of feature space. We show that this technique produces excellent classification accuracy on three problems that have been studied by machine learning researchers: predicting protein secondary structure, identifying DNA promoter sequences, and pronouncing English text. Direct experimental comparisons with the other learning algorithms show that our nearest neighbor algorithm is comparable or superior in all three domains. In addition, our algorithm has advantages in training speed, simplicity, and perspicuity. We conclude that experimental evidence favors the use and continued development of nearest neighbor algorithms for domains such as the ones studied here. Target text information: 'The possible contribution of AI to the avoidance of crises and wars: Using CBR methods with the KOSIMO database of conflicts', : This paper presents the application of Case-Based Reasoning methods to the KOSIMO data base of international conflicts. A Case-Based Reasoning tool, VIE-CBR, has been developed and used for the classification of various outcome variables, like political, military, and territorial outcome, solution modalities, and conflict intensity. In addition, the case retrieval algorithms are presented as an interactive, user-modifiable tool for intelligently searching the conflict data base for precedent cases. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2,607
val
1-hop neighbor's text information: : Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same flexibility as a conventional linear search, but at greatly reduced computational cost. 1-hop neighbor's text information: : Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same flexibility as a conventional linear search, but at greatly reduced computational cost. 1-hop neighbor's text information: Efficient Locally Weighted Polynomial Regression. : Locally weighted polynomial regression (LWPR) is a popular instance-based algorithm for learning continuous non-linear mappings. For more than two or three inputs and for more than a few thousand datapoints the computational expense of predictions is daunting. We discuss drawbacks with previous approaches to dealing with this problem, and present a new algorithm based on a multiresolution search of a quickly-constructible augmented kd-tree. Without needing to rebuild the tree, we can make fast predictions with arbitrary local weighting functions, arbitrary kernel widths and arbitrary queries. The paper begins with a new, faster, algorithm for exact LWPR predictions. Next we introduce an approximation that achieves up to a two-orders-of-magnitude speedup with negligible accuracy losses. 
Increasing a certain approximation parameter achieves greater speedups still, but with a correspondingly larger accuracy degradation. This is nevertheless useful during operations such as the early stages of model selection and locating optima of a fitted surface. We also show how the approximations can permit real-time query-specific optimization of the kernel width. We conclude with a brief discussion of potential extensions for tractable instance-based learning on datasets that are too large to fit in a computer's main memory. Target text information: Bumptrees for Efficient Function, Constraint, and Classification Learning, : A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,713
val
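The instance-based prediction style running through this record is locally weighted regression; here is a minimal version, with a plain linear scan standing in for the kd-tree/bumptree acceleration those papers contribute. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

def lwr_predict(xq, X, y, h=0.4):
    # Gaussian kernel weights around the query, then a weighted least-squares
    # fit of a local affine model: solve (A^T W A) beta = A^T W y.
    w = np.exp(-np.sum((X - xq) ** 2, axis=1) / (2 * h * h))
    A = np.hstack([X, np.ones((len(X), 1))])
    W = A * w[:, None]
    beta = np.linalg.lstsq(W.T @ A, W.T @ y, rcond=None)[0]
    return np.append(xq, 1.0) @ beta

print(lwr_predict(np.array([1.0]), X, y))  # close to sin(1), about 0.84
```

Every prediction rescans all datapoints, which is exactly the cost the multiresolution tree structures in these abstracts are designed to avoid.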
1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : Ourston and Mooney, 1990b ] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990. 1-hop neighbor's text information: LEARNING FOR DECISION MAKING: The FRD Approach and a Comparative Study Machine Learning and Inference Laboratory: This paper concerns the issue of what is the best form for learning, representing and using knowledge for decision making. The proposed answer is that such knowledge should be learned and represented in a declarative form. When needed for decision making, it should be efficiently transferred to a procedural form that is tailored to the specific decision making situation. Such an approach combines advantages of the declarative representation, which facilitates learning and incremental knowledge modification, and the procedural representation, which facilitates the use of knowledge for decision making. This approach also allows one to determine decision structures that may avoid attributes that are unavailable or difficult to measure in any given situation. Experimental investigations of the system, FRD-1, have demonstrated that decision structures obtained via the declarative route often have not only higher predictive accuracy but are also simpler than those learned directly from facts. 1-hop neighbor's text information: The Estimation of Probabilities in Attribute Selection Measures for Decision Structure Induction in Proceeding of the European Summer School on Machine Learning, : In this paper we analyze two well-known measures for attribute selection in decision tree induction, informativity and the gini index. In particular, we are interested in the influence of different methods for estimating probabilities on these two measures. The results of experiments show that different measures, which are obtained by different probability estimation methods, determine the preferential order of attributes in a given node. Therefore, they determine the structure of a constructed decision tree. This feature can be very beneficial, especially in real-world applications where several different trees are often required. Target text information: R.S. and Imam, I.F. On Learning Decision Structures. : A decision structure is an acyclic graph that specifies an order of tests to be applied to an object (or a situation) to arrive at a decision about that object, and serves as a simple and powerful tool for organizing a decision process. This paper proposes a methodology for learning decision structures that are oriented toward specific decision making situations. The methodology consists of two phases: (1) determining and storing declarative rules describing the decision process, and (2) deriving online a decision structure from the rules. The first step is performed by an expert or by an AQ-based inductive learning program that learns decision rules from examples of decisions (AQ15 or AQ17). The second step transforms the decision rules to a decision structure that is most suitable for the given decision making situation. The system, AQDT-2, implementing the second step, has been applied to a problem in construction engineering. In the experiments, AQDT-2 outperformed all other programs applied to the same problem in terms of the accuracy and the simplicity of the generated decision structures. 
Key words: machine learning, inductive learning, decision structures, decision rules, attribute selection. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
1,990
test
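The attribute-selection measures this record keeps returning to (informativity/information gain and the gini index) are short functions of class counts. In the sketch below, probabilities are simple relative frequencies; swapping in another estimator (e.g. Laplace) can reorder attributes, which is the point of the probability-estimation neighbor.

```python
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def gini(counts):
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

def split_score(partition_counts, measure):
    # partition_counts: one class-count vector per value of the attribute.
    total = sum(c.sum() for c in partition_counts)
    parent = np.sum(partition_counts, axis=0)
    children = sum(c.sum() / total * measure(c) for c in partition_counts)
    return measure(parent) - children        # impurity reduction of the split

split = [np.array([30, 5]), np.array([10, 25])]   # counts per attribute value
print(split_score(split, entropy), split_score(split, gini))
```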
1-hop neighbor's text information: A performance analysis of the CNS-1 on large, dense backpropagation networks. : We determine in this study the sustained performance of the CNS-1 during training and evaluation of large multilayered feedforward neural networks. Using a sophisticated coding, the 128-node machine would achieve up to 111 Giga connections per second (GCPS) and 22 Giga connection updates per second (GCUPS). During recall the machine would achieve 87% of the peak multiply-accumulate performance. The training of large nets is less efficient than the recall, but only by a factor of 1.5 to 2. The benchmark is parallelized and the machine code is optimized before analyzing the performance. Starting from an optimal parallel algorithm, CNS-specific optimizations still reduce the run time by a factor of 4 for recall and by a factor of 3 for training. Our analysis also yields some strategies for code optimization. The CNS-1 is still in design, and therefore we have to model the run time behavior of the memory system and the interconnection network. This gives us the option of changing some parameters of the CNS-1 system in order to analyze their performance impact. 1-hop neighbor's text information: A performance analysis of CNS on sparse connectionist networks. : This report deals with the efficient mapping of sparse neural networks on CNS-1. We develop parallel vector code for an idealized sparse network and determine its performance under three memory systems. We use the code to evaluate the memory systems (one of which will be implemented in the prototype), and to pinpoint bottlenecks in the current CNS-1 design. Target text information: All-to-all Broadcast on the CNS-1. : This study deals with the all-to-all broadcast on the CNS-1. We determine a lower bound for the run time and present an algorithm meeting this bound. Since this study points out a bottleneck in the network interface, we also analyze the performance of alternative interface designs. Our analyses are based on a run time model of the network. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,484
test
1-hop neighbor's text information: : Eugenic Evolution for Combinatorial Optimization. John William Prior, Report AI98-268, May 1998. 1-hop neighbor's text information: An overview of evolutionary computation. Euro. : Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computer-based problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research. 1-hop neighbor's text information: Back. An evolutionary heuristic for the minimum vertex cover problem. : Target text information: An evolutionary approach to combinatorial optimization problems. : The paper reports on the application of genetic algorithms, probabilistic search algorithms based on the model of organic evolution, to NP-complete combinatorial optimization problems. In particular, the subset sum, maximum cut, and minimum tardy task problems are considered. Except for the fitness function, no problem-specific changes of the genetic algorithm are required in order to achieve results of high quality even for the problem instances of size 100 used in the paper. For constrained problems, such as the subset sum and the minimum tardy task, the constraints are taken into account by incorporating a graded penalty term into the fitness function. Even for large instances of these highly multimodal optimization problems, an iterated application of the genetic algorithm is observed to find the global optimum within a number of runs. As the genetic algorithm samples only a tiny fraction of the search space, these results are quite encouraging. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,927
val
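The graded-penalty fitness the target abstract describes can be illustrated on subset sum: infeasible subsets are scored by how far they overshoot the target instead of being discarded. The penalty slope and GA settings below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.integers(1, 100, size=30)
target = int(weights.sum() * 0.4)

def fitness(mask):
    s = int(weights[mask].sum())
    if s <= target:
        return s                      # feasible: larger sums are better
    return target - 2 * (s - target)  # infeasible: graded penalty, not rejection

pop = rng.random((50, 30)) < 0.5
for gen in range(200):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-20:]]           # truncation selection
    children = parents[rng.integers(20, size=50)].copy()
    children ^= rng.random((50, 30)) < 0.02           # mutation-only, for brevity
    pop = children

print(target, max(fitness(m) for m in pop))           # best approaches the target
```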
1-hop neighbor's text information: A survey of intron research in genetics. : A brief survey of biological research on non-coding DNA is presented here. There has been growing interest in the effects of non-coding segments in evolutionary algorithms (EAs). To better understand and conduct research on non-coding segments and EAs, it is important to understand the biological background of such work. This paper begins with a review of basic genetics and terminology, describes the different types of non-coding DNA, and then surveys recent intron research. 1-hop neighbor's text information: The royal road for genetic algorithms: fitness landscapes and genetic algorithm performance. : Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 1-hop neighbor's text information: Testing the Robustness of the Genetic Algorithm on the Floating Building Block Representation.: Recent studies on a floating building block representation for the genetic algorithm (GA) suggest that there are many advantages to using the floating representation. This paper investigates the behavior of the GA on floating representation problems in response to three different types of pressures: (1) a reduction in the amount of genetic material available to the GA during the problem solving process, (2) functions which have negative-valued building blocks, and (3) randomizing non-coding segments. Results indicate that the GA's performance on floating representation problems is very robust. Significant reductions in genetic material (genome length) may be made with relatively small decrease in performance. The GA can effectively solve problems with negative building blocks. Randomizing non-coding segments appears to improve rather than harm GA performance. Target text information: A comparison of the fixed and floating building block representation in the genetic algorithm. : This article compares the traditional, fixed problem representation style of a genetic algorithm (GA) with a new floating representation in which the building blocks of a problem are not fixed at specific locations on the individuals of the population. In addition, the effects of non-coding segments on both of these representations is studied. Non-coding segments are a computational model of non-coding DNA and floating building blocks mimic the location independence of genes. The fact that these structures are prevalent in natural genetic systems suggests that they may provide some advantages to the evolutionary process. Our results show that there is a significant difference in how GAs solve a problem in the fixed and floating representations. GAs are able to maintain a more diverse population with the floating representation. 
The combination of non-coding segments and floating building blocks appears to encourage a GA to take advantage of its parallel search and recombination abilities. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,413
test
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: An overview of genetic algorithms: Part 1, fundamentals. : 1-hop neighbor's text information: Simple Subpopulation Schemes: This paper considers a new method for maintaining diversity by creating subpopulations in a standard generational evolutionary algorithm. Unlike other methods, it replaces the concept of distance between individuals with tag bits that identify the subpopulation to which an individual belongs. Two variations of this method are presented, illustrating the feasibility of this approach. Target text information: A sequential niche technique for multimodal function optimization. : © UWCC COMMA Technical Report No. 93001, February 1993. No part of this article may be reproduced for commercial purposes. Abstract: A technique is described which allows unimodal function optimization methods to be extended to efficiently locate all optima of multimodal problems. We describe an algorithm based on a traditional genetic algorithm (GA). This involves iterating the GA, but uses knowledge gained during one iteration to avoid re-searching, on subsequent iterations, regions of problem space where solutions have already been found. This is achieved by applying a fitness derating function to the raw fitness function, so that fitness values are depressed in the regions of the problem space where solutions have already been found. Consequently, the likelihood of discovering a new solution on each iteration is dramatically increased. The technique may be used with various styles of GA, or with other optimization methods, such as simulated annealing. The effectiveness of the algorithm is demonstrated on a number of multimodal test functions. The technique is at least as fast as fitness sharing methods. It provides a speedup of between 1 and 10p on a problem with p optima, depending on the value of p and the convergence time complexity. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,870
test
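To make the fitness-derating idea in the sequential-niche record above concrete, here is a minimal sketch: fitness is multiplicatively depressed near solutions found on earlier iterations, so the next GA run is drawn toward unexplored optima. The power-law derating form, radius, and test function are illustrative choices, not the exact ones from the paper.

```python
import math

def derated_fitness(raw_fitness, x, found_optima, radius=0.1, alpha=2.0):
    f = raw_fitness(x)
    for s in found_optima:
        d = abs(x - s)
        if d < radius:
            f *= (d / radius) ** alpha  # depress fitness near known optima
    return f

raw = lambda x: math.sin(5 * math.pi * x) ** 6  # multimodal test function
found = [0.1]  # optimum located on a previous GA iteration
print(raw(0.1), derated_fitness(raw, 0.1, found))  # 1.0 vs 0.0 at a found peak
print(raw(0.3), derated_fitness(raw, 0.3, found))  # unchanged far from it
```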
1-hop neighbor's text information: "Concept learning and Heuristic Classification in Weak-Theory Domains," : 1-hop neighbor's text information: Reasoning with portions of precedents. : This paper argues that the task of matching in case-based reasoning can often be improved by comparing new cases to portions of precedents. An example is presented that illustrates how combining portions of multiple precedents can permit new cases to be resolved that would be indeterminate if new cases could only be compared to entire precedents. A system that uses of portions of precedents for legal analysis in the domain of Texas worker's compensation law, GREBE, is described, and examples of GREBE's analysis that combine reasoning steps from multiple precedents are presented. Target text information: Four Challenges for a Computational Model of Legal Precedent: Identifying the open research issues in a field is a necessary step for progress in that field. This paper describes four open research problems in computational models of precedent-based legal reasoning: relating case representation to precedent use; modeling the selection and construction of both arguments based on pairwise case comparison and multiple-precedent arguments; modeling the process whereby purposes, policies, and principles are used in case similarity assessment; and extending the applicability of precedents to tasks other than classification. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
300
test
1-hop neighbor's text information: "Evolving non-trivial behaviors on real robots: a garbage collecting robot", : Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear if this approach may be adequate to face real life problems. In this paper we show how control systems that perform a nontrivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot. 1-hop neighbor's text information: Analysis of neurocon-trollers designed by simulated evolution. : Randomized adaptive greedy search, using evolutionary algorithms, offers a powerful and versatile approach to the automated design of neural network architectures for a variety of tasks in artificial intelligence and robotics. In this paper we present results from the evolutionary design of a neuro-controller for a robotic bulldozer. This robot is given the task of clearing an arena littered with boxes by pushing boxes to the sides. Through a careful analysis of the evolved networks we show how evolution exploits the design constraints and properties of the environment to produce network structures of high fitness. We conclude with a brief summary of related ongoing research examining the intricate interplay between environment and evolutionary processes in determining the structure and function of the resulting neural architectures. 1-hop neighbor's text information: Har-vey (1993) Evolving Visually Guided Robots. : A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. Target text information: Cliff (1993). "Issues in evolutionary robotics," From Animals to Animats 2 (Ed. : A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,809
val
1-hop neighbor's text information: Power System Security Margin Prediction Using Radial Basis Function Networks: Dr. McCalley's research is partially supported through grants from the National Science Foundation and Pacific Gas and Electric Company. Dr. Honavar's research is partially supported through grants from the National Science Foundation and the John Deere Foundation. This paper will appear in: Proceedings of the 29th Annual North American Power Symposium, Oct. 13-14, 1997, Laramie, Wyoming. Target text information: Feature subset selection using a genetic algorithm. : Many practical pattern classification applications require a careful selection of attributes or features (from a much larger set) to represent the patterns to be classified. This feature subset selection problem is a multi-criterion optimization problem. We propose a solution to this problem using a genetic algorithm. Our experiments demonstrate the feasibility of this approach for feature subset selection in the automated design of neural network pattern classifiers. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
908
test
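The GA-based feature subset selection described in the record above is a wrapper method: a bitmask over features is evolved, scored by the accuracy of a classifier trained on only the selected features. The sketch below is a minimal illustration that wraps a leave-one-out 1-nearest-neighbour classifier; the population size, rates, and classifier choice are assumptions for illustration, not the paper's settings.

```python
import random

def loo_accuracy(X, y, mask):
    """Leave-one-out accuracy of 1-NN restricted to the masked features."""
    feats = [j for j, b in enumerate(mask) if b]
    if not feats:
        return 0.0
    def dist(a, b):
        return sum((a[j] - b[j]) ** 2 for j in feats)
    hits = 0
    for i in range(len(X)):
        nearest = min((k for k in range(len(X)) if k != i),
                      key=lambda k: dist(X[i], X[k]))
        hits += y[nearest] == y[i]
    return hits / len(X)

def evolve(X, y, n_feats, pop=20, gens=30, p_mut=0.05):
    popn = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda m: loo_accuracy(X, y, m), reverse=True)
        elite = popn[: pop // 2]                       # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_feats)          # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g
                             for g in child])           # bit-flip mutation
        popn = elite + children
    return max(popn, key=lambda m: loo_accuracy(X, y, m))

# usage (illustrative): best_mask = evolve(X, y, n_feats=len(X[0]))
```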
1-hop neighbor's text information: Locally Connected Recurrent Networks: Lai-Wan CHAN and Evan Fung-Yu YOUNG Computer Science Department, The Chinese University of Hong Kong New Territories, Hong Kong Email : [email protected] Technical Report : CS-TR-95-10 Abstract: The fully connected recurrent network (FRN) using the on-line training method, Real Time Recurrent Learning (RTRL), is computationally expensive. It has a computational complexity of O(N^4) and storage complexity of O(N^3), where N is the number of non-input units. We have devised a locally connected recurrent model which has a much lower complexity in both computational time and storage space. The ring-structure recurrent network (RRN), the simplest kind of locally connected network, has corresponding complexities of O(mn+np) and O(np) respectively, where p, n and m are the number of input, hidden and output units respectively. We compare the performance between RRN and FRN in sequence recognition and time series prediction. We tested the networks' temporal memorizing power and time warping ability in the sequence recognition task. In the time series prediction task, we used both networks to train and predict three series: a periodic series with white noise, a deterministic chaotic series and the sunspots data. Both tasks show that RRN needs a much shorter training time and the performance of RRN is comparable to that of FRN. 1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. : 1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feedforward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. Target text information: A local learning algorithm for dynamic feedforward and recurrent networks. : Most known learning algorithms for dynamic neural networks in non-stationary environments need global computations to perform credit assignment. These algorithms either are not local in time or not local in space. Those algorithms which are local in both time and space usually cannot deal sensibly with `hidden units'. In contrast, as far as we can judge by now, learning rules in biological systems with many `hidden units' are local in both space and time. In this paper we propose a parallel on-line learning algorithm which performs local computations only, yet still is designed to deal with hidden units and with units whose past activations are `hidden in time'. The approach is inspired by Holland's idea of the bucket brigade for classifier systems, which is transformed to run on a neural network with fixed topology.
The result is a feedforward or recurrent `neural' dissipative system which consumes `weight-substance' and permanently tries to distribute this substance onto its connections in an appropriate way. Simple experiments demonstrating the feasibility of the algorithm are reported. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,128
test
1-hop neighbor's text information: Blue. Optimal decision trees. : Key ideas from statistical learning theory and support vector machines are generalized to decision trees. A support vector machine is used for each decision in the tree. The "optimal" decision tree is characterized, and both a primal and dual space formulation for constructing the tree are proposed. The result is a method for generating logically simple decision trees with multivariate linear or nonlinear decisions. The preliminary results indicate that the method produces simple trees that generalize well with respect to other decision tree algorithms and single support vector machines. 1-hop neighbor's text information: Mathematical programming in data mining. Data Mining and Knowledge Discovery, : Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained model degradation when applied to new problems. A novel approach is proposed that purposely tolerates a small error in the training process in order to avoid overfitting data that may contain errors. Examples of applications of these concepts are given. 1-hop neighbor's text information: Street. Feature selection via mathematical programming. : The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible, is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid or by a concave exponential on the nonnegative real line, or it can be treated exactly by considering the equivalent linear program with equilibrium constraints (LPEC). Computational tests of these three approaches on publicly available real-world databases have been carried out and compared with an adaptation of the optimal brain damage (OBD) method for reducing neural network complexity. One feature selection algorithm via concave minimization (FSV) reduced cross-validation error on a cancer prognosis database by 35.4% while reducing problem features from 32 to 4. Feature selection is an important problem in machine learning [18, 15, 16, 17, 33]. In its basic form the problem consists of eliminating as many of the features in a given problem as possible, while still carrying out a preassigned task with acceptable accuracy. Having a minimal number of features often leads to better generalization and simpler models that can be more easily interpreted. 
In the present work, our task is to discriminate between two given sets in an n-dimensional feature space by using as few of the given features as possible. We shall formulate this problem as a mathematical program with a parametric objective function that will attempt to achieve this task by generating a separating plane in a feature space of as small a dimension as possible while minimizing the average distance of misclassified points to the plane. One of the computational experiments that we carried out on our feature selection procedure showed its effectiveness, not only in minimizing the number of features selected, but also in quickly recognizing and removing spurious random features that were introduced. Thus, on the Wisconsin Prognosis Breast Cancer WPBC database [36] with a feature space of 32 dimensions and 6 random features added, one of our algorithms FSV (11) immediately removed the 6 random features as well as 28 of the original features resulting in a separating plane in a 4-dimensional reduced feature space. By using tenfold cross-validation [35], separation error in the 4-dimensional space was reduced 35.4% from the corresponding error in the original problem space. (See Section 3 for details.) We note that mathematical programming approaches to the feature selection problem have been recently proposed in [4, 22]. Even though the approach of [4] is based on an LPEC formulation, both the LPEC and its method of solution are different from the ones used here. The polyhedral concave minimization approach of [22] is principally involved with theoretical considerations of one specific algorithm and no cross-validatory results are given. Other effective computational applications of mathematical programming to neural networks are given in [30, 26]. Target text information: Parsimonious least norm approximation. : A theoretically justifiable fast finite successive linear approximation algorithm is proposed for obtaining a parsimonious solution to a corrupted linear system Ax = b + p, where the corruption p is due to noise or error in measurement. The proposed linear-programming-based algorithm finds a solution x by parametrically minimizing the number of nonzero elements in x and the error ||Ax - b - p||_1. Numerical tests on a signal-processing-based example indicate that the proposed method is comparable to a method that parametrically minimizes the 1-norm of the solution x and the error ||Ax - b - p||_1, and that both methods are superior, by orders of magnitude, to solutions obtained by least squares as well as by combinatorially choosing an optimal solution with a specific number of nonzero elements. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
1,538
val
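For readers who want to see the kind of linear-programming reformulation these abstracts rely on, the sketch below minimizes the 1-norm residual ||Ax - b||_1 with scipy.optimize.linprog by introducing auxiliary variables t >= |Ax - b| componentwise. The parametric term that additionally suppresses nonzero elements of x is omitted, so this is only the core subproblem, not the paper's full parsimonious algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def least_1norm(A, b):
    """Solve min_x ||Ax - b||_1 as an LP over the stacked variables [x; t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])       # minimize sum(t)
    # Encode  Ax - b <= t  and  -(Ax - b) <= t  as A_ub [x; t] <= b_ub
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    return res.x[:n]

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.5])
print(least_1norm(A, b))
```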
1-hop neighbor's text information: Supporting flexibility: a case-based reasoning approach. : The AAAI Fall Symposium; Flexible Computation in Intelligent Systems: Results, Issues, and Opportunities. Nov. 9-11, 1996, Cambridge, MA. Abstract: This paper presents a case-based reasoning system, TA3. We address the flexibility of the case-based reasoning process, namely flexible retrieval of relevant experiences, by using a novel similarity assessment theory. To exemplify the advantages of such an approach, we have experimentally evaluated the system and compared its performance to the performance of a non-flexible version of TA3 and to other machine learning algorithms on several domains. 1-hop neighbor's text information: Context-based similarity applied to retrieval of relevant cases. : Retrieving relevant cases is a crucial component of case-based reasoning systems. The task is to use a user-defined query to retrieve useful information, i.e., exact matches or partial matches which are close to the query-defined request according to certain measures. The difficulty stems from the fact that it may not be easy (or it may even be impossible) to specify query requests precisely and completely, resulting in a situation known as fuzzy querying. It is usually not a problem for small domains, but for large repositories which store various information (multifunctional information bases or federated databases), request specification becomes a bottleneck. Thus, a flexible retrieval algorithm is required, allowing for imprecise query specification and for changing the viewpoint. Efficient database techniques exist for locating exact matches. Finding relevant partial matches might be a problem. This document proposes context-based similarity as a basis for flexible retrieval. Historical background on research in similarity assessment is presented and is used as motivation for a formal definition of context-based similarity. We also describe a similarity-based retrieval system for multifunctional information bases. Target text information: Applying case-based reasoning to control in robotics. : The proposed architecture is experimentally evaluated on two real world domains and the results are compared to other machine learning algorithms applied to the same problem. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,355
test
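A minimal sketch of the context-sensitive retrieval idea in the record above: similarity is assessed only over the attributes that the current context marks as relevant, so changing the context changes which stored cases look close to the query. The case base, attributes, and matching rule here are invented for illustration.

```python
def similarity(query, case, context):
    """Fraction of context-relevant attributes on which query and case agree."""
    relevant = [a for a in context if a in query and a in case]
    if not relevant:
        return 0.0
    return sum(query[a] == case[a] for a in relevant) / len(relevant)

def retrieve(query, case_base, context, k=1):
    return sorted(case_base, key=lambda c: similarity(query, c, context),
                  reverse=True)[:k]

cases = [{"shape": "round", "color": "red", "size": "big"},
         {"shape": "square", "color": "red", "size": "small"}]
query = {"shape": "round", "color": "red"}
print(retrieve(query, cases, context=["shape"]))  # only shape matters here
print(retrieve(query, cases, context=["color"]))  # both cases match on color
```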
1-hop neighbor's text information: On the logic of iterated belief revision. : We show in this paper that the AGM postulates are too weak to ensure the rational preservation of conditional beliefs during belief revision, thus permitting improper responses to sequences of observations. We remedy this weakness by proposing four additional postulates, which are sound relative to a qualitative version of probabilistic conditioning. Contrary to the AGM framework, the proposed postulates characterize belief revision as a process which may depend on elements of an epistemic state that are not necessarily captured by a belief set. We also show that a simple modification to the AGM framework can allow belief revision to be a function of epistemic states. We establish a model-based representation theorem which characterizes the proposed postulates and constrains, in turn, the way in which entrenchment orderings may be transformed under iterated belief revision. Target text information: Qualitative probabilities for default reasoning, belief revision, and causal modeling. : This paper presents recent developments toward a formalism that combines useful properties of both logic and probabilities. Like logic, the formalism admits qualitative sentences and provides symbolic machinery for deriving deductively closed beliefs and, like probability, it permits us to express if-then rules with different levels of firmness and to retract beliefs in response to changing observations. Rules are interpreted as order-of-magnitude approximations of conditional probabilities which impose constraints over the rankings of worlds. Inferences are supported by a unique priority ordering on rules which is syntactically derived from the knowledge base. This ordering accounts for rule interactions, respects specificity considerations and facilitates the construction of coherent states of beliefs. Practical algorithms are developed and analyzed for testing consistency, computing rule ordering, and answering queries. Imprecise observations are incorporated using qualitative versions of Jeffrey's Rule and Bayesian updating, with the result that coherent belief revision is embodied naturally and tractably. Finally, causal rules are interpreted as imposing Markovian conditions that further constrain world rankings to reflect the modularity of causal organizations. These constraints are shown to facilitate reasoning about causal projections, explanations, actions and change. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,010
val
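The "rankings of worlds" mentioned in the target abstract above can be made concrete with Spohn-style ranking functions, an order-of-magnitude analogue of probabilities in which each world gets a nonnegative surprise rank and conditioning shifts ranks rather than renormalizing. The sketch below is only a toy illustration of that semantics, not the paper's formalism; the worlds and ranks are made up.

```python
def rank(worlds, prop):
    """Rank of a proposition = rank of its least surprising world."""
    vals = [r for w, r in worlds.items() if prop(w)]
    return min(vals) if vals else float("inf")

def condition(worlds, evidence):
    """Shift ranks so the least surprising evidence-world has rank 0."""
    shift = rank(worlds, evidence)
    return {w: r - shift for w, r in worlds.items() if evidence(w)}

# Worlds over (kind, mobility): a grounded bird is more surprising
# than a flying one, encoding the default "birds fly".
worlds = {("bird", "flies"): 0, ("bird", "grounded"): 1,
          ("nonbird", "grounded"): 0, ("nonbird", "flies"): 2}
birds = condition(worlds, lambda w: w[0] == "bird")
print(rank(birds, lambda w: w[1] == "flies"))     # 0: believed by default
print(rank(birds, lambda w: w[1] == "grounded"))  # 1: exceptional
```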
1-hop neighbor's text information: Learning polynomials with queries: The highly noisy case. : Given a function f mapping n-variate inputs from a finite field F into F, we consider the task of reconstructing a list of all n-variate degree d polynomials which agree with f on a tiny but non-negligible fraction, δ, of the input space. We give a randomized algorithm for solving this task which accesses f as a black box and runs in time polynomial in 1/δ and exponential in d, provided δ = Ω(√(d/|F|)). For the special case when d = 1, we solve this problem for all ε := δ − 1/|F| > 0. In this case the running time of our algorithm is bounded by a polynomial in 1/ε, n and exponential in d. Our algorithm generalizes a previously known algorithm, due to Goldreich and Levin, that solves this task for the case d = 1 and F = GF(2). 1-hop neighbor's text information: PAC-learning PROLOG clauses with or without errors. : In a nutshell we can describe a generic ILP problem as follows: given a set E of (positive and negative) examples of a target predicate, and some background knowledge B about the world (usually a logic program including facts and auxiliary predicates), the task is to find a logic program H (our hypothesis) such that all positive examples can be deduced from B and H, while no negative example can. In this paper we review some of the results achieved in this area and discuss the techniques used. Moreover we prove the following new results: * Predicates described by non-recursive, local clauses of at most k literals are PAC-learnable under any distribution. This generalizes a previous result that was valid only for constrained clauses. * Predicates that are described by k non-recursive local clauses are PAC-learnable under any distribution. This generalizes a previous result that was nonconstructive and valid only under some class of distributions. Finally we introduce what we believe is the first theoretical framework for learning Prolog clauses in the presence of errors. To this purpose we introduce a new noise model, that we call the fixed attribute noise model, for learning propositional concepts over the Boolean domain. This new noise model can be of independent interest. 1-hop neighbor's text information: Learning in hybrid noise environments using statistical queries. : We consider formal models of learning from noisy data. Specifically, we focus on learning in the probably approximately correct model as defined by Valiant. Two of the most widely studied models of noise in this setting have been classification noise and malicious errors. However, a more realistic model combining the two types of noise has not been formalized. We define a learning environment based on a natural combination of these two noise models. We first show that hypothesis testing is possible in this model. We next describe a simple technique for learning in this model, and then describe a more powerful technique based on statistical query learning. We show that the noise tolerance of this improved technique is roughly optimal with respect to the desired learning accuracy and that it provides a smooth tradeoff between the tolerable amounts of the two types of noise. Finally, we show that statistical query simulation yields learning algorithms for other combinations of noise models, thus demonstrating that statistical query specification truly captures the generic fault tolerance of a learning algorithm. An important goal of research in machine learning is to determine which tasks can be automated, and for those which can, to determine their information and computation requirements.
One way to answer these questions is through the development and investigation of formal models of machine learning which capture the task of learning under plausible assumptions. In this work, we consider the formal model of learning from examples called "probably approximately correct" (PAC) learning as defined by Valiant [Val84]. In this setting, a learner attempts to approximate an unknown target concept simply by viewing positive and negative examples of the concept. An adversary chooses, from some specified function class, a hidden {0,1}-valued target function defined over some specified domain of examples and chooses a probability distribution over this domain. The goal of the learner is to output, in polynomial time and with high probability, a hypothesis which is "close" to the target function with respect to the distribution of examples. The learner gains information about the target function and distribution by interacting with an example oracle. At each request by the learner, this oracle draws an example randomly according to the hidden distribution, labels it according to the hidden target function, and returns the labelled example to the learner. A class of functions F is said to be PAC learnable if there exists a learner that achieves this goal for every target function in F and every distribution over the domain. Target text information: Learning in the presence of malicious errors, : In this paper we study an extension of the distribution-free model of learning introduced by Valiant [23] (also known as the probably approximately correct or PAC model) that allows the presence of malicious errors in the examples given to a learning algorithm. Such errors are generated by an adversary with unbounded computational power and access to the entire history of the learning algorithm's computation. Thus, we study a worst-case model of errors. Our results include general methods for bounding the rate of error tolerable by any learning algorithm, efficient algorithms tolerating nontrivial rates of malicious errors, and equivalences between problems of learning with errors and standard combinatorial optimization problems. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
13
test
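Two statements from the PAC record above can be written out as formulas. The first is the standard PAC success criterion; the second is the Kearns-Li upper bound on the malicious error rate tolerable by any learner, stated here from memory of the literature and hedged accordingly.

```latex
% Standard PAC criterion: with probability at least 1 - \delta over a
% sample S of m examples drawn from D, the learner's hypothesis h_S has
% error at most \varepsilon under D:
\Pr_{S \sim D^m}\!\left[\, \operatorname{err}_D(h_S) \le \varepsilon \,\right] \;\ge\; 1 - \delta
% Kearns--Li bound (as usually stated): no learner targeting accuracy
% \varepsilon can tolerate a malicious error rate \beta with
\beta \;\ge\; \frac{\varepsilon}{1 + \varepsilon}
```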
1-hop neighbor's text information: A theory of inferred causation. : This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data, the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation. 1-hop neighbor's text information: Probabilistic evaluation of counterfactual queries. : To appear in the Twelfth National Conference on Artificial Intelligence (AAAI-94), Seattle, WA, July 31 - August 4, 1994. Technical Report R-213-A, April 1994. Abstract: Evaluation of counterfactual queries (e.g., "If A were true, would C have been true?") is important to fault diagnosis, planning, and determination of liability. We present a formalism that uses probabilistic causal networks to evaluate one's belief that the counterfactual consequent, C, would have been true if the antecedent, A, were true. The antecedent of the query is interpreted as an external action that forces the proposition A to be true, which is consistent with Lewis' Miraculous Analysis. This formalism offers a concrete embodiment of the "closest world" approach which (1) properly reflects common understanding of causal influences, (2) deals with the uncertainties inherent in the world, and (3) is amenable to machine representation. 1-hop neighbor's text information: "Aspects of Graphical Models Connected With Causality," : This paper demonstrates the use of graphs as a mathematical tool for expressing independencies, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the Markovian account of causation and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. Target text information: : In the Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI-94), Seattle, WA, 46-54, July 29-31, 1994. 
Technical Report R-213-B, April 1994. Abstract: Evaluation of counterfactual queries (e.g., "If A were true, would C have been true?") is important to fault diagnosis, planning, and determination of liability. In this paper we present methods for computing the probabilities of such queries using the formulation proposed in [Balke and Pearl, 1994], where the antecedent of the query is interpreted as an external action that forces the proposition A to be true. When a prior probability is available on the causal mechanisms governing the domain, counterfactual probabilities can be evaluated precisely. However, when causal knowledge is specified as conditional probabilities on the observables, only bounds can be computed. This paper develops techniques for evaluating these bounds, and demonstrates their use in two applications: (1) the determination of treatment efficacy from studies in which subjects may choose their own treatment, and (2) the determination of liability in product-safety litigation. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
873
val
1-hop neighbor's text information: Learning Visual Schemas in Neural Networks for Object Recognition and Scene Analysis, : VISOR is a large connectionist system that shows how visual schemas can be learned, represented, and used through mechanisms natural to neural networks. Processing in VISOR is based on cooperation, competition, and parallel bottom-up and top-down activation of schema representations. Simulations show that VISOR is robust against noise and variations in the inputs and parameters. It can indicate the confidence of its analysis, pay attention to important minor differences, and use context to recognize ambiguous objects. Experiments also suggest that the representation and learning are stable, and its behavior is consistent with human processes such as priming, perceptual reversal, and circular reaction in learning. The schema mechanisms of VISOR can serve as a starting point for building robust high-level vision systems, and perhaps for schema-based motor control and natural language processing systems as well. 1-hop neighbor's text information: Priming, perceptual reversal, and circular reaction in a neural network model of schema-based vision, : VISOR is a neural network system for object recognition and scene analysis that learns visual schemas from examples. Processing in VISOR is based on cooperation, competition, and parallel bottom-up and top-down activation of schema representations. Similar principles appear to underlie much of human visual processing, and VISOR can therefore be used to model various perceptual phenomena. This paper focuses on analyzing three phenomena through simulation with VISOR: (1) priming and mental imagery, (2) perceptual reversal, and (3) circular reaction. The results illustrate similarity and subtle differences between the mechanisms mediating priming and mental imagery, show how the two opposing accounts of perceptual reversal (neural satiation and cognitive factors) may both contribute to the phenomenon, and demonstrate how intentional actions can be gradually learned from reflex actions. Successful simulation of such effects suggests that similar mechanisms may govern human visual perception and learning of visual schemas. Target text information: VISOR: Schema-based scene analysis with structured neural networks. : A novel approach to object recognition and scene analysis based on neural network representation of visual schemas is described. Given an input scene, the VISOR system focuses attention successively at each component, and the schema representations cooperate and compete to match the inputs. The schema hierarchy is learned from examples through unsupervised adaptation and reinforcement learning. VISOR learns that some objects are more important than others in identifying the scene, and that the importance of spatial relations varies depending on the scene. As the inputs differ increasingly from the schemas, VISOR's recognition process is remarkably robust, and automatically generates a measure of confidence in the analysis. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,360
val
1-hop neighbor's text information: A Generalizing Adaptive Discriminant Network: This paper overviews the AA1 (Adaptive Algorithm 1) model of the ASOCS (Adaptive Self-Organizing Concurrent Systems) approach. It also presents promising empirical generalization results of AA1 with actual data. AA1 is a topologically dynamic network which grows to fit the problem being learned. AA1 generalizes in a self-organizing fashion to a network which seeks to find features which discriminate between concepts. Convergence to a training set is both guaranteed and bounded linearly in time. 1-hop neighbor's text information: "Acquiring Recursive Concepts with Explanation-Based Learning," : University of Wisconsin Computer Sciences Technical Report 876 (September 1989). Abstract: In explanation-based learning, a specific problem's solution is generalized into a form that can be later used to solve conceptually similar problems. Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the graphical structure of the explanation itself. However, this precludes the acquisition of concepts where an iterative or recursive process is implicitly represented in the explanation by a fixed number of applications. This paper presents an algorithm that generalizes explanation structures and reports empirical results that demonstrate the value of acquiring recursive and iterative concepts. The BAGGER2 algorithm learns recursive and iterative concepts, integrates results from multiple examples, and extracts useful subconcepts during generalization. On problems where learning a recursive rule is not appropriate, the system produces the same result as standard explanation-based methods. Applying the learned recursive rules only requires a minor extension to a PROLOG-like problem solver, namely, the ability to explicitly call a specific rule. Empirical studies demonstrate that generalizing the structure of explanations helps avoid the recently reported negative effects of learning. 1-hop neighbor's text information: Generalization by Controlled Expansion of Examples.: SG (Specific to General) is a network for supervised inductive learning from examples that uses ideas from neural networks and symbolic inductive learning to gain benefits of both methods. The network is built of many simple nodes that learn important features in the input space and then monitor the ability of the features to predict output values. The network avoids the exponential nature of the number of features by creating specific features for each example and then expanding those features, making them more general. Expansion of a feature terminates when it encounters another feature with contradicting outputs. Empirical evaluation of the model on real-world data has shown that the network provides good generalization performance. Convergence is accomplished within a small number of training passes. The network provides these benefits while automatically allocating and deleting nodes and without requiring user adjustment of any parameters. The network learns incrementally and operates in a parallel fashion. This paper describes a network architecture for supervised learning that combines techniques used in neural networks [1,7,8] with symbolic machine learning [3,4,6] to gain advantages of both approaches. In supervised learning the network is given a training set containing examples. 
Each example gives an input pattern along with the corresponding output that the network should produce when presented with the input. The task of the network is not only to converge to a representation that contains the information given by the training set, but to generalize that information so that the network will respond well to inputs that it has not been trained on. One approach to generalization is to look for important features in the input space. A feature is some subset of network inputs along with their associated values. A feature is matched when the values on the network inputs that are part of the feature are equal to the values for those inputs as given in the feature. Inputs that are not part of the feature can be any value. A feature that predicts an output with high probability is an important feature. The number of inputs contained in a feature is the order of the feature and determines the generality of the feature. A feature with few inputs is a general feature, while a feature with many inputs is a specific feature. It is impractical to monitor all possible input features because the number of features is exponential in the number of inputs. This paper proposes SG (Specific to General), a network that creates specific input features and then generalizes those features. One way SG generalizes is by combining similar specific features. If two features are similar, they are close to each other in the input space. Combining the two features by dropping inputs that are not common between the features creates a new feature that encompasses both of the original features. The new feature is general; it matches points in the input space that have not been defined by any example. This section presents an overview of the model while later sections provide detail about the system. The network is made up of many simple nodes. Each node contains the input feature that it monitors. During training, the node gathers statistics giving the discrete conditional probability of each possible output value given the input feature. Target text information: Eclectic Machine Learning: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
808
test
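The specific-to-general expansion described in the SG record above has a simple core: two specific features (partial input-value assignments) with the same output are merged by dropping the inputs on which they disagree, yielding a more general, lower-order feature. A minimal sketch with made-up feature dictionaries; the termination check against contradicting features is omitted here:

```python
def generalize(f1, f2):
    """Keep only the input-value pairs the two features share."""
    return {i: v for i, v in f1.items() if f2.get(i) == v}

a = {"x0": 1, "x1": 0, "x2": 1}   # specific feature from one example
b = {"x0": 1, "x1": 1, "x2": 1}   # specific feature from another
print(generalize(a, b))           # {'x0': 1, 'x2': 1}: order drops from 3 to 2
```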
1-hop neighbor's text information: Mistake-driven learning in text categorization. : Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature - text categorization. We argue that these algorithms which categorize documents by learning a linear separator in the feature space have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set. 1-hop neighbor's text information: Active learning with committees for text categorization. : In many real-world domains, supervised learning requires a large number of training examples. In this paper, we describe an active learning method that uses a committee of learners to reduce the number of training examples required for learning. Our approach is similar to the Query by Committee framework, where disagreement among the committee members on the predicted label for the input part of the example is used to signal the need for knowing the actual value of the label. Our experiments are conducted in the text categorization domain, which is characterized by a large number of features, many of which are irrelevant. We report here on experiments using a committee of Winnow-based learners and demonstrate that this approach can reduce the number of labeled training examples required over that used by a single Winnow learner by 1-2 orders of magnitude. 1-hop neighbor's text information: Learning Distributions from Random Walks: We introduce a new model of distributions generated by random walks on graphs. This model suggests a variety of learning problems, using the definitions and models of distribution learning defined in [6]. Our framework is general enough to model previously studied distribution learning problems, as well as to suggest new applications. We describe special cases of the general problem, and investigate their relative difficulty. We present algorithms to solve the learning problem under various conditions. Target text information: Applying Winnow to Context Sensitive Spelling Correction, : Multiplicative weight-updating algorithms such as Winnow have been studied extensively in the COLT literature, but only recently have people started to use them in applications. In this paper, we apply a Winnow-based algorithm to a task in natural language: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. 
Previous approaches to this problem have been statistics-based; we compare Winnow to one of the more successful such approaches, which uses Bayesian classifiers. We find that: (1) When the standard (heavily-pruned) set of features is used to describe problem instances, Winnow performs comparably to the Bayesian method; (2) When the full (unpruned) set of features is used, Winnow is able to exploit the new features and convincingly outperform Bayes; and (3) When a test set is encountered that is dissimilar to the training set, Winnow is better than Bayes at adapting to the unfamiliar test set, using a strategy we will present for combining learning on the training set with unsupervised learning on the (noisy) test set. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
1,656
test
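For readers who have not seen Winnow, the multiplicative, mistake-driven update at the heart of the record above looks roughly like the sketch below; the promotion factor, fixed threshold, and data are the usual textbook defaults rather than the exact settings of the paper.

```python
def winnow_train(examples, n, alpha=2.0):
    """Winnow over n boolean features: multiplicative updates on mistakes only."""
    w = [1.0] * n
    theta = n / 2            # fixed threshold
    for x, y in examples:    # x: 0/1 feature vector, y: 0/1 label
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != y:        # mistake-driven: promote or demote active features
            factor = alpha if y == 1 else 1.0 / alpha
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, theta

data = [([1, 1, 0, 0], 1), ([0, 1, 1, 0], 0), ([1, 0, 0, 1], 1)]
print(winnow_train(data, n=4))
```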
1-hop neighbor's text information: "A self-organizing multiple-view representation of 3-D objects," : We explore representation of 3D objects in which several distinct 2D views are stored for each object. We demonstrate the ability of a two-layer network of thresholded summation units to support such representations. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects. 1-hop neighbor's text information: Invariant face and object recognition in the visual system. : Neurons in the ventral stream of the primate visual system exhibit responses to the images of objects which are invariant with respect to natural transformations such as translation, size, and view. Anatomical and neurophysiological evidence suggests that this is achieved through a series of hierarchical processing areas. In an attempt to elucidate the manner in which such representations are established, we have constructed a model of cortical visual processing which seeks to parallel many features of this system, specifically the multi-stage hierarchy with its topologically constrained convergent connectivity. Each stage is constructed as a competitive network utilising a modified Hebb-like learning rule, called the trace rule, which incorporates previous as well as current neuronal activity. The trace rule enables neurons to learn about whatever is invariant over short time periods (e.g. 0.5 s) in the representation of objects as the objects transform in the real world. The trace rule enables neurons to learn the statistical invariances about objects during their transformations, by associating together representations which occur close together in time. We show that by using the trace rule training algorithm the model can indeed learn to produce transformation invariant responses to natural stimuli such as faces. 1-hop neighbor's text information: Viewpoint invariant face recognition using independent component analysis and attractor networks. : We have explored two approaches to recognizing faces across changes in pose. First, we developed a representation of face images based on independent component analysis (ICA) and compared it to a principal component analysis (PCA) representation for face recognition. The ICA basis vectors for this data set were more spatially local than the PCA basis vectors and the ICA representation had greater invariance to changes in pose. Second, we present a model for the development of viewpoint invariant responses to faces from visual experience in a biological system. The temporal continuity of natural visual experience was incorporated into an attractor network model by Hebbian learning following a lowpass temporal filter on unit activities. When combined with the temporal filter, a basic Hebbian update rule became a generalization of Griniasty et al. (1993), which associates temporally proximal input patterns into basins of attraction. The system acquired rep resentations of faces that were largely independent of pose. Target text information: Implicit learning in 3D object recognition: The importance of temporal context: A novel architecture and set of learning rules for cortical self-organization is proposed. 
The model is based on the idea that multiple information channels can modulate one another's plasticity. Features learned from bottom-up information sources can thus be influenced by those learned from contextual pathways, and vice versa. A maximum likelihood cost function allows this scheme to be implemented in a biologically feasible, hierarchical neural circuit. In simulations of the model, we first demonstrate the utility of temporal context in modulating plasticity. The model learns a representation that categorizes people's faces according to identity, independent of viewpoint, by taking advantage of the temporal continuity in image sequences. In a second set of simulations, we add plasticity to the contextual stream and explore variations in the architecture. In this case, the model learns a two-tiered representation, starting with a coarse view-based clustering and proceeding to a finer clustering of more specific stimulus features. This model provides a tenable account of how people may perform 3D object recognition in a hierarchical, bottom-up fashion. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
514
test
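The "trace rule" invoked in the record above replaces the instantaneous postsynaptic activity in a Hebbian update with a temporally low-pass-filtered trace, so that successive views of the same object reinforce the same units. A minimal sketch, with the learning rate, trace decay, and weight normalization chosen purely for illustration:

```python
import numpy as np

def trace_rule_train(frames, n_out, eta=0.05, delta=0.8, seed=0):
    """Hebbian learning with a temporal trace over a time-ordered frame sequence."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_out, frames.shape[1]))
    trace = np.zeros(n_out)
    for x in frames:                                  # frames ordered in time
        y = W @ x
        trace = (1 - delta) * y + delta * trace      # low-pass temporal trace
        W += eta * np.outer(trace, x)                # Hebb with trace, not y
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W

views = np.abs(np.random.default_rng(1).normal(size=(20, 8)))  # toy view sequence
W = trace_rule_train(views, n_out=4)
```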
1-hop neighbor's text information: Learning without state-estimation in Partially Observable Markovian Decision Processes, : Reinforcement learning (RL) algorithms provide a sound theoretical basis for building learning control architectures for embedded agents. Unfortunately all of the theory and much of the practice (see Barto et al., 1983, for an exception) of RL is limited to Markovian decision processes (MDPs). Many real-world decision tasks, however, are inherently non-Markovian, i.e., the state of the environment is only incompletely known to the learning agent. In this paper we consider only partially observable MDPs (POMDPs), a useful class of non-Markovian decision processes. Most previous approaches to such problems have combined computationally expensive state-estimation techniques with learning control. This paper investigates learning in POMDPs without resorting to any form of state estimation. We present results about what TD(0) and Q-learning will do when applied to POMDPs. It is shown that the conventional discounted RL framework is inadequate to deal with POMDPs. Finally we develop a new framework for learning without state-estimation in POMDPs by including stochastic policies in the search space, and by defining the value or utility of a distribution over states. 1-hop neighbor's text information: Neuronlike adaptive elements that can solve difficult learning control problems. : Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universität München, Institut für Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department. 1-hop neighbor's text information: On the convergence of stochastic iterative dynamic programming algorithms. : This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program. Michael I. Jordan is an NSF Presidential Young Investigator. Target text information: Reinforcement Learning with Soft State Aggregation. : It is widely accepted that the use of more compact representations than lookup tables is crucial to scaling reinforcement learning (RL) algorithms to real-world problems. Unfortunately almost all of the theory of reinforcement learning assumes lookup table representations. In this paper we address the pressing issue of combining function approximation and RL, and present 1) a function approximator based on a simple extension to state aggregation (a commonly used form of compact representation), namely soft state aggregation, 2) a theory of convergence for RL with arbitrary, but fixed, soft state aggregation, 3) a novel intuitive understanding of the effect of state aggregation on online RL, and 4) a new heuristic adaptive state aggregation algorithm that finds improved compact representations by exploiting the non-discrete nature of soft state aggregation. 
Preliminary empirical results are also presented. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
5
Reinforcement Learning
cora
2,176
test
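The soft state aggregation scheme in the record above lends itself to a compact illustration. The sketch below is a minimal reconstruction under stated assumptions, not the authors' code: each state maps to a small number of clusters through fixed membership probabilities P(cluster|state), one value is learned per cluster rather than per state, and a TD(0)-style update (one plausible form of the update, sampling a cluster per visit) is applied on a toy random-walk chain. All names and the environment are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D random walk: states 0..6, episodes start at 3,
# terminate at either end, reward 1 only at the right end.
N_STATES, N_CLUSTERS = 7, 3
TERMINALS = {0: 0.0, 6: 1.0}

# Fixed soft aggregation: row s holds P(cluster | state s).
P = rng.dirichlet(np.ones(N_CLUSTERS), size=N_STATES)

w = np.zeros(N_CLUSTERS)      # one learned value per cluster
alpha, gamma = 0.05, 1.0

def value(s):
    """V(s) = sum_x P(x|s) * w[x] -- the soft-aggregated value."""
    return P[s] @ w

for _ in range(5000):
    s = 3
    while s not in TERMINALS:
        s2 = s + (1 if rng.random() < 0.5 else -1)
        r = TERMINALS.get(s2, 0.0)
        target = r if s2 in TERMINALS else r + gamma * value(s2)
        # Sample a cluster for the visited state and move its value
        # toward the TD target (one plausible form of the update).
        x = rng.choice(N_CLUSTERS, p=P[s])
        w[x] += alpha * (target - value(s))
        s = s2

print(np.round([value(s) for s in range(N_STATES)], 2))
```

The learned values can only be as expressive as the aggregation allows, which is exactly the representational trade-off the abstract analyzes.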
1-hop neighbor's text information: Structure oriented case retrieval. Fourth German Workshop on Case-Based Reasoning: System Development and Evaluation (pp. : 1-hop neighbor's text information: Generalizing from case studies: A case study. : Most empirical evaluations of machine learning algorithms are case studies: evaluations of multiple algorithms on multiple databases. Authors of case studies implicitly or explicitly hypothesize that the pattern of their results, which often suggests that one algorithm performs significantly better than others, is not limited to the small number of databases investigated, but instead holds for some general class of learning problems. However, these hypotheses are rarely supported with additional evidence, which leaves them suspect. This paper describes an empirical method for generalizing results from case studies and an example application. This method yields rules describing when some algorithms significantly outperform others on some dependent measures. Advantages for generalizing from case studies and limitations of this particular approach are also described. 1-hop neighbor's text information: Continuous case-based reasoning. : Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as continuous sensori-motor interaction with the environment, and continuous adaptation and learning during the performance task. We introduce a new method for continuous case-based reasoning, and discuss how it can be applied to the dynamic selection, modification, and acquisition of robot behaviors in autonomous navigation systems. We conclude with a general discussion of case-based reasoning issues addressed by this work. Target text information: Systematic Evaluation of Design Decisions in CBR Systems: Two important goals in the evaluation of an AI theory or model are to assess the merit of the design decisions in the performance of an implemented computer system and to analyze the impact on the performance when the system faces problem domains with different characteristics. This is particularly difficult in case-based reasoning systems because such systems are typically very complex, as are the tasks and domains in which they operate. We present a methodology for the evaluation of case-based reasoning systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. This methodology enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave in response to changing domain and problem characteristics. A case study of a multistrategy case-based and reinforcement learning system which performs autonomous robotic navigation is presented as an example. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
409
test
1-hop neighbor's text information: Bayesian graphical models for discrete data. : York's research was supported by an NSF graduate fellowship. The authors are grateful to Julian Besag, David Bradshaw, Jeff Bradshaw, James Carlsen, David Draper, Ivar Heuch, Robert Kass, Augustine Kong, Steffen Lauritzen, Adrian Raftery, and James Zidek for helpful comments and discussions. 1-hop neighbor's text information: (1996c) Feedback Models: Interpretation and Discovery. : 1-hop neighbor's text information: Bayesian inference for nondecomposable graphical Gaussian models: In this paper we propose a method to calculate the posterior probability of a nondecomposable graphical Gaussian model. Our proposal is based on a new device to sample from Wishart distributions, conditional on the graphical constraints. As a result, our methodology allows Bayesian model selection within the whole class of graphical Gaussian models, including nondecomposable ones. Target text information: (1993) Linear dependencies represented by chain graphs. : [8] Dori, D. and Tarsi, M., "A Simple Algorithm to Construct a Consistent Extension of a Partially Oriented Graph," Computer Science Department, Tel-Aviv University. Also Technical Report R-185, UCLA, Cognitive Systems Laboratory, October 1992. [14] Pearl, J. and Wermuth, N., "When Can Association Graphs Admit a Causal Interpretation?," UCLA, Cognitive Systems Laboratory, Technical Report R-183-L, November 1992. [17] Verma, T.S. and Pearl, J., "Deciding Morality of Graphs is NP-complete," Technical Report R-188, UCLA, Cognitive Systems Laboratory, October 1992. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,123
test
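The Verma-Pearl reference on "deciding morality" in the record above concerns the moral graph construction, which is simple enough to state in code. Below is a generic sketch of moralizing a DAG (not tied to any cited paper's implementation): connect every pair of parents that share a child, then drop edge directions.

```python
from itertools import combinations

def moralize(parents):
    """Return the edge set of the moral graph of a DAG.

    `parents` maps each node to the set of its parents.
    """
    edges = set()
    for child, pa in parents.items():
        # Keep every original arc, now undirected.
        for p in pa:
            edges.add(frozenset((p, child)))
        # "Marry" all co-parents of the same child.
        for u, v in combinations(pa, 2):
            edges.add(frozenset((u, v)))
    return edges

# Collider a -> c <- b: moralization adds the a-b edge.
print(moralize({"a": set(), "b": set(), "c": {"a", "b"}}))
```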
1-hop neighbor's text information: Multipath Execution: Opportunities and Limits: Even sophisticated branch-prediction techniques necessarily suffer some mispredictions, and even relatively small mispredict rates hurt performance substantially in current-generation processors. In this paper, we investigate schemes for improving performance in the face of imperfect branch predictors by having the processor simultaneously execute code from both the taken and not-taken outcomes of a branch. This paper presents data regarding the limits of multipath execution, considers fetch-bandwidth needs for multipath execution, and discusses various dynamic confidence-prediction schemes that gauge the likelihood of branch mispredictions. Our evaluations consider executing along several (2-8) paths at once. Using 4 paths and a relatively simple confidence predictor, multipath execution garners speedups of up to 30% compared to the single-path case, with an average speedup of 14.4% for the SPECint suite. While associated increases in instruction-fetch-bandwidth requirements are not too surprising, a less expected result is the significance of having a separate return-address stack for each forked path. Overall, our results indicate that multipath execution offers significant improvements over single-path performance, and could be especially useful when combined with multithreading so that hardware costs can be amortized over both approaches. 1-hop neighbor's text information: Dynamic Hammock Predication for Non-predicated Instruction Set Architectures: Conventional speculative architectures use branch prediction to evaluate the most likely execution path during program execution. However, certain branches are difficult to predict. One solution to this problem is to evaluate both paths following such a conditional branch. Predicated execution can be used to implement this form of multi-path execution. Predicated architectures fetch and issue instructions that have associated predicates. These predicates indicate if the instruction should commit its result. Predicating a branch reduces the number of branches executed, eliminating the chance of branch misprediction at the cost of executing additional instructions. In this paper, we propose a restricted form of multi-path execution called Dynamic Predication for architectures with little or no support for predicated instructions in their instruction set. Dynamic predication dynamically predicates instruction sequences in the form of a branch hammock, concurrently executing both paths of the branch. A branch hammock is a short forward branch that spans a few instructions in the form of an if-then or if-then-else construct. We mark these and other constructs in the executable. When the decode stage detects such a sequence, it passes a predicated instruction sequence to a dynamically scheduled execution core. Our results show that dynamic predication can accrue speedups of up to 13%. 1-hop neighbor's text information: Threaded multiple path execution. : This paper presents Threaded Multi-Path Execution (TME), which exploits existing hardware on a Simultaneous Multi-threading (SMT) processor to speculatively execute multiple paths of execution. When there are fewer threads in an SMT processor than hardware contexts, threaded multi-path execution uses spare contexts to fetch and execute code along the less likely path of hard-to-predict branches.
This paper describes the hardware mechanisms needed to enable an SMT processor to efficiently spawn speculative threads for threaded multi-path execution. The Mapping Synchronization Bus is described, which enables the spawning of these multiple paths. Policies are examined for deciding which branches to fork, and for managing competition between primary and alternate path threads for critical resources. Our results show that TME increases the single program performance of an SMT with eight thread contexts by 14%-23% on average, depending on the misprediction penalty, for programs with a high misprediction rate. Target text information: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor. : Simultaneous multithreading is a technique that permits multiple independent threads to issue multiple instructions each cycle. In previous work we demonstrated the performance potential of simultaneous multithreading, based on a somewhat idealized model. In this paper we show that the throughput gains from simultaneous multithreading can be achieved without extensive changes to a conventional wide-issue superscalar, either in hardware structures or sizes. We present an architecture for simultaneous multithreading that achieves three goals: (1) it minimizes the architectural impact on the conventional superscalar design, (2) it has minimal performance impact on a single thread executing alone, and (3) it achieves significant throughput gains when running multiple threads. Our simultaneous multithreading architecture achieves a throughput of 5.4 instructions per cycle, a 2.5-fold improvement over an unmodified superscalar with similar hardware resources. This speedup is enhanced by an advantage of multithreading previously unexploited in other architectures: the ability to favor for fetch and issue those threads most efficiently using the processor each cycle, thereby providing the best instructions to the processor.
0
Rule Learning
cora
313
val
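The confidence predictors mentioned in the multipath-execution record above are typically small tables of saturating or resetting counters that track how often each branch has recently been predicted correctly; a second path is forked only when confidence is low. The sketch below is a generic illustration of that idea (a resetting-counter estimator), not any of the cited designs; the table size, threshold, and counter width are illustrative assumptions.

```python
class ConfidenceEstimator:
    """Table of resetting counters: a high count means many recent
    correct predictions (high confidence); fork only on low confidence."""

    def __init__(self, size=1024, max_count=15, threshold=8):
        self.table = [0] * size
        self.max_count, self.threshold = max_count, threshold

    def _idx(self, pc):
        return pc % len(self.table)

    def should_fork(self, pc):
        # Fork both paths only when the branch's recent history is shaky.
        return self.table[self._idx(pc)] < self.threshold

    def update(self, pc, mispredicted):
        i = self._idx(pc)
        if mispredicted:
            self.table[i] = 0                              # reset on a miss
        else:
            self.table[i] = min(self.table[i] + 1, self.max_count)

ce = ConfidenceEstimator()
print(ce.should_fork(0x400))   # cold entry -> low confidence -> fork
for _ in range(10):
    ce.update(0x400, mispredicted=False)
print(ce.should_fork(0x400))   # warmed up -> confident -> single path
```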
1-hop neighbor's text information: ABSTRACT In general, the machine learning process can be accelerated through the use of additional: 1-hop neighbor's text information: ABSTRACT: In general, the machine learning process can be accelerated through the use of heuristic knowledge about the problem solution. For example, monomorphic typed Genetic Programming (GP) uses type information to reduce the search space and improve performance. Unfortunately, monomorphic typed GP also loses the generality of untyped GP: the generated programs are only suitable for inputs with the specified type. Polymorphic typed GP improves over monomorphic and untyped GP by allowing the type information to be expressed in a more generic manner, and yet still imposes constraints on the search space. This paper describes a polymorphic GP system which can generate polymorphic programs: programs which take inputs of more than one type and produce outputs of more than one type. We also demonstrate its operation through the generation of the map polymorphic program. Target text information: Performance enhanced genetic programming, : Genetic Programming is increasing in popularity as the basis for a wide range of learning algorithms. However, the technique has to date only been successfully applied to modest tasks because of the performance overheads of evolving a large number of data structures, many of which do not correspond to a valid program. We address this problem directly and demonstrate how the evolutionary process can be achieved with much greater efficiency through the use of a formally-based representation and strong typing. We report initial experimental results which demonstrate that our technique exhibits significantly better performance than previous work. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,125
test
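The closure problem and its strongly typed remedy, as described in the record above, can be illustrated in a few lines: when growing a random parse tree, only primitives whose return type matches the requested type are eligible, so ill-typed trees are never generated. This is a generic sketch with a made-up primitive set, not the STGP system itself.

```python
import random

random.seed(1)

# Each primitive: (name, return_type, argument_types).
PRIMITIVES = [
    ("+",  "num",  ("num", "num")),
    ("<",  "bool", ("num", "num")),
    ("if", "num",  ("bool", "num", "num")),
]
TERMINALS = {"num": ["x", "1.0"], "bool": ["flag"]}

def grow(want_type, depth):
    """Grow a random tree whose root returns `want_type`."""
    options = [p for p in PRIMITIVES if p[1] == want_type]
    if depth == 0 or not options or random.random() < 0.3:
        return random.choice(TERMINALS[want_type])
    name, _, arg_types = random.choice(options)
    # Recurse with each argument constrained to its declared type,
    # so type constraints prune the search space during generation.
    return [name] + [grow(t, depth - 1) for t in arg_types]

print(grow("num", depth=3))
```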
1-hop neighbor's text information: Alecsys and the autonomouse: Learning to control a real robot by distributed classifier systems. : 1-hop neighbor's text information: On the Relations Between Search and Evolutionary Algorithms: Technical Report: CSRP-96-7 March 1996 Abstract: Evolutionary algorithms are powerful techniques for optimisation whose operation principles are inspired by natural selection and genetics. In this paper we discuss the relation between evolutionary techniques, numerical and classical search methods and we show that all these methods are instances of a single more general search strategy, which we call the `evolutionary computation cookbook'. By combining the features of classical and evolutionary methods in different ways new instances of this general strategy can be generated, i.e. new evolutionary (or classical) algorithms can be designed. One such algorithm, GA fl , is described. 1-hop neighbor's text information: Genetic-based machine learning and behavior based robotics: a new synthesis. : We face this problem using an architecture based on learning classifier systems. After a description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved. Our simulated robot learns behaviours that reflect structural properties of animal behavioural organization, as proposed by ethologists. Target text information: "Genetic and Non-Genetic Operators in Alecsys," : It is well known that standard learning classifier systems, when applied to many different domains, exhibit a number of problems: payoff oscillation, a difficult-to-regulate interplay between the reward system and the background genetic algorithm (GA), rule-chain instability, and default-hierarchy instability are only a few. ALECSYS is a parallel version of a standard learning classifier system (CS), and as such suffers from these same problems. In this paper we propose some innovative solutions to some of these problems. We introduce the following original features. Mutespec, a new genetic operator used to specialize potentially useful classifiers. Energy, a quantity introduced to measure global convergence in order to apply the genetic algorithm only when the system is close to a steady state. Dynamical adjustment of the classifier-set cardinality, in order to speed up the performance phase of the algorithm. We present simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,024
test
1-hop neighbor's text information: A bootstrap evaluation of the effect of data splitting on financial time series, : This article exposes problems of the commonly used technique of splitting the available data into training, validation, and test sets that are held fixed, warns about drawing too strong conclusions from such static splits, and shows potential pitfalls of ignoring variability across splits. Using a bootstrap or resampling method, we compare the uncertainty in the solution stemming from the data splitting with neural network specific uncertainties (parameter initialization, choice of number of hidden units, etc.). We present two results on data from the New York Stock Exchange. First, the variation due to different resamplings is significantly larger than the variation due to different network conditions. This result implies that it is important to not over-interpret a model (or an ensemble of models) estimated on one specific split of the data. Second, on each split, the neural network solution with early stopping is very close to a linear model; no significant nonlinearities are extracted. 1-hop neighbor's text information: A Theory of Networks for Approximation and Learning, : Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data. © Massachusetts Institute of Technology, 1994 This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation. Support for the A. I.
Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. 1-hop neighbor's text information: Nonlinear Prediction of Chaotic Time Series. : A novel method for regression has been recently proposed by V. Vapnik et al. [8, 9]. The technique, called Support Vector Machine (SVM), is very well founded from the mathematical point of view and seems to provide a new insight in function approximation. We implemented the SVM and tested it on the same data base of chaotic time series that was used in [1] to compare the performances of different approximation techniques, including polynomial and rational approximation, local polynomial techniques, Radial Basis Functions, and Neural Networks. The SVM performs better than the approaches presented in [1]. We also study, for a particular time series, the variability in performance with respect to the few free parameters of SVM. Target text information: "Modeling volatility using state space models", : In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude due to their ignoring the distinction between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
2,364
test
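The distinction the volatility record above draws between dynamic and observational noise is easy to make concrete. The sketch below simulates a minimal one-dimensional state-space model for log-volatility, an AR(1) hidden state plus measurement noise, and shows why fitting an ordinary AR model directly to the noisy observations drastically shortens the apparent relaxation time, the effect the paper reports. The specific parameter values are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

phi, q, r = 0.97, 0.05, 0.5   # AR coefficient, dynamic noise std, observational noise std
T = 2000

h = np.zeros(T)               # hidden log-volatility
y = np.zeros(T)               # noisy observations of it
for t in range(1, T):
    h[t] = phi * h[t - 1] + q * rng.normal()   # dynamic noise drives the state
    y[t] = h[t] + r * rng.normal()             # observational noise is added on top

# Relaxation time of a shock under AR(1) dynamics: tau = -1 / ln(phi).
print("true relaxation time :", -1 / np.log(phi))

# A naive AR(1) fit to the observations (lag-1 autocorrelation) is biased
# toward zero by the observational noise, shortening the apparent tau.
phi_hat = np.corrcoef(y[:-1], y[1:])[0, 1]
print("naive AR(1) estimate :", -1 / np.log(phi_hat))
```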
1-hop neighbor's text information: A Divide-and-Conquer Approach to Learning from Prior Knowledge: This paper introduces a new machine learning task, model calibration, and presents a method for solving a particularly difficult model calibration task that arose as part of a global climate change research project. The model calibration task is the problem of training the free parameters of a scientific model in order to optimize the accuracy of the model for making future predictions. It is a form of supervised learning from examples in the presence of prior knowledge. An obvious approach to solving calibration problems is to formulate them as global optimization problems in which the goal is to find values for the free parameters that minimize the error of the model on training data. Unfortunately, this global optimization approach becomes computationally infeasible when the model is highly nonlinear. This paper presents a new divide-and-conquer method that analyzes the model to identify a series of smaller optimization problems whose sequential solution solves the global calibration problem. This paper argues that methods of this kind, rather than global optimization techniques, will be required in order for agents with large amounts of prior knowledge to learn efficiently. 1-hop neighbor's text information: Rule induction with CN2: some recent improvements. : The CN2 algorithm induces an ordered list of classification rules from examples using entropy as its search heuristic. In this short paper, we describe two improvements to this algorithm. Firstly, we present the use of the Laplacian error estimate as an alternative evaluation function and secondly, we show how unordered as well as ordered rules can be generated. We experimentally demonstrate significantly improved performances resulting from these changes, thus enhancing the usefulness of CN2 as an inductive tool. Comparisons with Quinlan's C4.5 are also made. 1-hop neighbor's text information: Representing and restructuring domain theories: A constructive induction approach. : Theory revision integrates inductive learning and background knowledge by combining training examples with a coarse domain theory to produce a more accurate theory. There are two challenges that theory revision and other theory-guided systems face. First, a representation language appropriate for the initial theory may be inappropriate for an improved theory. While the original representation may concisely express the initial theory, a more accurate theory forced to use that same representation may be bulky, cumbersome, and difficult to reach. Second, a theory structure suitable for a coarse domain theory may be insufficient for a fine-tuned theory. Systems that produce only small, local changes to a theory have limited value for accomplishing complex structural alterations that may be required. Consequently, advanced theory-guided learning systems require flexible representation and flexible structure. An analysis of various theory revision systems and theory-guided learning systems reveals specific strengths and weaknesses in terms of these two desired properties. Designed to capture the underlying qualities of each system, a new system uses theory-guided constructive induction. Experiments in three domains show improvement over previous theory-guided systems. This leads to a study of the behavior, limitations, and potential of theory-guided constructive induction. Target text information: Using qualitative models to guide inductive learning.
: This paper presents a method for using qualitative models to guide inductive learning. Our objectives are to induce rules which are not only accurate but also explainable with respect to the qualitative model, and to reduce learning time by exploiting domain knowledge in the learning process. Such explainability is essential both for practical application of inductive technology, and for integrating the results of learning back into an existing knowledge-base. We apply this method to two process control problems, a water tank network and an ore grinding process used in the mining industry. Surprisingly, in addition to achieving explainability, the classification accuracy of the induced rules is also increased. We show how the value of the qualitative models can be quantified in terms of their equivalence to additional training examples, and finally discuss possible extensions. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
2,346
test
1-hop neighbor's text information: Pronouncing Names by a Combination of Rule-Based and Case-Based Reasoning. : 1-hop neighbor's text information: Improving rule-based systems through case-based reasoning. : A novel architecture is presented for combining rule-based and case-based reasoning. The central idea is to apply the rules to a target problem to get a first approximation to the answer; but if the problem is judged to be compellingly similar to a known exception of the rules in any aspect of its behavior, then that aspect is modelled after the exception rather than the rules. The architecture is implemented for the full-scale task of pronouncing surnames. Preliminary results suggest that the system performs almost as well as the best commercial systems. However, of more interest than the absolute performance of the system is the result that this performance was better than what could have been achieved with the rules alone. This illustrates the capacity of the architecture to improve on the rule-based system it starts with. The results also demonstrate a beneficial interaction in the system, in that improving the rules speeds up the case-based component. 1-hop neighbor's text information: The evaluation of Anapron: A case study in evaluating a case-based system: This paper presents a case study in evaluating a case-based system. It describes the evaluation of Anapron, a system that pronounces names by a combination of rule-based and case-based reasoning. Three sets of experiments were run on Anapron: a set of exploratory measurements to profile the system's operation; a comparison between Anapron and other name-pronunciation systems; and a set of studies that modified various parts of the system to isolate the contribution of each. Lessons learned from these experiments for CBR evaluation methodology and for CBR theory are discussed. Target text information: "A Comparison of ANAPRON with Seven Other Name-pronunciation Systems," : This paper presents an experiment comparing a new name-pronunciation system, Anapron, with seven existing systems: three state-of-the-art commercial systems (from Bellcore, Bell Labs, and DEC), two variants of a machine-learning system (NETtalk), and two humans. Anapron works by combining rule-based and case-based reasoning. It is based on the idea that it is much easier to improve a rule-based system by adding case-based reasoning to it than by tuning the rules to deal with every exception. In the experiment described here, Anapron used a set of rules adapted from MITalk and elementary foreign-language textbooks, and a case library of 5000 names. With these components, which required relatively little knowledge engineering, Anapron was found to perform almost at the level of the commercial systems, and significantly better than the two versions of NETtalk.
I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,350
test
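The rules-first, case-based-override architecture described in the Anapron record above can be sketched generically: apply letter-to-sound rules for a first approximation, but if a stored exception is compellingly similar, model the answer after the exception. Everything below (the toy rule table, the sequence-matcher similarity, and the threshold) is an illustrative assumption, not the actual system.

```python
import difflib

RULES = {"ough": "AO", "ph": "F"}        # toy letter-to-sound rules
CASES = {"gough": "G AO F"}              # known exceptions with pronunciations

def rule_pronounce(name):
    """First approximation: apply substring rules left to right."""
    out, i = [], 0
    while i < len(name):
        for pat, sound in RULES.items():
            if name.startswith(pat, i):
                out.append(sound); i += len(pat); break
        else:
            out.append(name[i].upper()); i += 1
    return " ".join(out)

def pronounce(name, threshold=0.8):
    best = max(CASES, key=lambda c: difflib.SequenceMatcher(None, name, c).ratio())
    score = difflib.SequenceMatcher(None, name, best).ratio()
    # If an exception is compellingly similar, model the name after it;
    # otherwise fall back on the rule-based first approximation.
    return CASES[best] if score >= threshold else rule_pronounce(name)

print(pronounce("gough"))   # close to a stored exception -> case-based answer
print(pronounce("philip"))  # no close exception -> rules
```

The appeal of this design, as the abstracts note, is that improving either component helps the other: better rules leave fewer exceptions for the case library to cover.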
1-hop neighbor's text information: Toward an ideal trainer. : This paper appeared in 1994 in Machine Learning, 15 (3): 251-277. Abstract: This paper demonstrates how the nature of the opposition during training affects learning to play two-person, perfect information board games. It considers different kinds of competitive training, the impact of trainer error, appropriate metrics for post-training performance measurement, and the ways those metrics can be applied. The results suggest that teaching a program by leading it repeatedly through the same restricted paths, albeit high quality ones, is overly narrow preparation for the variations that appear in real-world experience. The results also demonstrate that variety introduced into training by random choice is unreliable preparation, and that a program that directs its own training may overlook important situations. The results argue for a broad variety of training experience with play at many levels. This variety may either be inherent in the game or introduced deliberately into the training. Lesson and practice training, a blend of expert guidance and knowledge-based, self-directed elaboration, is shown to be particularly effective for learning during competition. 1-hop neighbor's text information: Neuronlike adaptive elements that can solve difficult learning control problems. : Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universität München, Institut für Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department. Target text information: Why experimentation can be better than "perfect guidance". : Many problems correspond to the classical control task of determining the appropriate control action to take, given some (sequence of) observations. One standard approach to learning these control rules, called behavior cloning, involves watching a perfect operator operate a plant, and then trying to emulate its behavior. In the experimental learning approach, by contrast, the learner first guesses an initial operation-to-action policy and tries it out. If this policy performs sub-optimally, the learner can modify it to produce a new policy, and recur. This paper discusses the relative effectiveness of these two approaches, especially in the presence of perceptual aliasing, showing in particular that the experimental learner can often learn more effectively than the cloning one.
4
Theory
cora
1,563
test
1-hop neighbor's text information: Parity: the problem that won't go away. : It is well-known that certain learning methods (e.g., the perceptron learning algorithm) cannot acquire complete parity mappings. But it is often overlooked that state-of-the-art learning methods such as C4.5 and backpropagation cannot generalise from incomplete parity mappings. The failure of such methods to generalise on parity mappings may be sometimes dismissed on the grounds that it is `impossible' to generalise over such mappings, or that parity problems are mathematical constructs having little to do with real-world learning. However, this paper argues that such a dismissal is unwarranted. It shows that parity mappings are hard to learn because they are statistically neutral and that statistical neutrality is a property which we should expect to encounter frequently in real-world contexts. It also shows that the generalization failure on parity mappings occurs even when large, minimally incomplete mappings are used for training purposes, i.e., when claims about the impossibility of generalization are particularly suspect. 1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response. 1-hop neighbor's text information: Trading spaces: computation, representation and the limits of learning. : * Research on this paper was partly supported by a Senior Research Leave fellowship granted by the Joint Council (SERC/MRC/ESRC) Cognitive Science Human Computer Interaction Initiative to one of the authors (Clark). Thanks to the Initiative for that support. † The order of names is arbitrary. Target text information: Truth-from-Trash Learning and the Mobot: As natural resources become less abundant, we naturally become more interested in, and more adept at utilisation of waste materials. In doing this we are bringing to bear a ploy which is of key importance in learning, or so I argue in this paper. In the `Truth from Trash' model, learning is viewed as a process which uses environmental feedback to assemble fortuitous sensory predispositions (sensory `trash') into useful information vehicles, i.e., `truthful' indicators of salient phenomena. The main aim will be to show how a computer implementation of the model has been used to enhance (through learning) the strategic abilities of a simulated, football-playing mobot. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
1
Neural Networks
cora
382
test
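The claim in the record above that parity is "statistically neutral" can be verified in a few lines: for the complete n-bit parity mapping, every single input bit carries no first-order information about the output, since the label is 1 equally often whether the bit is 0 or 1. A quick generic check:

```python
from itertools import product

n = 4
rows = list(product([0, 1], repeat=n))
labels = [sum(r) % 2 for r in rows]   # n-bit parity

for i in range(n):
    # P(label=1 | bit i = 1) and P(label=1 | bit i = 0) both come out 0.5,
    # so no individual feature correlates with the class.
    on  = [l for r, l in zip(rows, labels) if r[i] == 1]
    off = [l for r, l in zip(rows, labels) if r[i] == 0]
    print(f"bit {i}: P(1|on)={sum(on)/len(on)}, P(1|off)={sum(off)/len(off)}")
```

This neutrality is exactly what defeats greedy, feature-at-a-time learners such as decision-tree induction: no single split looks better than chance.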
1-hop neighbor's text information: Plate. Distributed Representations and Nested Compositional Structure. : 1-hop neighbor's text information: Modeling Analogical Problem Solving in a Production System Architecture: This research is supported by a National Science Foundation Fellowship awarded to Dario Salvucci and Office of Naval Research grant N00014-96-1-0491 awarded to John Anderson. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Office of Naval Research, or the United States government. 1-hop neighbor's text information: Concept Sharing: A Means to Improve Multi-Concept Learning: This paper describes several means for sharing between related concepts to improve learning in the same domain. The sharing comes in the form of substructures or possibly entire structures of previous concepts which may aid in learning other concepts. These substructures highlight useful information in the domain. Using two domains, we evaluate the effectiveness of concept sharing with respect to accuracy, concept size, search complexity, and noise resistance. Target text information: The Structure-Mapping Engine: Algorithms and Examples. : This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a "tool kit" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O(N^2). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp. 1-63. For more information, please contact [email protected] I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
1,286
test
1-hop neighbor's text information: Bayesian Experimental Design: A Review. : Non-Bayesian experimental design for linear models has been reviewed by Steinberg and Hunter (1984) and in the recent book by Pukelsheim (1993); Ford, Kitsos and Titterington (1989) reviewed non-Bayesian design for nonlinear models. Bayesian design for both linear and nonlinear models is reviewed here. We argue that the design problem is best considered as a decision problem and that it is best solved by maximizing the expected utility of the experiment. This paper considers only in a marginal way, when appropriate, the theory of non-Bayesian design. Target text information: Bayesian Design of Experiments for the Linear Model. : Most of the Bayesian theory of optimal experimental design, for the normal linear model, has been developed under the restrictive assumption that the variance is known. In special cases, insensitivity of specific design criteria to specific prior assumptions on the variance has been demonstrated, but a general result to show the way in which Bayesian optimal designs are affected by prior information about the variance is lacking. This paper stresses the important distinction between expected utility functions and optimality criteria, examines a number of expected utility functions, some of which possess interesting properties and deserve wider use, and derives the relevant Bayesian optimality criteria under normal assumptions. This unifying setup is useful for proving the main result of the paper, that clarifies the issue of designing for the normal linear model with unknown variance. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1,911
test
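For the normal linear model with known variance, one standard expected-utility criterion reduces to Bayesian D-optimality: pick the design matrix X maximizing log det(X'X/sigma^2 + R), where R is the prior precision of the coefficients. The snippet below scores two candidate designs that way; it is a textbook-style sketch under an assumed prior, not the derivation in the paper above.

```python
import numpy as np

def bayes_d_score(X, prior_prec, sigma2=1.0):
    """log-determinant of the posterior precision of beta under a normal prior."""
    return np.linalg.slogdet(X.T @ X / sigma2 + prior_prec)[1]

R = np.eye(2) * 0.1   # assumed prior precision for (intercept, slope)

# Candidate designs over x in [-1, 1]; the column of ones is the intercept.
spread    = np.column_stack([np.ones(4), [-1.0, -1.0, 1.0, 1.0]])
clustered = np.column_stack([np.ones(4), [0.4, 0.5, 0.5, 0.6]])

# Spreading the design points maximizes information about the slope.
print("spread   :", bayes_d_score(spread, R))
print("clustered:", bayes_d_score(clustered, R))
```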
1-hop neighbor's text information: A Market Framework for Pooling Opinions: Consider a group of Bayesians, each with a subjective probability distribution over a set of uncertain events. An opinion pool derives a single consensus distribution over the events, representative of the group as a whole. Several pooling functions have been proposed, each sensible under particular assumptions or measures. Many researchers over many years have failed to form a consensus on which method is best. We propose a market-based pooling procedure, and analyze its properties. Participants bet on securities, each paying off contingent on an uncertain event, so as to maximize their own expected utilities. The consensus probability of each event is defined as the corresponding security's equilibrium price. The market framework provides explicit monetary incentives for participation and honesty, and allows agents to maintain individual rationality and limited privacy. "No arbitrage" arguments ensure that the equilibrium prices form legal probabilities. We show that, when events are disjoint and all participants have exponential utility for money, the market derives the same result as the logarithmic opinion pool; similarly, logarithmic utility for money yields the linear opinion pool. In both cases, we prove that the group's behavior is, to an outside observer, indistinguishable from that of a rational individual, whose beliefs equal the equilibrium prices. 1-hop neighbor's text information: Toward a market model for Bayesian inference. : We present a methodology for representing probabilistic relationships in a general-equilibrium economic model. Specifically, we define a precise mapping from a Bayesian network with binary nodes to a market price system where consumers and producers trade in uncertain propositions. We demonstrate the correspondence between the equilibrium prices of goods in this economy and the probabilities represented by the Bayesian network. A computational market model such as this may provide a useful framework for investigations of belief aggregation, distributed probabilistic inference, resource allocation under uncertainty, and other problems of decentralized uncertainty. Target text information: Representing aggregate belief through the competitive equilibrium of a securities market. : We consider the problem of belief aggregation: given a group of individual agents with probabilistic beliefs over a set of uncertain events, formulate a sensible consensus or aggregate probability distribution over these events. Researchers have proposed many aggregation methods, although on the question of which is best the general consensus is that there is no consensus. We develop a market-based approach to this problem, where agents bet on uncertain events by buying or selling securities contingent on their outcomes. Each agent acts in the market so as to maximize expected utility at given securities prices, limited in its activity only by its own risk aversion. The equilibrium prices of goods in this market represent aggregate beliefs. For agents with constant risk aversion, we demonstrate that the aggregate probability exhibits several desirable properties, and is related to independently motivated techniques.
We argue that the market-based approach provides a plausible mechanism for belief aggregation in multiagent systems, as it directly addresses self-motivated agent incentives for participation and for truthfulness, and can provide a decision-theoretic foundation for the "expert weights" often employed in centralized pooling techniques. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
2,379
test
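The correspondence reported above, exponential utility for money yielding the logarithmic pool and logarithmic utility yielding the linear pool, is easiest to appreciate next to the pool definitions themselves. A small generic sketch of the two pooling operators, with equal weights assumed purely for illustration:

```python
import numpy as np

def linear_pool(beliefs, w):
    """Weighted arithmetic average of the agents' distributions."""
    return w @ beliefs

def log_pool(beliefs, w):
    """Weighted geometric average, renormalized to sum to one."""
    g = np.exp(w @ np.log(beliefs))
    return g / g.sum()

# Two agents' probabilities over three disjoint events.
beliefs = np.array([[0.7, 0.2, 0.1],
                    [0.2, 0.6, 0.2]])
w = np.array([0.5, 0.5])

print("linear:", linear_pool(beliefs, w))
print("log   :", np.round(log_pool(beliefs, w), 3))
```

Note the characteristic difference: the logarithmic pool downweights events any agent considers unlikely, while the linear pool simply averages, which is one reason neither is universally preferred.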
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming. 1-hop neighbor's text information: Strongly Typed Genetic Programming. : BBN Technical Report #7866: Abstract: Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as "closure", i.e. that all the variables, constants, arguments for functions, and values returned from functions must be of the same data type. To correct this deficiency, we introduce a variation of genetic programming called "strongly typed" genetic programming (STGP). In STGP, variables, constants, arguments, and returned values can be of any data type with the provision that the data type for each such value be specified beforehand. This allows the initialization process and the genetic operators to only generate parse trees such that the arguments of each function in each tree have the required types. An extension to STGP which makes it easier to use is the concept of generic functions, which are not true strongly typed functions but rather templates for classes of such functions. To illustrate STGP, we present three examples involving vector and matrix manipulation: (1) a basis representation problem (which can be constructed to be deceptive by any reasonable definition of "deception"), (2) the n-dimensional least-squares regression problem, and (3) preliminary work on the Kalman filter. Target text information: A methodology for processing problem constraints in genetic programming. Computers and Mathematics with Applications, : Search mechanisms of artificial intelligence combine two elements: representation, which determines the search space, and a search mechanism, which actually explores the space. Unfortunately, many searches may explore redundant and/or invalid solutions. Genetic programming refers to a class of evolutionary algorithms based on genetic algorithms but utilizing a parameterized representation in the form of trees. These algorithms perform searches based on simulation of nature. They face the same problems of redundant/invalid subspaces. These problems have just recently been addressed in a systematic manner. This paper presents a methodology devised for the public domain genetic programming tool lil-gp. This methodology uses data typing and semantic information to constrain the representation space so that only valid, and possibly unique, solutions will be explored. The user enters problem-specific constraints, which are transformed into a normal set. This set is checked for feasibility, and subsequently it is used to limit the space being explored. The constraints can determine valid, possibly unique space. Moreover, they can also be used to exclude subspaces the user considers uninteresting, using some problem-specific knowledge. A simple example is followed thoroughly to illustrate the constraint language, transformations, and the normal set. Experiments with the Boolean 11-multiplexer illustrate practical applications of the method to limit redundant space exploration by utilizing problem-specific knowledge. * Supported by a grant from NASA/JSC: NAG 9-847. I provide the content of the target node and its neighbors' information.
The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
2,126
val
1-hop neighbor's text information: "Measures for performance evaluation of genetic algorithms," : This paper proposes four performance measures of a genetic algorithm (GA) which enable us to compare different GAs for an op timization problem and different choices of their parameters' values. The performance measures are defined in terms of observations in simulation, such as the frequency of optimal solutions, fitness values, the frequency of evolution leaps, and the number of generations needed to reach an optimal solution. We present a case study in which parameters of a GA for robot path planning was tuned and its performance was optimized through performance evaluation by using the measures. Especially, one of the performance measures is used to demonstrate the adaptivity of the GA for robot path planning. We also propose a process of systematic tuning based on techniques for the design of experiments. 1-hop neighbor's text information: Chapter 4 Empirical comparison of stochastic algorithms Empirical comparison of stochastic algorithms in a graph: There are several stochastic methods that can be used for solving NP-hard optimization problems approximatively. Examples of such algorithms include (in order of increasing computational complexity) stochastic greedy search methods, simulated annealing, and genetic algorithms. We investigate which of these methods is likely to give best performance in practice, with respect to the computational effort each requires. We study this problem empirically by selecting a set of stochastic algorithms with varying computational complexity, and by experimentally evaluating for each method how the goodness of the results achieved improves with increasing computational time. For the evaluation, we use a graph optimization problem, which is closely related to several real-world practical problems. To get a wider perspective of the goodness of the achieved results, the stochastic methods are also compared against special-case greedy heuristics. This investigation suggests that although genetic algorithms can provide good results, simpler stochastic algorithms can achieve similar performance more quickly. Target text information: An indexed bibliography of genetic algorithms: : DRAFT March 16, 1998 available via anonymous ftp: site ftp.uwasa.fi directory cs/report94-1 file gaGPbib.ps.Z I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
1,187
test
1-hop neighbor's text information: Belief maintenance in Bayesian networks. : 1-hop neighbor's text information: Belief maintenance with probabilistic logic. : Target text information: Anytime Influence Diagrams: I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
859
val
1-hop neighbor's text information: Supervised learning from incomplete data via an EM approach. : Real-world learning tasks may involve high-dimensional data sets with arbitrary patterns of missing data. In this paper we present a framework based on maximum likelihood density estimation for learning from such data sets. We use mixture models for the density estimates and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977) in deriving a learning algorithm: EM is used both for the estimation of mixture components and for coping with missing data. The resulting algorithm is applicable to a wide range of supervised as well as unsupervised learning problems. Results from a classification benchmark, the iris data set, are presented. 1-hop neighbor's text information: A new view of the EM algorithm that justifies incremental and other variants. : The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. 1-hop neighbor's text information: Hierarchical Mixtures of Experts and the EM Algorithm, : We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. * We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato and Daniel Wolpert for helpful comments on the manuscript. This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program, and NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is an NSF Presidential Young Investigator. Target text information: A statistical approach to decision tree modeling. : A statistical approach to decision tree modeling is described. In this approach, each decision in the tree is modeled parametrically as is the process by which an output is generated from an input and a sequence of decisions.
The resulting model yields a likelihood measure of goodness of fit, allowing ML and MAP estimation techniques to be utilized. An efficient algorithm is presented to estimate the parameters in the tree. The model selection problem is discussed, and several alternative proposals are considered. A hidden Markov version of the tree is described for data sequences that have temporal dependencies. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
6
Probabilistic Methods
cora
1876
val
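An aside on the method the three neighbour abstracts above share: each builds on the EM recipe, an E step that computes a distribution over the unobserved variables followed by an M step that re-estimates parameters from the resulting weighted statistics. Below is a minimal sketch for a two-component 1D Gaussian mixture; it is illustrative only, not code from any of the cited papers, and all names are ours.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture (illustrative sketch)."""
    # Crude initialization from the data range.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()], dtype=float)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E step: posterior responsibility of each component for each point.
        dens = (pi / np.sqrt(2 * np.pi * var)) * \
               np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: re-estimate parameters from weighted sufficient statistics.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
print(em_gaussian_mixture(x))  # recovers weights ~0.5 and means near -2 and 3
```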
1-hop neighbor's text information: "Covering vs. Divide-and-Conquer for Top-Down Induction of Logic Programs", : Covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly, it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may mean that more compact hypotheses are found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learning recursive definitions. 1-hop neighbor's text information: Employing linear regression in regression tree leaves. : The advantage of using linear regression in the leaves of a regression tree is analysed in the paper. The paper examines how this modification affects the construction, pruning and interpretation of a regression tree. The modification is tested on artificial and real-life domains where its impact on classification error and stability of the induced trees is considered. The results show that the modification is beneficial, as it leads to smaller classification errors of the induced regression trees. The Bayesian approach to estimation of class distributions is used in all experiments. 1-hop neighbor's text information: Rule-based machine learning methods for function prediction. : We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance. Target text information: Structural Regression Trees: In many real-world domains, the task of machine learning algorithms is to learn a theory predicting numerical values. In particular, several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with non-determinate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems by integrating the statistical method of regression trees into ILP.
SRT constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
0
Rule Learning
cora
839
test
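The "linear regression in regression tree leaves" idea above is easy to make concrete: instead of predicting a leaf's mean, fit a small linear model per leaf and choose the split that minimizes the combined squared error of the two fits. The sketch below does this for a depth-1 tree on a single feature; it is our own simplified illustration (one split, one feature, no pruning), not the cited system.

```python
import numpy as np

def fit_linear_leaf_stump(x, y):
    """Depth-1 regression tree with a linear model in each leaf (sketch).

    Returns (split, (left_coeffs, right_coeffs)) minimizing the total
    squared error of the two line fits; a real learner would recurse
    on each side and then prune.
    """
    def fit_and_sse(xs, ys):
        coef = np.polyfit(xs, ys, 1)            # slope, intercept
        return coef, ((np.polyval(coef, xs) - ys) ** 2).sum()

    best = None
    for split in np.unique(x)[2:-2]:            # keep >= 2 points per side
        lo, hi = x < split, x >= split
        cl, el = fit_and_sse(x[lo], y[lo])
        cr, er = fit_and_sse(x[hi], y[hi])
        if best is None or el + er < best[0]:
            best = (el + er, split, (cl, cr))
    return best[1], best[2]

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = np.where(x < 5, 2 * x, -x + 20) + rng.normal(0, 0.3, 100)
print(fit_linear_leaf_stump(x, y)[0])           # split found near 5
```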
1-hop neighbor's text information: Rigorous learning curve bounds from statistical mechanics. : In this paper we introduce and investigate a mathematically rigorous theory of learning curves that is based on ideas from statistical mechanics. The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior (functional form) of learning curves. This behavior can often exhibit dramatic properties such as phase transitions, as well as power law asymptotics not explained by the VC theory. The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes. We illustrate our results with many concrete examples of learning curve bounds derived from our theory. 1-hop neighbor's text information: Pessimistic Decision Tree Pruning Based on Tree Size. : In this work we develop a new criterion to perform pessimistic decision tree pruning. Our method is theoretically sound and is based on theoretical concepts such as uniform convergence and the Vapnik-Chervonenkis dimension. We show that our criterion is very well motivated, from the theory side, and performs very well in practice. The accuracy of the new criterion is comparable to that of the current method used in C4.5. 1-hop neighbor's text information: On the sample complexity of noise-tolerant learning. : In this paper, we further characterize the complexity of noise-tolerant learning in the PAC model. Specifically, we show a general lower bound of Ω(log(1/δ) / (ε(1-2η)^2)) on the number of examples required for PAC learning in the presence of classification noise. Combined with a result of Simon, we effectively show that the sample complexity of PAC learning in the presence of classification noise is Θ(VC(F) / (ε(1-2η)^2)). Furthermore, we demonstrate the optimality of the general lower bound by providing a noise-tolerant learning algorithm for the class of symmetric Boolean functions which uses a sample size within a constant factor of this bound. Finally, we note that our general lower bound compares favorably with various general upper bounds for PAC learning in the presence of classification noise. Target text information: Self bounding learning algorithms: Most of the work that attempts to give bounds on the generalization error of the hypothesis generated by a learning algorithm is based on methods from the theory of uniform convergence. These bounds are a priori bounds that hold for any distribution of examples and are calculated before any data is observed. In this paper we propose a different approach for bounding the generalization error after the data has been observed. A self-bounding learning algorithm is an algorithm which, in addition to the hypothesis that it outputs, outputs a reliable upper bound on the generalization error of this hypothesis. We first explore the idea in the statistical query learning framework of Kearns [10]. After that, we give an explicit self-bounding algorithm for learning algorithms that are based on local search. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'.
The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
4
Theory
cora
263
test
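To make the reconstructed noise-tolerant sample-complexity bound above tangible, the snippet below evaluates the Θ(VC(F) / (ε(1-2η)^2)) shape numerically and shows the blow-up as the noise rate η approaches 1/2. Constants are dropped, so this is the bound's shape only, under our reading of the garbled formula, not a usable guarantee.

```python
import math

def noise_tolerant_sample_size(vc_dim, eps, eta):
    """Shape of the bound Theta(VC(F) / (eps * (1 - 2*eta)^2)).

    Constants are omitted, so this is an order-of-magnitude illustration;
    eta is the classification noise rate, eps the accuracy parameter.
    """
    assert 0 <= eta < 0.5, "classification noise rate must be below 1/2"
    return vc_dim / (eps * (1 - 2 * eta) ** 2)

# Required sample size grows without bound as eta approaches 1/2.
for eta in (0.0, 0.2, 0.4, 0.45):
    print(eta, math.ceil(noise_tolerant_sample_size(vc_dim=10, eps=0.1, eta=eta)))
```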
1-hop neighbor's text information: Almuallim and Dietterich (1991). Learning with Many Irrelevant Features. : In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Ω((1/ε)[2^p + p ln n]) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. The paper also presents a quasi-polynomial time algorithm, FOCUS, which implements MIN-FEATURES. Experimental studies are presented that compare FOCUS to the ID3 and FRINGE algorithms. These experiments show that, contrary to expectations, these algorithms do not implement good approximations of MIN-FEATURES. The coverage, sample complexity, and generalization performance of FOCUS are substantially better than either ID3 or FRINGE on learning problems where the MIN-FEATURES bias is appropriate. This suggests that, in practical applications, training data should be preprocessed to remove irrelevant features before being passed to the learning algorithm. Target text information: Efficient algorithms for identifying relevant features. : This paper describes efficient methods for exact and approximate implementation of the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This bias is useful for learning domains where many irrelevant features are present in the training data. We first introduce FOCUS-2, a new algorithm that exactly implements the MIN-FEATURES bias. This algorithm is empirically shown to be substantially faster than the FOCUS algorithm previously given in [Almuallim and Dietterich, 1991]. We then introduce the Mutual-Information-Greedy, Simple-Greedy and Weighted-Greedy algorithms, which apply efficient heuristics for approximating the MIN-FEATURES bias. These algorithms employ greedy heuristics that trade optimality for computational efficiency. Experimental studies show that the learning performance of ID3 is greatly improved when these algorithms are used to preprocess the training data by eliminating the irrelevant features from ID3's consideration. In particular, the Weighted-Greedy algorithm provides an excellent and efficient approximation of the MIN-FEATURES bias.
4
Theory
cora
1116
test
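The MIN-FEATURES bias and its greedy approximations described above can be sketched in a few lines: keep adding the feature that resolves the most remaining "conflicts", i.e. differently-labelled example pairs that still agree on all features chosen so far. This is our own toy rendering of a Simple-Greedy-style heuristic, not the FOCUS, FOCUS-2, or Weighted-Greedy algorithms themselves.

```python
from itertools import combinations

def greedy_min_features(examples, labels):
    """Greedy approximation of the MIN-FEATURES bias (sketch, not FOCUS).

    Repeatedly adds the feature whose inclusion resolves the most remaining
    conflicts: pairs of differently-labelled examples that still agree on
    every feature chosen so far.  Assumes the data are consistent.
    """
    n_features = len(examples[0])
    chosen = []

    def conflicts(feats):
        return [(i, j) for i, j in combinations(range(len(examples)), 2)
                if labels[i] != labels[j]
                and all(examples[i][f] == examples[j][f] for f in feats)]

    while conflicts(chosen):
        # Pick the feature whose addition leaves the fewest conflicts.
        f = min((f for f in range(n_features) if f not in chosen),
                key=lambda f: len(conflicts(chosen + [f])))
        chosen.append(f)
    return sorted(chosen)

# Label depends only on features 0 and 2; feature 1 is irrelevant.
X = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
y = [a ^ c for a, b, c in X]
print(greedy_min_features(X, y))  # -> [0, 2]
```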
1-hop neighbor's text information: Learning indices for schema selection. : In addition to learning new knowledge, a system must be able to learn when the knowledge is likely to be applicable. An index is a piece of information which, when identified in a given situation, triggers the relevant piece of knowledge (or schema) in the system's memory. We discuss the issue of how indices may be learned automatically in the context of a story understanding task, and present a program that can learn new indices for existing explanatory schemas. We discuss two methods by which the system can identify the relevant schema even if the input does not directly match an existing index, and learn a new index to allow it to retrieve this schema more efficiently in the future. 1-hop neighbor's text information: Use of Mental Models for Constraining Index Learning in Experience-Based Design. : The power of the case-based method comes from the ability to retrieve the "right" case when a new problem is specified. This implies that learning the "right" indices to a case before storing it for potential reuse is crucial for the success of the method. A hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary, and learning the right level of generalization. In this paper we show how the use of structure-behavior-function (SBF) models constrains index learning in the context of experience-based design of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design, together with a specification of the task for which the design case might be reused, provides the vocabulary for indexing the design case in memory. We also discuss how the prior design experiences stored in case memory help to determine the level of index generalization. The KRITIK2 system implements and evaluates the model-based method for learning indices to design cases. 1-hop neighbor's text information: Design, Analogy, and Creativity. : Target text information: Innovation in Analogical Design: A Model-Based Approach. : I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
2
Case Based
cora
2634
val
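The index-learning idea in these abstracts (an index is a set of situation features that, when present, triggers a stored schema) can be illustrated with a toy memory: retrieval fires any schema whose index is contained in the current situation, and a resolved miss becomes a new index. This is a hypothetical sketch with invented names, not KRITIK2 or the cited story-understanding program.

```python
class SchemaMemory:
    """Toy index-based schema memory (illustrative; not the cited systems).

    An index is a frozenset of situation features; retrieval fires the
    first schema whose index is contained in the current situation.  On a
    miss that is later resolved (e.g. by model-based reasoning), a new
    index is learned so the same situation retrieves the schema directly.
    """

    def __init__(self):
        self.indices = {}            # frozenset(features) -> schema name

    def retrieve(self, situation):
        for index, schema in self.indices.items():
            if index <= situation:   # all index features are present
                return schema
        return None

    def learn_index(self, situation, schema):
        self.indices[frozenset(situation)] = schema


mem = SchemaMemory()
mem.learn_index({"goal-blocked", "resource-shared"}, "competition-schema")
print(mem.retrieve({"goal-blocked", "resource-shared", "two-agents"}))
print(mem.retrieve({"goal-blocked"}))  # miss -> None, a cue to learn
```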
1-hop neighbor's text information: Evolving globally synchronized cellular automata. : How does an evolutionary process interact with a decentralized, distributed system in order to produce globally coordinated behavior? Using a genetic algorithm (GA) to evolve cellular automata (CAs), we show that the evolution of spontaneous synchronization, one type of emergent coordination, takes advantage of the underlying medium's potential to form embedded particles. The particles, typically phase defects between synchronous regions, are designed by the evolutionary process to resolve frustrations in the global phase. We describe in detail one typical solution discovered by the GA, delineating the discovered synchronization algorithm in terms of embedded particles and their interactions. We also use the particle-level description to analyze the evolutionary sequence by which this solution was discovered. Our results have implications both for understanding emergent collective behavior in natural systems and for the automatic programming of decentralized, spatially extended multiprocessor systems. Target text information: Mechanisms of Emergent Computation in Cellular Automata: We introduce a class of embedded-particle models for describing the emergent computational strategies observed in cellular automata (CAs) that were evolved for performing certain computational tasks. The models are evaluated by comparing their estimated performances with the actual performances of the CAs they model. The results show, via a close quantitative agreement, that the embedded-particle framework captures the main information-processing mechanisms of the emergent computation that arises in these evolved CAs. I provide the content of the target node and its neighbors' information. The relation between the target node and its 1-hop neighbors is 'citation'. The 7 categories are: 0: Rule_Learning 1: Neural_Networks 2: Case_Based 3: Genetic_Algorithms 4: Theory 5: Reinforcement_Learning 6: Probabilistic_Methods Question: Based on the content of the target and neighbors' scientific publications, predict the category ID (0 to 6) for the target node.
3
Genetic Algorithms
cora
552
test
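The evolve-a-CA setup in these two abstracts can be sketched at toy scale: a radius-1 binary CA (an 8-bit rule table) and a mutation-only evolutionary loop scored on a simplified stand-in for the synchronization task, namely driving random initial configurations to a homogeneous state. The cited work evolves radius-3 rules (128-bit tables) with a full GA and a stricter alternating-pattern criterion; everything below is our own down-scaled illustration.

```python
import random

def ca_step(config, rule):
    """One synchronous update of a radius-1 binary CA with periodic boundaries."""
    n = len(config)
    return [rule[(config[i - 1] << 2) | (config[i] << 1) | config[(i + 1) % n]]
            for i in range(n)]

def fitness(rule, n_cells=49, n_ics=20, n_steps=50):
    """Toy stand-in for the synchronization task: the fraction of random
    initial configurations the rule drives to a homogeneous state.
    (Trivial constant rules max this score; the real task demands a
    globally alternating pattern, which is what makes it hard.)"""
    hits = 0
    for _ in range(n_ics):
        c = [random.randint(0, 1) for _ in range(n_cells)]
        for _ in range(n_steps):
            c = ca_step(c, rule)
        hits += len(set(c)) == 1
    return hits / n_ics

# Mutation-only evolutionary loop over 8-bit rule tables.
random.seed(0)
best = [random.randint(0, 1) for _ in range(8)]
best_fit = fitness(best)
for _ in range(100):
    child = [b ^ (random.random() < 0.1) for b in best]  # flip bits w.p. 0.1
    f = fitness(child)
    if f >= best_fit:
        best, best_fit = child, f
print(best, best_fit)
```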